00:00:00.001 Started by upstream project "autotest-per-patch" build number 131109
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.074 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.074 The recommended git tool is: git
00:00:00.075 using credential 00000000-0000-0000-0000-000000000002
00:00:00.076 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.115 Fetching changes from the remote Git repository
00:00:00.117 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.168 Using shallow fetch with depth 1
00:00:00.168 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.168 > git --version # timeout=10
00:00:00.221 > git --version # 'git version 2.39.2'
00:00:00.221 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.256 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.256 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:06.473 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:06.485 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:06.497 Checking out Revision bb1b9bfed281c179b06b3c39bbc702302ccac514 (FETCH_HEAD)
00:00:06.497 > git config core.sparsecheckout # timeout=10
00:00:06.509 > git read-tree -mu HEAD # timeout=10
00:00:06.526 > git checkout -f bb1b9bfed281c179b06b3c39bbc702302ccac514 # timeout=5
00:00:06.544 Commit message: "scripts/kid: add issue 3551"
00:00:06.545 > git rev-list --no-walk bb1b9bfed281c179b06b3c39bbc702302ccac514 # timeout=10
00:00:06.652 [Pipeline] Start of Pipeline
00:00:06.665 [Pipeline] library
00:00:06.667 Loading library shm_lib@master
00:00:06.667 Library shm_lib@master is cached. Copying from home.
00:00:06.682 [Pipeline] node
00:00:06.691 Running on CYP12 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:06.692 [Pipeline] {
00:00:06.702 [Pipeline] catchError
00:00:06.703 [Pipeline] {
00:00:06.715 [Pipeline] wrap
00:00:06.723 [Pipeline] {
00:00:06.731 [Pipeline] stage
00:00:06.733 [Pipeline] { (Prologue)
00:00:06.942 [Pipeline] sh
00:00:07.228 + logger -p user.info -t JENKINS-CI
00:00:07.246 [Pipeline] echo
00:00:07.248 Node: CYP12
00:00:07.257 [Pipeline] sh
00:00:07.558 [Pipeline] setCustomBuildProperty
00:00:07.566 [Pipeline] echo
00:00:07.567 Cleanup processes
00:00:07.570 [Pipeline] sh
00:00:07.852 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.852 3059418 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.865 [Pipeline] sh
00:00:08.150 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.150 ++ grep -v 'sudo pgrep'
00:00:08.150 ++ awk '{print $1}'
00:00:08.150 + sudo kill -9
00:00:08.150 + true
00:00:08.164 [Pipeline] cleanWs
00:00:08.174 [WS-CLEANUP] Deleting project workspace...
00:00:08.174 [WS-CLEANUP] Deferred wipeout is used...
00:00:08.181 [WS-CLEANUP] done
00:00:08.185 [Pipeline] setCustomBuildProperty
00:00:08.196 [Pipeline] sh
00:00:08.480 + sudo git config --global --replace-all safe.directory '*'
00:00:08.562 [Pipeline] httpRequest
00:00:08.928 [Pipeline] echo
00:00:08.930 Sorcerer 10.211.164.101 is alive
00:00:08.940 [Pipeline] retry
00:00:08.943 [Pipeline] {
00:00:08.957 [Pipeline] httpRequest
00:00:08.961 HttpMethod: GET
00:00:08.962 URL: http://10.211.164.101/packages/jbp_bb1b9bfed281c179b06b3c39bbc702302ccac514.tar.gz
00:00:08.962 Sending request to url: http://10.211.164.101/packages/jbp_bb1b9bfed281c179b06b3c39bbc702302ccac514.tar.gz
00:00:08.980 Response Code: HTTP/1.1 200 OK
00:00:08.980 Success: Status code 200 is in the accepted range: 200,404
00:00:08.981 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_bb1b9bfed281c179b06b3c39bbc702302ccac514.tar.gz
00:00:23.839 [Pipeline] }
00:00:23.856 [Pipeline] // retry
00:00:23.863 [Pipeline] sh
00:00:24.151 + tar --no-same-owner -xf jbp_bb1b9bfed281c179b06b3c39bbc702302ccac514.tar.gz
00:00:24.166 [Pipeline] httpRequest
00:00:24.618 [Pipeline] echo
00:00:24.620 Sorcerer 10.211.164.101 is alive
00:00:24.631 [Pipeline] retry
00:00:24.633 [Pipeline] {
00:00:24.647 [Pipeline] httpRequest
00:00:24.652 HttpMethod: GET
00:00:24.652 URL: http://10.211.164.101/packages/spdk_118c273ab00223876ab4db154df39d03fa644c55.tar.gz
00:00:24.653 Sending request to url: http://10.211.164.101/packages/spdk_118c273ab00223876ab4db154df39d03fa644c55.tar.gz
00:00:24.674 Response Code: HTTP/1.1 200 OK
00:00:24.675 Success: Status code 200 is in the accepted range: 200,404
00:00:24.675 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_118c273ab00223876ab4db154df39d03fa644c55.tar.gz
00:00:58.966 [Pipeline] }
00:00:58.984 [Pipeline] // retry
00:00:58.992 [Pipeline] sh
00:00:59.279 + tar --no-same-owner -xf spdk_118c273ab00223876ab4db154df39d03fa644c55.tar.gz
00:01:01.835 [Pipeline] sh
00:01:02.122 + git -C spdk log --oneline -n5
00:01:02.122 118c273ab event: enable changing back to static scheduler
00:01:02.122 7e6d8079b lib/fuse_dispatcher: destruction sequence fixed
00:01:02.122 8dce86055 module/vfu_device/vfu_virtio_fs: EP destruction fixed
00:01:02.122 8af292d89 lib/vfu_tgt: spdk_vfu_endpoint_ops.destruct retries
00:01:02.122 56f409f31 module/vfu_device/vfu_virtio_rpc: log fixed
00:01:02.132 [Pipeline] }
00:01:02.141 [Pipeline] // stage
00:01:02.146 [Pipeline] stage
00:01:02.148 [Pipeline] { (Prepare)
00:01:02.158 [Pipeline] writeFile
00:01:02.169 [Pipeline] sh
00:01:02.450 + logger -p user.info -t JENKINS-CI
00:01:02.462 [Pipeline] sh
00:01:02.748 + logger -p user.info -t JENKINS-CI
00:01:02.759 [Pipeline] sh
00:01:03.044 + cat autorun-spdk.conf
00:01:03.044 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:03.044 SPDK_TEST_NVMF=1
00:01:03.044 SPDK_TEST_NVME_CLI=1
00:01:03.044 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:03.044 SPDK_TEST_NVMF_NICS=e810
00:01:03.044 SPDK_TEST_VFIOUSER=1
00:01:03.044 SPDK_RUN_UBSAN=1
00:01:03.044 NET_TYPE=phy
00:01:03.051 RUN_NIGHTLY=0
00:01:03.056 [Pipeline] readFile
00:01:03.079 [Pipeline] withEnv
00:01:03.081 [Pipeline] {
00:01:03.094 [Pipeline] sh
00:01:03.383 + set -ex
00:01:03.383 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:03.383 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:03.383 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:03.383 ++ SPDK_TEST_NVMF=1
00:01:03.383 ++ SPDK_TEST_NVME_CLI=1
00:01:03.383 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:03.383 ++ SPDK_TEST_NVMF_NICS=e810
00:01:03.383 ++ SPDK_TEST_VFIOUSER=1
00:01:03.383 ++ SPDK_RUN_UBSAN=1
00:01:03.383 ++ NET_TYPE=phy
00:01:03.383 ++ RUN_NIGHTLY=0
00:01:03.383 + case $SPDK_TEST_NVMF_NICS in
00:01:03.383 + DRIVERS=ice
00:01:03.383 + [[ tcp == \r\d\m\a ]]
00:01:03.383 + [[ -n ice ]]
00:01:03.383 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:03.383 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:03.383 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:03.383 rmmod: ERROR: Module irdma is not currently loaded
00:01:03.383 rmmod: ERROR: Module i40iw is not currently loaded
00:01:03.383 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:03.383 + true
00:01:03.383 + for D in $DRIVERS
00:01:03.383 + sudo modprobe ice
00:01:03.383 + exit 0
00:01:03.392 [Pipeline] }
00:01:03.405 [Pipeline] // withEnv
00:01:03.410 [Pipeline] }
00:01:03.423 [Pipeline] // stage
00:01:03.431 [Pipeline] catchError
00:01:03.433 [Pipeline] {
00:01:03.446 [Pipeline] timeout
00:01:03.446 Timeout set to expire in 1 hr 0 min
00:01:03.448 [Pipeline] {
00:01:03.460 [Pipeline] stage
00:01:03.462 [Pipeline] { (Tests)
00:01:03.475 [Pipeline] sh
00:01:03.761 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:03.761 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:03.761 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:03.761 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:03.761 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:03.761 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:03.761 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:03.761 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:03.761 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:03.761 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:03.761 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:03.761 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:03.761 + source /etc/os-release
00:01:03.761 ++ NAME='Fedora Linux'
00:01:03.761 ++ VERSION='39 (Cloud Edition)'
00:01:03.761 ++ ID=fedora
00:01:03.761 ++ VERSION_ID=39
00:01:03.761 ++ VERSION_CODENAME=
00:01:03.761 ++ PLATFORM_ID=platform:f39
00:01:03.761 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:03.761 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:03.761 ++ LOGO=fedora-logo-icon
00:01:03.761 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:03.761 ++ HOME_URL=https://fedoraproject.org/
00:01:03.761 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:03.761 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:03.761 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:03.761 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:03.761 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:03.761 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:03.761 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:03.761 ++ SUPPORT_END=2024-11-12
00:01:03.761 ++ VARIANT='Cloud Edition'
00:01:03.761 ++ VARIANT_ID=cloud
00:01:03.761 + uname -a
00:01:03.761 Linux spdk-cyp-12 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:03.761 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:07.059 Hugepages
00:01:07.059 node hugesize free / total
00:01:07.059 node0 1048576kB 0 / 0
00:01:07.059 node0 2048kB 0 / 0
00:01:07.059 node1 1048576kB 0 / 0
00:01:07.059 node1 2048kB 0 / 0
00:01:07.059
00:01:07.059 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:07.059 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - -
00:01:07.059 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - -
00:01:07.059 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - -
00:01:07.059 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - -
00:01:07.059 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - -
00:01:07.059 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - -
00:01:07.059 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - -
00:01:07.059 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - -
00:01:07.059 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1
00:01:07.059 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - -
00:01:07.059 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - -
00:01:07.059 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - -
00:01:07.059 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - -
00:01:07.059 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - -
00:01:07.059 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - -
00:01:07.059 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - -
00:01:07.059 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - -
00:01:07.059 + rm -f /tmp/spdk-ld-path
00:01:07.059 + source autorun-spdk.conf
00:01:07.059 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:07.059 ++ SPDK_TEST_NVMF=1
00:01:07.059 ++ SPDK_TEST_NVME_CLI=1
00:01:07.059 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:07.059 ++ SPDK_TEST_NVMF_NICS=e810
00:01:07.059 ++ SPDK_TEST_VFIOUSER=1
00:01:07.059 ++ SPDK_RUN_UBSAN=1
00:01:07.059 ++ NET_TYPE=phy
00:01:07.059 ++ RUN_NIGHTLY=0
00:01:07.059 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:07.059 + [[ -n '' ]]
00:01:07.059 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:07.059 + for M in /var/spdk/build-*-manifest.txt
00:01:07.059 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:07.059 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:07.059 + for M in /var/spdk/build-*-manifest.txt
00:01:07.059 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:07.059 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:07.059 + for M in /var/spdk/build-*-manifest.txt
00:01:07.059 + [[ -f
/var/spdk/build-repo-manifest.txt ]]
00:01:07.059 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:07.059 ++ uname
00:01:07.059 + [[ Linux == \L\i\n\u\x ]]
00:01:07.059 + sudo dmesg -T
00:01:07.059 + sudo dmesg --clear
00:01:07.059 + dmesg_pid=3060419
00:01:07.059 + [[ Fedora Linux == FreeBSD ]]
00:01:07.059 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:07.059 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:07.059 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:07.059 + [[ -x /usr/src/fio-static/fio ]]
00:01:07.059 + export FIO_BIN=/usr/src/fio-static/fio
00:01:07.059 + FIO_BIN=/usr/src/fio-static/fio
00:01:07.059 + sudo dmesg -Tw
00:01:07.059 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:07.059 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:07.059 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:07.059 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:07.059 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:07.059 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:07.059 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:07.060 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:07.060 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:07.060 Test configuration:
00:01:07.060 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:07.060 SPDK_TEST_NVMF=1
00:01:07.060 SPDK_TEST_NVME_CLI=1
00:01:07.060 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:07.060 SPDK_TEST_NVMF_NICS=e810
00:01:07.060 SPDK_TEST_VFIOUSER=1
00:01:07.060 SPDK_RUN_UBSAN=1
00:01:07.060 NET_TYPE=phy
00:01:07.060 RUN_NIGHTLY=0
14:14:47 -- common/autotest_common.sh@1690 -- $ [[ n == y ]]
14:14:47 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
14:14:47 -- scripts/common.sh@15 -- $ shopt -s extglob
14:14:47 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
14:14:47 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
14:14:47 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
14:14:47 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
14:14:47 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
14:14:47 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
14:14:47 -- paths/export.sh@5 -- $ export PATH
14:14:47 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
14:14:47 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
14:14:47 -- common/autobuild_common.sh@486 -- $ date +%s
14:14:47 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728908087.XXXXXX
14:14:47 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728908087.eOjKxE
14:14:47 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
14:14:47 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
14:14:47 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
14:14:47 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
14:14:47 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
14:14:47 -- common/autobuild_common.sh@502 -- $ get_config_params
14:14:47 -- common/autotest_common.sh@407 -- $ xtrace_disable
14:14:47 -- common/autotest_common.sh@10 -- $ set +x
14:14:47 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
14:14:47 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
14:14:47 -- pm/common@17 -- $ local monitor
14:14:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
14:14:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
14:14:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
14:14:47 -- pm/common@21 -- $ date +%s
14:14:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
14:14:47 -- pm/common@25 -- $ sleep 1
14:14:47 -- pm/common@21 -- $ date +%s
14:14:47 -- pm/common@21 -- $ date +%s
14:14:47 -- pm/common@21 -- $ date +%s
14:14:47 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728908087
14:14:47 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728908087
14:14:47 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728908087
14:14:47 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728908087
00:01:07.320 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728908087_collect-vmstat.pm.log
00:01:07.320 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728908087_collect-cpu-load.pm.log
00:01:07.320 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728908087_collect-cpu-temp.pm.log
00:01:07.320 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728908087_collect-bmc-pm.bmc.pm.log
00:01:08.263 14:14:48 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:01:08.263 14:14:48 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:08.263 14:14:48 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:08.263 14:14:48 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:08.263 14:14:48 -- spdk/autobuild.sh@16 -- $ date -u
00:01:08.263 Mon Oct 14 12:14:48 PM UTC 2024
00:01:08.263 14:14:48 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:08.263 v25.01-pre-63-g118c273ab
00:01:08.263 14:14:48 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:01:08.263 14:14:48 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:08.263 14:14:48 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:08.263 14:14:48 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:01:08.263 14:14:48 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:01:08.263 14:14:48 -- common/autotest_common.sh@10 -- $ set +x
00:01:08.263 ************************************
00:01:08.263 START TEST ubsan
00:01:08.263 ************************************
00:01:08.263 14:14:48 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:01:08.263 using ubsan
00:01:08.263
00:01:08.263 real 0m0.001s
00:01:08.263 user 0m0.000s
00:01:08.263 sys 0m0.000s
00:01:08.263 14:14:48 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:01:08.263 14:14:48 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:08.263 ************************************
00:01:08.263 END TEST ubsan
************************************
00:01:08.263 14:14:48 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:08.263 14:14:48 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:08.263 14:14:48 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:08.263 14:14:48 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:08.263 14:14:48 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:08.263 14:14:48 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:08.263 14:14:48 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:08.263 14:14:48 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:08.263 14:14:48 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:01:08.524 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:01:08.524 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:08.786 Using 'verbs' RDMA provider
00:01:24.631 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:36.859 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:36.859 Creating mk/config.mk...done.
00:01:36.859 Creating mk/cc.flags.mk...done.
00:01:36.859 Type 'make' to build.
00:01:36.859 14:15:17 -- spdk/autobuild.sh@70 -- $ run_test make make -j144
00:01:36.859 14:15:17 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:01:36.859 14:15:17 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:01:36.859 14:15:17 -- common/autotest_common.sh@10 -- $ set +x
00:01:36.859 ************************************
00:01:36.859 START TEST make
00:01:36.860 ************************************
00:01:36.860 14:15:17 make -- common/autotest_common.sh@1125 -- $ make -j144
00:01:37.120 make[1]: Nothing to be done for 'all'.
00:01:38.501 The Meson build system
00:01:38.501 Version: 1.5.0
00:01:38.501 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:01:38.501 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:38.501 Build type: native build
00:01:38.501 Project name: libvfio-user
00:01:38.501 Project version: 0.0.1
00:01:38.501 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:01:38.501 C linker for the host machine: cc ld.bfd 2.40-14
00:01:38.501 Host machine cpu family: x86_64
00:01:38.501 Host machine cpu: x86_64
00:01:38.501 Run-time dependency threads found: YES
00:01:38.501 Library dl found: YES
00:01:38.501 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:01:38.501 Run-time dependency json-c found: YES 0.17
00:01:38.501 Run-time dependency cmocka found: YES 1.1.7
00:01:38.501 Program pytest-3 found: NO
00:01:38.501 Program flake8 found: NO
00:01:38.501 Program misspell-fixer found: NO
00:01:38.501 Program restructuredtext-lint found: NO
00:01:38.501 Program valgrind found: YES (/usr/bin/valgrind)
00:01:38.501 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:38.501 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:38.501 Compiler for C supports arguments -Wwrite-strings: YES
00:01:38.501 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:38.501 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:01:38.501 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:01:38.501 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:38.501 Build targets in project: 8
00:01:38.501 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:01:38.501 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:01:38.501
00:01:38.501 libvfio-user 0.0.1
00:01:38.501
00:01:38.501 User defined options
00:01:38.501 buildtype : debug
00:01:38.501 default_library: shared
00:01:38.501 libdir : /usr/local/lib
00:01:38.501
00:01:38.501 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:38.760 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:38.760 [1/37] Compiling C object samples/null.p/null.c.o
00:01:38.760 [2/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:01:38.760 [3/37] Compiling C object samples/lspci.p/lspci.c.o
00:01:38.760 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:01:38.760 [5/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:01:38.760 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:01:38.760 [7/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:01:38.760 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:01:38.760 [9/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:01:38.760 [10/37] Compiling C object test/unit_tests.p/mocks.c.o
00:01:38.760 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:01:38.761 [12/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:01:38.761 [13/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:01:38.761 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:01:38.761 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:01:38.761 [16/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:01:38.761 [17/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:01:38.761 [18/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:01:38.761 [19/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:01:38.761 [20/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:01:38.761 [21/37] Compiling C object samples/server.p/server.c.o
00:01:38.761 [22/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:01:38.761 [23/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:01:38.761 [24/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:01:38.761 [25/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:01:38.761 [26/37] Compiling C object samples/client.p/client.c.o
00:01:38.761 [27/37] Linking target samples/client
00:01:39.021 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:01:39.021 [29/37] Linking target lib/libvfio-user.so.0.0.1
00:01:39.021 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:01:39.021 [31/37] Linking target test/unit_tests
00:01:39.021 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:01:39.021 [33/37] Linking target samples/gpio-pci-idio-16
00:01:39.021 [34/37] Linking target samples/null
00:01:39.021 [35/37] Linking target samples/server
00:01:39.021 [36/37] Linking target samples/lspci
00:01:39.021 [37/37] Linking target samples/shadow_ioeventfd_server
00:01:39.282 INFO: autodetecting backend as ninja
00:01:39.282 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:39.282 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:39.543 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:39.543 ninja: no work to do.
00:01:46.128 The Meson build system
00:01:46.128 Version: 1.5.0
00:01:46.128 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:01:46.128 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:01:46.128 Build type: native build
00:01:46.128 Program cat found: YES (/usr/bin/cat)
00:01:46.128 Project name: DPDK
00:01:46.128 Project version: 24.03.0
00:01:46.128 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:01:46.128 C linker for the host machine: cc ld.bfd 2.40-14
00:01:46.128 Host machine cpu family: x86_64
00:01:46.128 Host machine cpu: x86_64
00:01:46.128 Message: ## Building in Developer Mode ##
00:01:46.128 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:46.128 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:46.128 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:46.128 Program python3 found: YES (/usr/bin/python3)
00:01:46.128 Program cat found: YES (/usr/bin/cat)
00:01:46.128 Compiler for C supports arguments -march=native: YES
00:01:46.128 Checking for size of "void *" : 8
00:01:46.128 Checking for size of "void *" : 8 (cached)
00:01:46.128 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:01:46.128 Library m found: YES
00:01:46.128 Library numa found: YES
00:01:46.128 Has header "numaif.h" : YES
00:01:46.128 Library fdt found: NO
00:01:46.128 Library execinfo found: NO
00:01:46.128 Has header "execinfo.h" : YES
00:01:46.128 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:01:46.128 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:46.128 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:46.128 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:46.128 Run-time dependency openssl found: YES 3.1.1
00:01:46.128 Run-time dependency libpcap found: YES 1.10.4
00:01:46.128 Has header "pcap.h" with dependency libpcap: YES
00:01:46.128 Compiler for C supports arguments -Wcast-qual: YES
00:01:46.128 Compiler for C supports arguments -Wdeprecated: YES
00:01:46.128 Compiler for C supports arguments -Wformat: YES
00:01:46.129 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:46.129 Compiler for C supports arguments -Wformat-security: NO
00:01:46.129 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:46.129 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:46.129 Compiler for C supports arguments -Wnested-externs: YES
00:01:46.129 Compiler for C supports arguments -Wold-style-definition: YES
00:01:46.129 Compiler for C supports arguments -Wpointer-arith: YES
00:01:46.129 Compiler for C supports arguments -Wsign-compare: YES
00:01:46.129 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:46.129 Compiler for C supports arguments -Wundef: YES
00:01:46.129 Compiler for C supports arguments -Wwrite-strings: YES
00:01:46.129 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:46.129 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:46.129 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:46.129 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:46.129 Program objdump found: YES (/usr/bin/objdump)
00:01:46.129 Compiler for C supports arguments -mavx512f: YES
00:01:46.129 Checking if "AVX512 checking" compiles: YES
00:01:46.129 Fetching value of define "__SSE4_2__" : 1
00:01:46.129 Fetching value of define "__AES__" : 1
00:01:46.129 Fetching value of define "__AVX__" : 1
00:01:46.129 Fetching value of define "__AVX2__" : 1
00:01:46.129 Fetching value of define "__AVX512BW__" : 1
00:01:46.129 Fetching value of define "__AVX512CD__" : 1
00:01:46.129 Fetching value of define "__AVX512DQ__" : 1
00:01:46.129 Fetching value of define "__AVX512F__" : 1
00:01:46.129 Fetching value of define "__AVX512VL__" : 1
00:01:46.129 Fetching value of define "__PCLMUL__" : 1
00:01:46.129 Fetching value of define "__RDRND__" : 1
00:01:46.129 Fetching value of define "__RDSEED__" : 1
00:01:46.129 Fetching value of define "__VPCLMULQDQ__" : 1
00:01:46.129 Fetching value of define "__znver1__" : (undefined)
00:01:46.129 Fetching value of define "__znver2__" : (undefined)
00:01:46.129 Fetching value of define "__znver3__" : (undefined)
00:01:46.129 Fetching value of define "__znver4__" : (undefined)
00:01:46.129 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:46.129 Message: lib/log: Defining dependency "log"
00:01:46.129 Message: lib/kvargs: Defining dependency "kvargs"
00:01:46.129 Message: lib/telemetry: Defining dependency "telemetry"
00:01:46.129 Checking for function "getentropy" : NO
00:01:46.129 Message: lib/eal: Defining dependency "eal"
00:01:46.129 Message: lib/ring: Defining dependency "ring"
00:01:46.129 Message: lib/rcu: Defining dependency "rcu"
00:01:46.129 Message: lib/mempool: Defining dependency "mempool"
00:01:46.129 Message: lib/mbuf: Defining dependency "mbuf"
00:01:46.129 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:46.129 Fetching value of define "__AVX512F__" : 1 (cached)
00:01:46.129 Fetching value of define "__AVX512BW__" : 1 (cached)
00:01:46.129 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:01:46.129 Fetching value of define "__AVX512VL__" : 1 (cached)
00:01:46.129 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:01:46.129 Compiler for C supports arguments -mpclmul: YES
00:01:46.129 Compiler for C supports arguments -maes: YES
00:01:46.129 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:46.129 Compiler for C supports arguments -mavx512bw: YES
00:01:46.129 Compiler for C supports arguments -mavx512dq: YES
00:01:46.129 Compiler for C supports arguments -mavx512vl: YES
00:01:46.129 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:46.129 Compiler for C supports arguments -mavx2: YES
00:01:46.129 Compiler for C supports arguments -mavx: YES
00:01:46.129 Message: lib/net: Defining dependency "net"
00:01:46.129 Message: lib/meter: Defining dependency "meter"
00:01:46.129 Message: lib/ethdev: Defining dependency "ethdev"
00:01:46.129 Message: lib/pci: Defining dependency "pci"
00:01:46.129 Message: lib/cmdline: Defining dependency "cmdline"
00:01:46.129 Message: lib/hash: Defining dependency "hash"
00:01:46.129 Message: lib/timer: Defining dependency "timer"
00:01:46.129 Message: lib/compressdev: Defining dependency "compressdev"
00:01:46.129 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:46.129 Message: lib/dmadev: Defining dependency "dmadev"
00:01:46.129 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:46.129 Message: lib/power: Defining dependency "power"
00:01:46.129 Message: lib/reorder: Defining dependency "reorder"
00:01:46.129 Message: lib/security: Defining dependency "security"
00:01:46.129 Has header "linux/userfaultfd.h" : YES
00:01:46.129 Has header "linux/vduse.h" : YES
00:01:46.129 Message: lib/vhost: Defining dependency "vhost"
00:01:46.129 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:46.129 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:46.129 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:46.129 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:46.129 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:01:46.129 Message:
Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:46.129 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:46.129 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:46.129 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:46.129 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:46.129 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:46.129 Configuring doxy-api-html.conf using configuration 00:01:46.129 Configuring doxy-api-man.conf using configuration 00:01:46.129 Program mandb found: YES (/usr/bin/mandb) 00:01:46.129 Program sphinx-build found: NO 00:01:46.129 Configuring rte_build_config.h using configuration 00:01:46.129 Message: 00:01:46.129 ================= 00:01:46.129 Applications Enabled 00:01:46.129 ================= 00:01:46.129 00:01:46.129 apps: 00:01:46.129 00:01:46.129 00:01:46.129 Message: 00:01:46.129 ================= 00:01:46.129 Libraries Enabled 00:01:46.129 ================= 00:01:46.129 00:01:46.129 libs: 00:01:46.129 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:46.129 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:46.129 cryptodev, dmadev, power, reorder, security, vhost, 00:01:46.129 00:01:46.129 Message: 00:01:46.129 =============== 00:01:46.129 Drivers Enabled 00:01:46.129 =============== 00:01:46.129 00:01:46.129 common: 00:01:46.129 00:01:46.129 bus: 00:01:46.129 pci, vdev, 00:01:46.129 mempool: 00:01:46.129 ring, 00:01:46.129 dma: 00:01:46.129 00:01:46.129 net: 00:01:46.129 00:01:46.129 crypto: 00:01:46.129 00:01:46.129 compress: 00:01:46.129 00:01:46.129 vdpa: 00:01:46.129 00:01:46.129 00:01:46.129 Message: 00:01:46.129 ================= 00:01:46.129 Content Skipped 00:01:46.129 ================= 00:01:46.129 00:01:46.129 apps: 00:01:46.129 dumpcap: explicitly disabled via build config 00:01:46.129 graph: explicitly disabled via build config 00:01:46.129 
pdump: explicitly disabled via build config 00:01:46.129 proc-info: explicitly disabled via build config 00:01:46.129 test-acl: explicitly disabled via build config 00:01:46.129 test-bbdev: explicitly disabled via build config 00:01:46.129 test-cmdline: explicitly disabled via build config 00:01:46.129 test-compress-perf: explicitly disabled via build config 00:01:46.129 test-crypto-perf: explicitly disabled via build config 00:01:46.129 test-dma-perf: explicitly disabled via build config 00:01:46.129 test-eventdev: explicitly disabled via build config 00:01:46.129 test-fib: explicitly disabled via build config 00:01:46.129 test-flow-perf: explicitly disabled via build config 00:01:46.129 test-gpudev: explicitly disabled via build config 00:01:46.129 test-mldev: explicitly disabled via build config 00:01:46.129 test-pipeline: explicitly disabled via build config 00:01:46.129 test-pmd: explicitly disabled via build config 00:01:46.129 test-regex: explicitly disabled via build config 00:01:46.129 test-sad: explicitly disabled via build config 00:01:46.129 test-security-perf: explicitly disabled via build config 00:01:46.129 00:01:46.129 libs: 00:01:46.129 argparse: explicitly disabled via build config 00:01:46.129 metrics: explicitly disabled via build config 00:01:46.129 acl: explicitly disabled via build config 00:01:46.129 bbdev: explicitly disabled via build config 00:01:46.129 bitratestats: explicitly disabled via build config 00:01:46.129 bpf: explicitly disabled via build config 00:01:46.129 cfgfile: explicitly disabled via build config 00:01:46.129 distributor: explicitly disabled via build config 00:01:46.129 efd: explicitly disabled via build config 00:01:46.129 eventdev: explicitly disabled via build config 00:01:46.129 dispatcher: explicitly disabled via build config 00:01:46.129 gpudev: explicitly disabled via build config 00:01:46.129 gro: explicitly disabled via build config 00:01:46.129 gso: explicitly disabled via build config 00:01:46.129 ip_frag: 
explicitly disabled via build config 00:01:46.129 jobstats: explicitly disabled via build config 00:01:46.129 latencystats: explicitly disabled via build config 00:01:46.129 lpm: explicitly disabled via build config 00:01:46.129 member: explicitly disabled via build config 00:01:46.129 pcapng: explicitly disabled via build config 00:01:46.129 rawdev: explicitly disabled via build config 00:01:46.129 regexdev: explicitly disabled via build config 00:01:46.129 mldev: explicitly disabled via build config 00:01:46.129 rib: explicitly disabled via build config 00:01:46.130 sched: explicitly disabled via build config 00:01:46.130 stack: explicitly disabled via build config 00:01:46.130 ipsec: explicitly disabled via build config 00:01:46.130 pdcp: explicitly disabled via build config 00:01:46.130 fib: explicitly disabled via build config 00:01:46.130 port: explicitly disabled via build config 00:01:46.130 pdump: explicitly disabled via build config 00:01:46.130 table: explicitly disabled via build config 00:01:46.130 pipeline: explicitly disabled via build config 00:01:46.130 graph: explicitly disabled via build config 00:01:46.130 node: explicitly disabled via build config 00:01:46.130 00:01:46.130 drivers: 00:01:46.130 common/cpt: not in enabled drivers build config 00:01:46.130 common/dpaax: not in enabled drivers build config 00:01:46.130 common/iavf: not in enabled drivers build config 00:01:46.130 common/idpf: not in enabled drivers build config 00:01:46.130 common/ionic: not in enabled drivers build config 00:01:46.130 common/mvep: not in enabled drivers build config 00:01:46.130 common/octeontx: not in enabled drivers build config 00:01:46.130 bus/auxiliary: not in enabled drivers build config 00:01:46.130 bus/cdx: not in enabled drivers build config 00:01:46.130 bus/dpaa: not in enabled drivers build config 00:01:46.130 bus/fslmc: not in enabled drivers build config 00:01:46.130 bus/ifpga: not in enabled drivers build config 00:01:46.130 bus/platform: not in 
enabled drivers build config 00:01:46.130 bus/uacce: not in enabled drivers build config 00:01:46.130 bus/vmbus: not in enabled drivers build config 00:01:46.130 common/cnxk: not in enabled drivers build config 00:01:46.130 common/mlx5: not in enabled drivers build config 00:01:46.130 common/nfp: not in enabled drivers build config 00:01:46.130 common/nitrox: not in enabled drivers build config 00:01:46.130 common/qat: not in enabled drivers build config 00:01:46.130 common/sfc_efx: not in enabled drivers build config 00:01:46.130 mempool/bucket: not in enabled drivers build config 00:01:46.130 mempool/cnxk: not in enabled drivers build config 00:01:46.130 mempool/dpaa: not in enabled drivers build config 00:01:46.130 mempool/dpaa2: not in enabled drivers build config 00:01:46.130 mempool/octeontx: not in enabled drivers build config 00:01:46.130 mempool/stack: not in enabled drivers build config 00:01:46.130 dma/cnxk: not in enabled drivers build config 00:01:46.130 dma/dpaa: not in enabled drivers build config 00:01:46.130 dma/dpaa2: not in enabled drivers build config 00:01:46.130 dma/hisilicon: not in enabled drivers build config 00:01:46.130 dma/idxd: not in enabled drivers build config 00:01:46.130 dma/ioat: not in enabled drivers build config 00:01:46.130 dma/skeleton: not in enabled drivers build config 00:01:46.130 net/af_packet: not in enabled drivers build config 00:01:46.130 net/af_xdp: not in enabled drivers build config 00:01:46.130 net/ark: not in enabled drivers build config 00:01:46.130 net/atlantic: not in enabled drivers build config 00:01:46.130 net/avp: not in enabled drivers build config 00:01:46.130 net/axgbe: not in enabled drivers build config 00:01:46.130 net/bnx2x: not in enabled drivers build config 00:01:46.130 net/bnxt: not in enabled drivers build config 00:01:46.130 net/bonding: not in enabled drivers build config 00:01:46.130 net/cnxk: not in enabled drivers build config 00:01:46.130 net/cpfl: not in enabled drivers build config 
00:01:46.130 net/cxgbe: not in enabled drivers build config 00:01:46.130 net/dpaa: not in enabled drivers build config 00:01:46.130 net/dpaa2: not in enabled drivers build config 00:01:46.130 net/e1000: not in enabled drivers build config 00:01:46.130 net/ena: not in enabled drivers build config 00:01:46.130 net/enetc: not in enabled drivers build config 00:01:46.130 net/enetfec: not in enabled drivers build config 00:01:46.130 net/enic: not in enabled drivers build config 00:01:46.130 net/failsafe: not in enabled drivers build config 00:01:46.130 net/fm10k: not in enabled drivers build config 00:01:46.130 net/gve: not in enabled drivers build config 00:01:46.130 net/hinic: not in enabled drivers build config 00:01:46.130 net/hns3: not in enabled drivers build config 00:01:46.130 net/i40e: not in enabled drivers build config 00:01:46.130 net/iavf: not in enabled drivers build config 00:01:46.130 net/ice: not in enabled drivers build config 00:01:46.130 net/idpf: not in enabled drivers build config 00:01:46.130 net/igc: not in enabled drivers build config 00:01:46.130 net/ionic: not in enabled drivers build config 00:01:46.130 net/ipn3ke: not in enabled drivers build config 00:01:46.130 net/ixgbe: not in enabled drivers build config 00:01:46.130 net/mana: not in enabled drivers build config 00:01:46.130 net/memif: not in enabled drivers build config 00:01:46.130 net/mlx4: not in enabled drivers build config 00:01:46.130 net/mlx5: not in enabled drivers build config 00:01:46.130 net/mvneta: not in enabled drivers build config 00:01:46.130 net/mvpp2: not in enabled drivers build config 00:01:46.130 net/netvsc: not in enabled drivers build config 00:01:46.130 net/nfb: not in enabled drivers build config 00:01:46.130 net/nfp: not in enabled drivers build config 00:01:46.130 net/ngbe: not in enabled drivers build config 00:01:46.130 net/null: not in enabled drivers build config 00:01:46.130 net/octeontx: not in enabled drivers build config 00:01:46.130 net/octeon_ep: not 
in enabled drivers build config 00:01:46.130 net/pcap: not in enabled drivers build config 00:01:46.130 net/pfe: not in enabled drivers build config 00:01:46.130 net/qede: not in enabled drivers build config 00:01:46.130 net/ring: not in enabled drivers build config 00:01:46.130 net/sfc: not in enabled drivers build config 00:01:46.130 net/softnic: not in enabled drivers build config 00:01:46.130 net/tap: not in enabled drivers build config 00:01:46.130 net/thunderx: not in enabled drivers build config 00:01:46.130 net/txgbe: not in enabled drivers build config 00:01:46.130 net/vdev_netvsc: not in enabled drivers build config 00:01:46.130 net/vhost: not in enabled drivers build config 00:01:46.130 net/virtio: not in enabled drivers build config 00:01:46.130 net/vmxnet3: not in enabled drivers build config 00:01:46.130 raw/*: missing internal dependency, "rawdev" 00:01:46.130 crypto/armv8: not in enabled drivers build config 00:01:46.130 crypto/bcmfs: not in enabled drivers build config 00:01:46.130 crypto/caam_jr: not in enabled drivers build config 00:01:46.130 crypto/ccp: not in enabled drivers build config 00:01:46.130 crypto/cnxk: not in enabled drivers build config 00:01:46.130 crypto/dpaa_sec: not in enabled drivers build config 00:01:46.130 crypto/dpaa2_sec: not in enabled drivers build config 00:01:46.130 crypto/ipsec_mb: not in enabled drivers build config 00:01:46.130 crypto/mlx5: not in enabled drivers build config 00:01:46.130 crypto/mvsam: not in enabled drivers build config 00:01:46.130 crypto/nitrox: not in enabled drivers build config 00:01:46.130 crypto/null: not in enabled drivers build config 00:01:46.130 crypto/octeontx: not in enabled drivers build config 00:01:46.130 crypto/openssl: not in enabled drivers build config 00:01:46.130 crypto/scheduler: not in enabled drivers build config 00:01:46.130 crypto/uadk: not in enabled drivers build config 00:01:46.130 crypto/virtio: not in enabled drivers build config 00:01:46.130 compress/isal: not in 
enabled drivers build config 00:01:46.130 compress/mlx5: not in enabled drivers build config 00:01:46.130 compress/nitrox: not in enabled drivers build config 00:01:46.130 compress/octeontx: not in enabled drivers build config 00:01:46.130 compress/zlib: not in enabled drivers build config 00:01:46.130 regex/*: missing internal dependency, "regexdev" 00:01:46.130 ml/*: missing internal dependency, "mldev" 00:01:46.130 vdpa/ifc: not in enabled drivers build config 00:01:46.130 vdpa/mlx5: not in enabled drivers build config 00:01:46.130 vdpa/nfp: not in enabled drivers build config 00:01:46.130 vdpa/sfc: not in enabled drivers build config 00:01:46.130 event/*: missing internal dependency, "eventdev" 00:01:46.130 baseband/*: missing internal dependency, "bbdev" 00:01:46.130 gpu/*: missing internal dependency, "gpudev" 00:01:46.130 00:01:46.130 00:01:46.130 Build targets in project: 84 00:01:46.130 00:01:46.130 DPDK 24.03.0 00:01:46.130 00:01:46.130 User defined options 00:01:46.130 buildtype : debug 00:01:46.130 default_library : shared 00:01:46.130 libdir : lib 00:01:46.130 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:46.130 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:46.130 c_link_args : 00:01:46.130 cpu_instruction_set: native 00:01:46.130 disable_apps : test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf 00:01:46.130 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro 00:01:46.130 enable_docs : false 00:01:46.130 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:46.130 enable_kmods : 
false 00:01:46.130 max_lcores : 128 00:01:46.130 tests : false 00:01:46.130 00:01:46.130 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:46.130 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:46.397 [1/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:46.397 [2/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:46.397 [3/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:46.397 [4/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:46.397 [5/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:46.397 [6/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:46.397 [7/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:46.397 [8/267] Linking static target lib/librte_kvargs.a 00:01:46.397 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:46.397 [10/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:46.397 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:46.397 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:46.397 [13/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:46.397 [14/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:46.397 [15/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:46.397 [16/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:46.397 [17/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:46.397 [18/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:46.397 [19/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:46.397 [20/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:46.397 
[21/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:46.397 [22/267] Linking static target lib/librte_log.a 00:01:46.397 [23/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:46.397 [24/267] Linking static target lib/librte_pci.a 00:01:46.397 [25/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:46.397 [26/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:46.397 [27/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:46.656 [28/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:46.656 [29/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:46.656 [30/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:46.656 [31/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:46.656 [32/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:46.656 [33/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:46.656 [34/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:46.656 [35/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:46.656 [36/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:46.656 [37/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:46.656 [38/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:46.656 [39/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.656 [40/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.656 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:46.656 [42/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:46.917 [43/267] Compiling C object 
lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:46.917 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:46.917 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:46.917 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:46.917 [47/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:46.917 [48/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:46.917 [49/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:46.917 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:46.917 [51/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:46.917 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:46.917 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:46.917 [54/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:46.917 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:46.917 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:46.917 [57/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:46.917 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:46.917 [59/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:46.917 [60/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:46.917 [61/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:46.917 [62/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:46.917 [63/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:46.917 [64/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:46.917 [65/267] Compiling C object 
lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:46.917 [66/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:46.917 [67/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:46.917 [68/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:46.917 [69/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:46.917 [70/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:46.917 [71/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:46.917 [72/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:46.917 [73/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:46.917 [74/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:46.917 [75/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:46.917 [76/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:46.918 [77/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:46.918 [78/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:46.918 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:46.918 [80/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:46.918 [81/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:46.918 [82/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:46.918 [83/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:46.918 [84/267] Linking static target lib/librte_meter.a 00:01:46.918 [85/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:46.918 [86/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:46.918 [87/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:46.918 [88/267] Compiling C 
object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:46.918 [89/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:46.918 [90/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:46.918 [91/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:46.918 [92/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:46.918 [93/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:46.918 [94/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:46.918 [95/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:46.918 [96/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:46.918 [97/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:46.918 [98/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:46.918 [99/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:46.918 [100/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:01:46.918 [101/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:46.918 [102/267] Linking static target lib/librte_timer.a 00:01:46.918 [103/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:46.918 [104/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:46.918 [105/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:46.918 [106/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:46.918 [107/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:46.918 [108/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:46.918 [109/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:46.918 [110/267] Compiling C object 
lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:46.918 [111/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:46.918 [112/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:46.918 [113/267] Linking static target lib/librte_telemetry.a 00:01:46.918 [114/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:46.918 [115/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:46.918 [116/267] Linking static target lib/librte_ring.a 00:01:46.918 [117/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:46.918 [118/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:46.918 [119/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:46.918 [120/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:46.918 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:46.918 [122/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:46.918 [123/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:46.918 [124/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:46.918 [125/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:46.918 [126/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:46.918 [127/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:46.918 [128/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:46.918 [129/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:46.918 [130/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:46.918 [131/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:46.918 [132/267] Compiling C object 
lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:46.918 [133/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:46.918 [134/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:46.918 [135/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:46.918 [136/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:46.918 [137/267] Linking static target lib/librte_cmdline.a 00:01:46.918 [138/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.918 [139/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:46.918 [140/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:46.918 [141/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:46.918 [142/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:46.918 [143/267] Linking static target lib/librte_compressdev.a 00:01:46.918 [144/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:46.918 [145/267] Linking static target lib/librte_dmadev.a 00:01:47.178 [146/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:47.178 [147/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:47.178 [148/267] Linking target lib/librte_log.so.24.1 00:01:47.178 [149/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:47.178 [150/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:47.178 [151/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:47.178 [152/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:47.178 [153/267] Linking static target lib/librte_reorder.a 00:01:47.178 [154/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:47.178 [155/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 
00:01:47.178 [156/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:47.178 [157/267] Linking static target lib/librte_mempool.a 00:01:47.178 [158/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:47.178 [159/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:47.178 [160/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:47.178 [161/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:47.178 [162/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:47.178 [163/267] Linking static target lib/librte_net.a 00:01:47.178 [164/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:47.178 [165/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:47.178 [166/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:47.178 [167/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:47.178 [168/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:47.178 [169/267] Linking static target lib/librte_eal.a 00:01:47.178 [170/267] Linking static target lib/librte_rcu.a 00:01:47.178 [171/267] Linking static target lib/librte_power.a 00:01:47.178 [172/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:47.178 [173/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:47.178 [174/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:47.178 [175/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:47.178 [176/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:47.178 [177/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:47.178 [178/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:47.178 [179/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:47.178 [180/267] Linking 
static target drivers/libtmp_rte_bus_pci.a 00:01:47.178 [181/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:47.178 [182/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.178 [183/267] Linking static target lib/librte_security.a 00:01:47.178 [184/267] Linking static target lib/librte_mbuf.a 00:01:47.178 [185/267] Linking target lib/librte_kvargs.so.24.1 00:01:47.178 [186/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:47.178 [187/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:47.178 [188/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:47.178 [189/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:47.178 [190/267] Linking static target drivers/librte_bus_vdev.a 00:01:47.440 [191/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.440 [192/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:47.440 [193/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:47.440 [194/267] Linking static target lib/librte_hash.a 00:01:47.440 [195/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:47.440 [196/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:47.440 [197/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.440 [198/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:47.440 [199/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:47.440 [200/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:47.440 [201/267] Linking static target drivers/librte_mempool_ring.a 00:01:47.440 [202/267] Compiling C object 
drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:47.440 [203/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:47.440 [204/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:47.440 [205/267] Linking static target drivers/librte_bus_pci.a 00:01:47.440 [206/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.440 [207/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:47.700 [208/267] Linking static target lib/librte_cryptodev.a 00:01:47.700 [209/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.700 [210/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.700 [211/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.700 [212/267] Linking target lib/librte_telemetry.so.24.1 00:01:47.700 [213/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.700 [214/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.700 [215/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:47.700 [216/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.960 [217/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.960 [218/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:47.960 [219/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:47.960 [220/267] Linking static target lib/librte_ethdev.a 00:01:48.221 [221/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.221 [222/267] Generating lib/mbuf.sym_chk with a custom 
command (wrapped by meson to capture output) 00:01:48.221 [223/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.221 [224/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.482 [225/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.482 [226/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.482 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:48.482 [228/267] Linking static target lib/librte_vhost.a 00:01:49.865 [229/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.436 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.016 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.398 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.398 [233/267] Linking target lib/librte_eal.so.24.1 00:01:58.398 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:58.398 [235/267] Linking target lib/librte_dmadev.so.24.1 00:01:58.398 [236/267] Linking target lib/librte_ring.so.24.1 00:01:58.398 [237/267] Linking target lib/librte_pci.so.24.1 00:01:58.398 [238/267] Linking target lib/librte_meter.so.24.1 00:01:58.398 [239/267] Linking target lib/librte_timer.so.24.1 00:01:58.398 [240/267] Linking target drivers/librte_bus_vdev.so.24.1 00:01:58.658 [241/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:58.658 [242/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:58.658 [243/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:58.658 [244/267] Generating symbol file 
lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:58.658 [245/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:58.658 [246/267] Linking target drivers/librte_bus_pci.so.24.1 00:01:58.658 [247/267] Linking target lib/librte_rcu.so.24.1 00:01:58.658 [248/267] Linking target lib/librte_mempool.so.24.1 00:01:58.658 [249/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:58.658 [250/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:58.658 [251/267] Linking target lib/librte_mbuf.so.24.1 00:01:58.658 [252/267] Linking target drivers/librte_mempool_ring.so.24.1 00:01:58.918 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:58.918 [254/267] Linking target lib/librte_net.so.24.1 00:01:58.918 [255/267] Linking target lib/librte_compressdev.so.24.1 00:01:58.918 [256/267] Linking target lib/librte_reorder.so.24.1 00:01:58.918 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:01:59.178 [258/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:59.178 [259/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:59.178 [260/267] Linking target lib/librte_cmdline.so.24.1 00:01:59.178 [261/267] Linking target lib/librte_hash.so.24.1 00:01:59.178 [262/267] Linking target lib/librte_security.so.24.1 00:01:59.178 [263/267] Linking target lib/librte_ethdev.so.24.1 00:01:59.178 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:59.178 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:59.437 [266/267] Linking target lib/librte_power.so.24.1 00:01:59.437 [267/267] Linking target lib/librte_vhost.so.24.1 00:01:59.437 INFO: autodetecting backend as ninja 00:01:59.437 INFO: calculating backend command to run: /usr/local/bin/ninja -C 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:02:03.636 CC lib/log/log.o 00:02:03.636 CC lib/log/log_flags.o 00:02:03.636 CC lib/log/log_deprecated.o 00:02:03.636 CC lib/ut/ut.o 00:02:03.636 CC lib/ut_mock/mock.o 00:02:03.636 LIB libspdk_log.a 00:02:03.636 LIB libspdk_ut.a 00:02:03.636 LIB libspdk_ut_mock.a 00:02:03.636 SO libspdk_ut.so.2.0 00:02:03.636 SO libspdk_log.so.7.1 00:02:03.636 SO libspdk_ut_mock.so.6.0 00:02:03.636 SYMLINK libspdk_ut.so 00:02:03.636 SYMLINK libspdk_log.so 00:02:03.636 SYMLINK libspdk_ut_mock.so 00:02:03.636 CC lib/util/base64.o 00:02:03.636 CC lib/util/bit_array.o 00:02:03.636 CC lib/util/cpuset.o 00:02:03.636 CC lib/util/crc16.o 00:02:03.636 CC lib/util/crc32.o 00:02:03.636 CC lib/util/crc32c.o 00:02:03.636 CC lib/util/crc32_ieee.o 00:02:03.636 CC lib/dma/dma.o 00:02:03.636 CC lib/util/crc64.o 00:02:03.636 CC lib/util/dif.o 00:02:03.636 CC lib/util/fd.o 00:02:03.636 CC lib/util/fd_group.o 00:02:03.636 CC lib/util/file.o 00:02:03.636 CC lib/util/hexlify.o 00:02:03.636 CC lib/util/iov.o 00:02:03.636 CC lib/util/math.o 00:02:03.636 CC lib/util/net.o 00:02:03.636 CC lib/util/pipe.o 00:02:03.636 CXX lib/trace_parser/trace.o 00:02:03.636 CC lib/ioat/ioat.o 00:02:03.636 CC lib/util/strerror_tls.o 00:02:03.636 CC lib/util/string.o 00:02:03.636 CC lib/util/uuid.o 00:02:03.636 CC lib/util/xor.o 00:02:03.636 CC lib/util/zipf.o 00:02:03.636 CC lib/util/md5.o 00:02:03.897 CC lib/vfio_user/host/vfio_user_pci.o 00:02:03.897 CC lib/vfio_user/host/vfio_user.o 00:02:03.897 LIB libspdk_dma.a 00:02:03.897 SO libspdk_dma.so.5.0 00:02:03.897 LIB libspdk_ioat.a 00:02:03.897 SO libspdk_ioat.so.7.0 00:02:03.897 SYMLINK libspdk_dma.so 00:02:04.157 SYMLINK libspdk_ioat.so 00:02:04.157 LIB libspdk_vfio_user.a 00:02:04.157 SO libspdk_vfio_user.so.5.0 00:02:04.157 LIB libspdk_util.a 00:02:04.157 SYMLINK libspdk_vfio_user.so 00:02:04.157 SO libspdk_util.so.10.0 00:02:04.417 SYMLINK libspdk_util.so 00:02:04.417 LIB libspdk_trace_parser.a 
00:02:04.417 SO libspdk_trace_parser.so.6.0 00:02:04.681 SYMLINK libspdk_trace_parser.so 00:02:04.681 CC lib/rdma_provider/common.o 00:02:04.681 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:04.681 CC lib/json/json_parse.o 00:02:04.681 CC lib/json/json_util.o 00:02:04.681 CC lib/json/json_write.o 00:02:04.681 CC lib/env_dpdk/pci.o 00:02:04.681 CC lib/env_dpdk/env.o 00:02:04.681 CC lib/env_dpdk/memory.o 00:02:04.681 CC lib/vmd/vmd.o 00:02:04.681 CC lib/rdma_utils/rdma_utils.o 00:02:04.681 CC lib/vmd/led.o 00:02:04.681 CC lib/idxd/idxd.o 00:02:04.681 CC lib/env_dpdk/init.o 00:02:04.681 CC lib/env_dpdk/threads.o 00:02:04.681 CC lib/idxd/idxd_user.o 00:02:04.681 CC lib/conf/conf.o 00:02:04.681 CC lib/env_dpdk/pci_ioat.o 00:02:04.681 CC lib/idxd/idxd_kernel.o 00:02:04.681 CC lib/env_dpdk/pci_virtio.o 00:02:04.681 CC lib/env_dpdk/pci_vmd.o 00:02:04.681 CC lib/env_dpdk/pci_idxd.o 00:02:04.681 CC lib/env_dpdk/pci_event.o 00:02:04.681 CC lib/env_dpdk/sigbus_handler.o 00:02:04.681 CC lib/env_dpdk/pci_dpdk.o 00:02:04.681 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:04.681 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:04.942 LIB libspdk_rdma_provider.a 00:02:04.942 SO libspdk_rdma_provider.so.6.0 00:02:04.942 LIB libspdk_conf.a 00:02:04.942 LIB libspdk_rdma_utils.a 00:02:04.942 SYMLINK libspdk_rdma_provider.so 00:02:04.942 LIB libspdk_json.a 00:02:04.942 SO libspdk_conf.so.6.0 00:02:04.942 SO libspdk_rdma_utils.so.1.0 00:02:05.202 SO libspdk_json.so.6.0 00:02:05.202 SYMLINK libspdk_conf.so 00:02:05.202 SYMLINK libspdk_rdma_utils.so 00:02:05.202 SYMLINK libspdk_json.so 00:02:05.202 LIB libspdk_idxd.a 00:02:05.202 LIB libspdk_vmd.a 00:02:05.464 SO libspdk_idxd.so.12.1 00:02:05.464 SO libspdk_vmd.so.6.0 00:02:05.464 SYMLINK libspdk_idxd.so 00:02:05.464 SYMLINK libspdk_vmd.so 00:02:05.464 CC lib/jsonrpc/jsonrpc_server.o 00:02:05.464 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:05.464 CC lib/jsonrpc/jsonrpc_client.o 00:02:05.464 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:05.725 LIB 
libspdk_jsonrpc.a 00:02:05.725 SO libspdk_jsonrpc.so.6.0 00:02:05.986 SYMLINK libspdk_jsonrpc.so 00:02:05.986 LIB libspdk_env_dpdk.a 00:02:05.986 SO libspdk_env_dpdk.so.15.0 00:02:06.246 SYMLINK libspdk_env_dpdk.so 00:02:06.246 CC lib/rpc/rpc.o 00:02:06.506 LIB libspdk_rpc.a 00:02:06.506 SO libspdk_rpc.so.6.0 00:02:06.506 SYMLINK libspdk_rpc.so 00:02:07.077 CC lib/keyring/keyring.o 00:02:07.077 CC lib/keyring/keyring_rpc.o 00:02:07.077 CC lib/trace/trace.o 00:02:07.077 CC lib/trace/trace_flags.o 00:02:07.077 CC lib/trace/trace_rpc.o 00:02:07.077 CC lib/notify/notify.o 00:02:07.077 CC lib/notify/notify_rpc.o 00:02:07.077 LIB libspdk_notify.a 00:02:07.077 SO libspdk_notify.so.6.0 00:02:07.077 LIB libspdk_keyring.a 00:02:07.077 LIB libspdk_trace.a 00:02:07.336 SO libspdk_keyring.so.2.0 00:02:07.336 SYMLINK libspdk_notify.so 00:02:07.336 SO libspdk_trace.so.11.0 00:02:07.336 SYMLINK libspdk_keyring.so 00:02:07.336 SYMLINK libspdk_trace.so 00:02:07.596 CC lib/thread/thread.o 00:02:07.596 CC lib/thread/iobuf.o 00:02:07.596 CC lib/sock/sock_rpc.o 00:02:07.596 CC lib/sock/sock.o 00:02:08.167 LIB libspdk_sock.a 00:02:08.167 SO libspdk_sock.so.10.0 00:02:08.167 SYMLINK libspdk_sock.so 00:02:08.427 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:08.427 CC lib/nvme/nvme_ctrlr.o 00:02:08.427 CC lib/nvme/nvme_fabric.o 00:02:08.427 CC lib/nvme/nvme_ns_cmd.o 00:02:08.427 CC lib/nvme/nvme_ns.o 00:02:08.427 CC lib/nvme/nvme_pcie_common.o 00:02:08.427 CC lib/nvme/nvme.o 00:02:08.427 CC lib/nvme/nvme_pcie.o 00:02:08.427 CC lib/nvme/nvme_qpair.o 00:02:08.427 CC lib/nvme/nvme_quirks.o 00:02:08.427 CC lib/nvme/nvme_transport.o 00:02:08.427 CC lib/nvme/nvme_discovery.o 00:02:08.427 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:08.427 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:08.427 CC lib/nvme/nvme_tcp.o 00:02:08.427 CC lib/nvme/nvme_opal.o 00:02:08.427 CC lib/nvme/nvme_io_msg.o 00:02:08.427 CC lib/nvme/nvme_poll_group.o 00:02:08.427 CC lib/nvme/nvme_zns.o 00:02:08.427 CC lib/nvme/nvme_stubs.o 00:02:08.427 CC 
lib/nvme/nvme_auth.o 00:02:08.427 CC lib/nvme/nvme_cuse.o 00:02:08.427 CC lib/nvme/nvme_vfio_user.o 00:02:08.427 CC lib/nvme/nvme_rdma.o 00:02:08.996 LIB libspdk_thread.a 00:02:08.996 SO libspdk_thread.so.10.2 00:02:08.996 SYMLINK libspdk_thread.so 00:02:09.256 CC lib/fsdev/fsdev.o 00:02:09.256 CC lib/fsdev/fsdev_io.o 00:02:09.256 CC lib/fsdev/fsdev_rpc.o 00:02:09.256 CC lib/accel/accel.o 00:02:09.256 CC lib/vfu_tgt/tgt_endpoint.o 00:02:09.256 CC lib/accel/accel_sw.o 00:02:09.256 CC lib/vfu_tgt/tgt_rpc.o 00:02:09.256 CC lib/accel/accel_rpc.o 00:02:09.256 CC lib/init/json_config.o 00:02:09.256 CC lib/init/subsystem_rpc.o 00:02:09.256 CC lib/init/rpc.o 00:02:09.256 CC lib/init/subsystem.o 00:02:09.516 CC lib/blob/blobstore.o 00:02:09.516 CC lib/virtio/virtio.o 00:02:09.516 CC lib/virtio/virtio_vhost_user.o 00:02:09.516 CC lib/blob/request.o 00:02:09.516 CC lib/virtio/virtio_vfio_user.o 00:02:09.516 CC lib/blob/zeroes.o 00:02:09.516 CC lib/virtio/virtio_pci.o 00:02:09.516 CC lib/blob/blob_bs_dev.o 00:02:09.776 LIB libspdk_init.a 00:02:09.776 SO libspdk_init.so.6.0 00:02:09.776 LIB libspdk_virtio.a 00:02:09.776 LIB libspdk_vfu_tgt.a 00:02:09.776 SYMLINK libspdk_init.so 00:02:09.776 SO libspdk_virtio.so.7.0 00:02:09.776 SO libspdk_vfu_tgt.so.3.0 00:02:09.776 SYMLINK libspdk_virtio.so 00:02:09.776 SYMLINK libspdk_vfu_tgt.so 00:02:10.036 LIB libspdk_fsdev.a 00:02:10.036 SO libspdk_fsdev.so.1.0 00:02:10.036 CC lib/event/app.o 00:02:10.036 CC lib/event/reactor.o 00:02:10.036 CC lib/event/log_rpc.o 00:02:10.036 CC lib/event/app_rpc.o 00:02:10.036 CC lib/event/scheduler_static.o 00:02:10.036 SYMLINK libspdk_fsdev.so 00:02:10.297 LIB libspdk_nvme.a 00:02:10.297 LIB libspdk_accel.a 00:02:10.297 SO libspdk_accel.so.16.0 00:02:10.559 SO libspdk_nvme.so.14.0 00:02:10.559 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:10.559 SYMLINK libspdk_accel.so 00:02:10.559 LIB libspdk_event.a 00:02:10.559 SO libspdk_event.so.14.0 00:02:10.559 SYMLINK libspdk_event.so 00:02:10.820 SYMLINK 
libspdk_nvme.so 00:02:10.820 CC lib/bdev/bdev.o 00:02:10.820 CC lib/bdev/bdev_rpc.o 00:02:10.820 CC lib/bdev/bdev_zone.o 00:02:10.820 CC lib/bdev/part.o 00:02:10.820 CC lib/bdev/scsi_nvme.o 00:02:11.082 LIB libspdk_fuse_dispatcher.a 00:02:11.082 SO libspdk_fuse_dispatcher.so.1.0 00:02:11.082 SYMLINK libspdk_fuse_dispatcher.so 00:02:12.024 LIB libspdk_blob.a 00:02:12.024 SO libspdk_blob.so.11.0 00:02:12.286 SYMLINK libspdk_blob.so 00:02:12.547 CC lib/blobfs/blobfs.o 00:02:12.547 CC lib/lvol/lvol.o 00:02:12.547 CC lib/blobfs/tree.o 00:02:13.119 LIB libspdk_bdev.a 00:02:13.119 SO libspdk_bdev.so.17.0 00:02:13.380 LIB libspdk_blobfs.a 00:02:13.380 SYMLINK libspdk_bdev.so 00:02:13.380 SO libspdk_blobfs.so.10.0 00:02:13.380 LIB libspdk_lvol.a 00:02:13.380 SO libspdk_lvol.so.10.0 00:02:13.380 SYMLINK libspdk_blobfs.so 00:02:13.380 SYMLINK libspdk_lvol.so 00:02:13.639 CC lib/ftl/ftl_core.o 00:02:13.639 CC lib/ftl/ftl_init.o 00:02:13.639 CC lib/ftl/ftl_layout.o 00:02:13.639 CC lib/ftl/ftl_debug.o 00:02:13.639 CC lib/ftl/ftl_io.o 00:02:13.639 CC lib/ftl/ftl_l2p.o 00:02:13.639 CC lib/ftl/ftl_sb.o 00:02:13.639 CC lib/ftl/ftl_l2p_flat.o 00:02:13.639 CC lib/nvmf/ctrlr.o 00:02:13.639 CC lib/ftl/ftl_nv_cache.o 00:02:13.639 CC lib/ftl/ftl_band_ops.o 00:02:13.639 CC lib/nvmf/ctrlr_discovery.o 00:02:13.639 CC lib/ftl/ftl_band.o 00:02:13.639 CC lib/nvmf/ctrlr_bdev.o 00:02:13.639 CC lib/nvmf/subsystem.o 00:02:13.639 CC lib/ublk/ublk.o 00:02:13.639 CC lib/ftl/ftl_writer.o 00:02:13.639 CC lib/nvmf/nvmf.o 00:02:13.639 CC lib/ublk/ublk_rpc.o 00:02:13.639 CC lib/ftl/ftl_rq.o 00:02:13.639 CC lib/scsi/dev.o 00:02:13.639 CC lib/nvmf/nvmf_rpc.o 00:02:13.639 CC lib/ftl/ftl_reloc.o 00:02:13.639 CC lib/nvmf/transport.o 00:02:13.639 CC lib/scsi/lun.o 00:02:13.639 CC lib/ftl/ftl_l2p_cache.o 00:02:13.639 CC lib/nbd/nbd.o 00:02:13.639 CC lib/nvmf/tcp.o 00:02:13.639 CC lib/scsi/port.o 00:02:13.639 CC lib/scsi/scsi.o 00:02:13.639 CC lib/ftl/ftl_p2l.o 00:02:13.639 CC lib/nbd/nbd_rpc.o 00:02:13.639 CC 
lib/nvmf/stubs.o 00:02:13.639 CC lib/ftl/ftl_p2l_log.o 00:02:13.639 CC lib/scsi/scsi_bdev.o 00:02:13.639 CC lib/nvmf/mdns_server.o 00:02:13.639 CC lib/scsi/scsi_pr.o 00:02:13.639 CC lib/ftl/mngt/ftl_mngt.o 00:02:13.639 CC lib/nvmf/vfio_user.o 00:02:13.639 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:13.639 CC lib/nvmf/rdma.o 00:02:13.639 CC lib/scsi/scsi_rpc.o 00:02:13.639 CC lib/nvmf/auth.o 00:02:13.639 CC lib/scsi/task.o 00:02:13.639 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:13.639 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:13.639 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:13.639 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:13.639 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:13.639 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:13.639 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:13.639 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:13.639 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:13.639 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:13.639 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:13.639 CC lib/ftl/utils/ftl_conf.o 00:02:13.639 CC lib/ftl/utils/ftl_md.o 00:02:13.639 CC lib/ftl/utils/ftl_mempool.o 00:02:13.639 CC lib/ftl/utils/ftl_bitmap.o 00:02:13.639 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:13.639 CC lib/ftl/utils/ftl_property.o 00:02:13.639 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:13.639 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:13.639 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:13.639 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:13.639 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:13.639 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:13.639 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:13.639 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:13.639 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:13.639 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:13.639 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:13.639 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:13.639 CC lib/ftl/base/ftl_base_bdev.o 00:02:13.639 CC lib/ftl/base/ftl_base_dev.o 00:02:13.639 CC lib/ftl/ftl_trace.o 00:02:14.209 LIB libspdk_nbd.a 00:02:14.209 SO libspdk_nbd.so.7.0 00:02:14.209 LIB 
libspdk_scsi.a 00:02:14.209 SO libspdk_scsi.so.9.0 00:02:14.209 SYMLINK libspdk_nbd.so 00:02:14.470 LIB libspdk_ublk.a 00:02:14.470 SYMLINK libspdk_scsi.so 00:02:14.470 SO libspdk_ublk.so.3.0 00:02:14.470 SYMLINK libspdk_ublk.so 00:02:14.470 LIB libspdk_ftl.a 00:02:14.731 CC lib/iscsi/conn.o 00:02:14.731 CC lib/iscsi/init_grp.o 00:02:14.731 CC lib/iscsi/iscsi.o 00:02:14.731 CC lib/iscsi/param.o 00:02:14.731 CC lib/iscsi/portal_grp.o 00:02:14.731 CC lib/vhost/vhost.o 00:02:14.731 CC lib/iscsi/tgt_node.o 00:02:14.731 CC lib/vhost/vhost_rpc.o 00:02:14.731 CC lib/iscsi/iscsi_subsystem.o 00:02:14.731 CC lib/vhost/vhost_scsi.o 00:02:14.731 CC lib/iscsi/iscsi_rpc.o 00:02:14.731 CC lib/vhost/vhost_blk.o 00:02:14.731 CC lib/iscsi/task.o 00:02:14.731 CC lib/vhost/rte_vhost_user.o 00:02:14.731 SO libspdk_ftl.so.9.0 00:02:14.991 SYMLINK libspdk_ftl.so 00:02:15.562 LIB libspdk_nvmf.a 00:02:15.562 SO libspdk_nvmf.so.19.0 00:02:15.562 LIB libspdk_vhost.a 00:02:15.824 SO libspdk_vhost.so.8.0 00:02:15.824 SYMLINK libspdk_nvmf.so 00:02:15.824 SYMLINK libspdk_vhost.so 00:02:15.824 LIB libspdk_iscsi.a 00:02:15.824 SO libspdk_iscsi.so.8.0 00:02:16.084 SYMLINK libspdk_iscsi.so 00:02:16.656 CC module/vfu_device/vfu_virtio.o 00:02:16.656 CC module/vfu_device/vfu_virtio_blk.o 00:02:16.656 CC module/env_dpdk/env_dpdk_rpc.o 00:02:16.656 CC module/vfu_device/vfu_virtio_scsi.o 00:02:16.656 CC module/vfu_device/vfu_virtio_rpc.o 00:02:16.656 CC module/vfu_device/vfu_virtio_fs.o 00:02:16.656 CC module/fsdev/aio/fsdev_aio.o 00:02:16.656 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:16.656 CC module/fsdev/aio/linux_aio_mgr.o 00:02:16.916 CC module/keyring/linux/keyring.o 00:02:16.916 CC module/keyring/linux/keyring_rpc.o 00:02:16.916 CC module/accel/ioat/accel_ioat_rpc.o 00:02:16.916 CC module/accel/ioat/accel_ioat.o 00:02:16.916 CC module/blob/bdev/blob_bdev.o 00:02:16.916 CC module/accel/iaa/accel_iaa.o 00:02:16.916 CC module/accel/dsa/accel_dsa.o 00:02:16.916 CC module/accel/iaa/accel_iaa_rpc.o 
00:02:16.916 CC module/accel/dsa/accel_dsa_rpc.o 00:02:16.916 CC module/accel/error/accel_error.o 00:02:16.916 CC module/accel/error/accel_error_rpc.o 00:02:16.916 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:16.916 CC module/sock/posix/posix.o 00:02:16.916 LIB libspdk_env_dpdk_rpc.a 00:02:16.916 CC module/keyring/file/keyring.o 00:02:16.916 CC module/scheduler/gscheduler/gscheduler.o 00:02:16.916 CC module/keyring/file/keyring_rpc.o 00:02:16.916 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:16.916 SO libspdk_env_dpdk_rpc.so.6.0 00:02:16.916 SYMLINK libspdk_env_dpdk_rpc.so 00:02:16.916 LIB libspdk_keyring_linux.a 00:02:16.916 LIB libspdk_scheduler_gscheduler.a 00:02:16.916 LIB libspdk_accel_error.a 00:02:16.916 SO libspdk_keyring_linux.so.1.0 00:02:16.916 LIB libspdk_keyring_file.a 00:02:16.916 LIB libspdk_scheduler_dpdk_governor.a 00:02:16.916 LIB libspdk_accel_iaa.a 00:02:16.916 LIB libspdk_accel_ioat.a 00:02:16.916 SO libspdk_accel_error.so.2.0 00:02:16.916 SO libspdk_scheduler_gscheduler.so.4.0 00:02:16.916 SO libspdk_keyring_file.so.2.0 00:02:17.175 LIB libspdk_scheduler_dynamic.a 00:02:17.175 SYMLINK libspdk_keyring_linux.so 00:02:17.175 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:17.175 SO libspdk_accel_iaa.so.3.0 00:02:17.175 SO libspdk_accel_ioat.so.6.0 00:02:17.175 LIB libspdk_blob_bdev.a 00:02:17.175 SO libspdk_scheduler_dynamic.so.4.0 00:02:17.175 SYMLINK libspdk_scheduler_gscheduler.so 00:02:17.175 LIB libspdk_accel_dsa.a 00:02:17.175 SYMLINK libspdk_accel_error.so 00:02:17.175 SYMLINK libspdk_keyring_file.so 00:02:17.175 SO libspdk_blob_bdev.so.11.0 00:02:17.175 SYMLINK libspdk_accel_iaa.so 00:02:17.175 SYMLINK libspdk_accel_ioat.so 00:02:17.175 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:17.175 SO libspdk_accel_dsa.so.5.0 00:02:17.175 SYMLINK libspdk_scheduler_dynamic.so 00:02:17.175 SYMLINK libspdk_blob_bdev.so 00:02:17.175 LIB libspdk_vfu_device.a 00:02:17.175 SYMLINK libspdk_accel_dsa.so 00:02:17.175 SO 
libspdk_vfu_device.so.3.0 00:02:17.436 SYMLINK libspdk_vfu_device.so 00:02:17.436 LIB libspdk_fsdev_aio.a 00:02:17.436 SO libspdk_fsdev_aio.so.1.0 00:02:17.436 LIB libspdk_sock_posix.a 00:02:17.436 SYMLINK libspdk_fsdev_aio.so 00:02:17.436 SO libspdk_sock_posix.so.6.0 00:02:17.695 SYMLINK libspdk_sock_posix.so 00:02:17.695 CC module/bdev/error/vbdev_error.o 00:02:17.695 CC module/bdev/malloc/bdev_malloc.o 00:02:17.695 CC module/bdev/error/vbdev_error_rpc.o 00:02:17.695 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:17.695 CC module/bdev/lvol/vbdev_lvol.o 00:02:17.695 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:17.695 CC module/bdev/delay/vbdev_delay.o 00:02:17.695 CC module/blobfs/bdev/blobfs_bdev.o 00:02:17.695 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:17.695 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:17.695 CC module/bdev/passthru/vbdev_passthru.o 00:02:17.695 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:17.695 CC module/bdev/raid/bdev_raid.o 00:02:17.695 CC module/bdev/raid/bdev_raid_rpc.o 00:02:17.695 CC module/bdev/raid/bdev_raid_sb.o 00:02:17.695 CC module/bdev/raid/raid0.o 00:02:17.695 CC module/bdev/aio/bdev_aio.o 00:02:17.695 CC module/bdev/raid/raid1.o 00:02:17.695 CC module/bdev/ftl/bdev_ftl.o 00:02:17.695 CC module/bdev/raid/concat.o 00:02:17.695 CC module/bdev/aio/bdev_aio_rpc.o 00:02:17.695 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:17.695 CC module/bdev/gpt/vbdev_gpt.o 00:02:17.695 CC module/bdev/iscsi/bdev_iscsi.o 00:02:17.695 CC module/bdev/gpt/gpt.o 00:02:17.695 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:17.695 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:17.695 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:17.695 CC module/bdev/null/bdev_null.o 00:02:17.695 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:17.695 CC module/bdev/null/bdev_null_rpc.o 00:02:17.695 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:17.695 CC module/bdev/nvme/bdev_nvme.o 00:02:17.695 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:17.695 CC 
module/bdev/virtio/bdev_virtio_rpc.o 00:02:17.695 CC module/bdev/split/vbdev_split.o 00:02:17.695 CC module/bdev/nvme/nvme_rpc.o 00:02:17.695 CC module/bdev/nvme/bdev_mdns_client.o 00:02:17.695 CC module/bdev/split/vbdev_split_rpc.o 00:02:17.695 CC module/bdev/nvme/vbdev_opal.o 00:02:17.695 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:17.695 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:17.955 LIB libspdk_bdev_error.a 00:02:17.955 LIB libspdk_blobfs_bdev.a 00:02:17.955 SO libspdk_bdev_error.so.6.0 00:02:17.955 SO libspdk_blobfs_bdev.so.6.0 00:02:17.955 LIB libspdk_bdev_split.a 00:02:17.955 LIB libspdk_bdev_passthru.a 00:02:17.955 LIB libspdk_bdev_null.a 00:02:17.955 SYMLINK libspdk_bdev_error.so 00:02:17.955 LIB libspdk_bdev_gpt.a 00:02:17.955 SO libspdk_bdev_split.so.6.0 00:02:17.955 LIB libspdk_bdev_ftl.a 00:02:17.955 SYMLINK libspdk_blobfs_bdev.so 00:02:17.955 SO libspdk_bdev_gpt.so.6.0 00:02:17.955 SO libspdk_bdev_passthru.so.6.0 00:02:17.955 SO libspdk_bdev_null.so.6.0 00:02:17.955 SO libspdk_bdev_ftl.so.6.0 00:02:17.955 SYMLINK libspdk_bdev_split.so 00:02:18.216 LIB libspdk_bdev_malloc.a 00:02:18.216 LIB libspdk_bdev_delay.a 00:02:18.216 LIB libspdk_bdev_iscsi.a 00:02:18.216 LIB libspdk_bdev_zone_block.a 00:02:18.216 LIB libspdk_bdev_aio.a 00:02:18.216 SO libspdk_bdev_malloc.so.6.0 00:02:18.216 SYMLINK libspdk_bdev_gpt.so 00:02:18.216 SYMLINK libspdk_bdev_null.so 00:02:18.216 SYMLINK libspdk_bdev_passthru.so 00:02:18.216 SO libspdk_bdev_iscsi.so.6.0 00:02:18.216 SO libspdk_bdev_zone_block.so.6.0 00:02:18.216 SO libspdk_bdev_delay.so.6.0 00:02:18.216 SO libspdk_bdev_aio.so.6.0 00:02:18.216 SYMLINK libspdk_bdev_ftl.so 00:02:18.216 LIB libspdk_bdev_lvol.a 00:02:18.216 SYMLINK libspdk_bdev_zone_block.so 00:02:18.216 SYMLINK libspdk_bdev_malloc.so 00:02:18.216 SYMLINK libspdk_bdev_iscsi.so 00:02:18.216 SYMLINK libspdk_bdev_aio.so 00:02:18.216 SYMLINK libspdk_bdev_delay.so 00:02:18.216 SO libspdk_bdev_lvol.so.6.0 00:02:18.216 LIB libspdk_bdev_virtio.a 00:02:18.216 
SO libspdk_bdev_virtio.so.6.0 00:02:18.216 SYMLINK libspdk_bdev_lvol.so 00:02:18.476 SYMLINK libspdk_bdev_virtio.so 00:02:18.737 LIB libspdk_bdev_raid.a 00:02:18.737 SO libspdk_bdev_raid.so.6.0 00:02:18.737 SYMLINK libspdk_bdev_raid.so 00:02:19.679 LIB libspdk_bdev_nvme.a 00:02:19.679 SO libspdk_bdev_nvme.so.7.0 00:02:19.940 SYMLINK libspdk_bdev_nvme.so 00:02:20.511 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:20.511 CC module/event/subsystems/vmd/vmd.o 00:02:20.511 CC module/event/subsystems/iobuf/iobuf.o 00:02:20.511 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:20.511 CC module/event/subsystems/fsdev/fsdev.o 00:02:20.511 CC module/event/subsystems/scheduler/scheduler.o 00:02:20.511 CC module/event/subsystems/keyring/keyring.o 00:02:20.511 CC module/event/subsystems/sock/sock.o 00:02:20.511 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:20.511 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:20.772 LIB libspdk_event_scheduler.a 00:02:20.772 LIB libspdk_event_vfu_tgt.a 00:02:20.772 LIB libspdk_event_sock.a 00:02:20.772 LIB libspdk_event_fsdev.a 00:02:20.772 LIB libspdk_event_vmd.a 00:02:20.772 LIB libspdk_event_keyring.a 00:02:20.772 LIB libspdk_event_vhost_blk.a 00:02:20.772 LIB libspdk_event_iobuf.a 00:02:20.772 SO libspdk_event_vfu_tgt.so.3.0 00:02:20.772 SO libspdk_event_sock.so.5.0 00:02:20.772 SO libspdk_event_fsdev.so.1.0 00:02:20.772 SO libspdk_event_scheduler.so.4.0 00:02:20.772 SO libspdk_event_vmd.so.6.0 00:02:20.772 SO libspdk_event_keyring.so.1.0 00:02:20.772 SO libspdk_event_iobuf.so.3.0 00:02:20.772 SO libspdk_event_vhost_blk.so.3.0 00:02:20.772 SYMLINK libspdk_event_sock.so 00:02:20.772 SYMLINK libspdk_event_scheduler.so 00:02:20.772 SYMLINK libspdk_event_vfu_tgt.so 00:02:20.772 SYMLINK libspdk_event_fsdev.so 00:02:20.772 SYMLINK libspdk_event_vhost_blk.so 00:02:20.772 SYMLINK libspdk_event_vmd.so 00:02:20.772 SYMLINK libspdk_event_keyring.so 00:02:20.772 SYMLINK libspdk_event_iobuf.so 00:02:21.343 CC 
module/event/subsystems/accel/accel.o 00:02:21.343 LIB libspdk_event_accel.a 00:02:21.343 SO libspdk_event_accel.so.6.0 00:02:21.343 SYMLINK libspdk_event_accel.so 00:02:21.912 CC module/event/subsystems/bdev/bdev.o 00:02:21.912 LIB libspdk_event_bdev.a 00:02:21.912 SO libspdk_event_bdev.so.6.0 00:02:22.173 SYMLINK libspdk_event_bdev.so 00:02:22.433 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:22.433 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:22.433 CC module/event/subsystems/nbd/nbd.o 00:02:22.433 CC module/event/subsystems/ublk/ublk.o 00:02:22.433 CC module/event/subsystems/scsi/scsi.o 00:02:22.694 LIB libspdk_event_nbd.a 00:02:22.694 LIB libspdk_event_ublk.a 00:02:22.694 LIB libspdk_event_scsi.a 00:02:22.694 SO libspdk_event_nbd.so.6.0 00:02:22.694 SO libspdk_event_ublk.so.3.0 00:02:22.694 SO libspdk_event_scsi.so.6.0 00:02:22.694 LIB libspdk_event_nvmf.a 00:02:22.694 SO libspdk_event_nvmf.so.6.0 00:02:22.694 SYMLINK libspdk_event_nbd.so 00:02:22.694 SYMLINK libspdk_event_ublk.so 00:02:22.694 SYMLINK libspdk_event_scsi.so 00:02:22.694 SYMLINK libspdk_event_nvmf.so 00:02:22.955 CC module/event/subsystems/iscsi/iscsi.o 00:02:22.955 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:23.216 LIB libspdk_event_iscsi.a 00:02:23.216 LIB libspdk_event_vhost_scsi.a 00:02:23.216 SO libspdk_event_iscsi.so.6.0 00:02:23.216 SO libspdk_event_vhost_scsi.so.3.0 00:02:23.476 SYMLINK libspdk_event_iscsi.so 00:02:23.476 SYMLINK libspdk_event_vhost_scsi.so 00:02:23.476 SO libspdk.so.6.0 00:02:23.476 SYMLINK libspdk.so 00:02:24.050 TEST_HEADER include/spdk/accel_module.h 00:02:24.050 TEST_HEADER include/spdk/accel.h 00:02:24.050 CC app/spdk_top/spdk_top.o 00:02:24.050 TEST_HEADER include/spdk/barrier.h 00:02:24.050 TEST_HEADER include/spdk/assert.h 00:02:24.050 TEST_HEADER include/spdk/base64.h 00:02:24.050 TEST_HEADER include/spdk/bdev.h 00:02:24.050 CC app/spdk_nvme_identify/identify.o 00:02:24.050 CC app/spdk_lspci/spdk_lspci.o 00:02:24.050 CXX app/trace/trace.o 
00:02:24.050 TEST_HEADER include/spdk/bdev_module.h 00:02:24.050 TEST_HEADER include/spdk/bdev_zone.h 00:02:24.050 TEST_HEADER include/spdk/bit_array.h 00:02:24.050 CC app/trace_record/trace_record.o 00:02:24.050 TEST_HEADER include/spdk/bit_pool.h 00:02:24.050 CC test/rpc_client/rpc_client_test.o 00:02:24.050 TEST_HEADER include/spdk/blob_bdev.h 00:02:24.050 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:24.050 CC app/spdk_nvme_discover/discovery_aer.o 00:02:24.050 TEST_HEADER include/spdk/blobfs.h 00:02:24.050 TEST_HEADER include/spdk/blob.h 00:02:24.050 CC app/spdk_nvme_perf/perf.o 00:02:24.050 TEST_HEADER include/spdk/conf.h 00:02:24.050 TEST_HEADER include/spdk/config.h 00:02:24.050 TEST_HEADER include/spdk/cpuset.h 00:02:24.050 TEST_HEADER include/spdk/crc16.h 00:02:24.050 TEST_HEADER include/spdk/crc64.h 00:02:24.050 TEST_HEADER include/spdk/crc32.h 00:02:24.050 TEST_HEADER include/spdk/dif.h 00:02:24.051 TEST_HEADER include/spdk/env_dpdk.h 00:02:24.051 TEST_HEADER include/spdk/dma.h 00:02:24.051 TEST_HEADER include/spdk/endian.h 00:02:24.051 TEST_HEADER include/spdk/env.h 00:02:24.051 TEST_HEADER include/spdk/fd_group.h 00:02:24.051 TEST_HEADER include/spdk/event.h 00:02:24.051 TEST_HEADER include/spdk/file.h 00:02:24.051 TEST_HEADER include/spdk/fd.h 00:02:24.051 TEST_HEADER include/spdk/fsdev.h 00:02:24.051 TEST_HEADER include/spdk/fsdev_module.h 00:02:24.051 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:24.051 TEST_HEADER include/spdk/ftl.h 00:02:24.051 TEST_HEADER include/spdk/hexlify.h 00:02:24.051 TEST_HEADER include/spdk/gpt_spec.h 00:02:24.051 TEST_HEADER include/spdk/histogram_data.h 00:02:24.051 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:24.051 TEST_HEADER include/spdk/idxd_spec.h 00:02:24.051 TEST_HEADER include/spdk/idxd.h 00:02:24.051 TEST_HEADER include/spdk/ioat.h 00:02:24.051 TEST_HEADER include/spdk/init.h 00:02:24.051 TEST_HEADER include/spdk/ioat_spec.h 00:02:24.051 TEST_HEADER include/spdk/iscsi_spec.h 00:02:24.051 TEST_HEADER 
include/spdk/jsonrpc.h 00:02:24.051 TEST_HEADER include/spdk/json.h 00:02:24.051 TEST_HEADER include/spdk/keyring.h 00:02:24.051 TEST_HEADER include/spdk/keyring_module.h 00:02:24.051 CC app/iscsi_tgt/iscsi_tgt.o 00:02:24.051 TEST_HEADER include/spdk/likely.h 00:02:24.051 TEST_HEADER include/spdk/log.h 00:02:24.051 TEST_HEADER include/spdk/lvol.h 00:02:24.051 TEST_HEADER include/spdk/md5.h 00:02:24.051 TEST_HEADER include/spdk/memory.h 00:02:24.051 CC app/nvmf_tgt/nvmf_main.o 00:02:24.051 TEST_HEADER include/spdk/mmio.h 00:02:24.051 TEST_HEADER include/spdk/notify.h 00:02:24.051 TEST_HEADER include/spdk/net.h 00:02:24.051 TEST_HEADER include/spdk/nbd.h 00:02:24.051 TEST_HEADER include/spdk/nvme.h 00:02:24.051 CC app/spdk_dd/spdk_dd.o 00:02:24.051 TEST_HEADER include/spdk/nvme_intel.h 00:02:24.051 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:24.051 TEST_HEADER include/spdk/nvme_spec.h 00:02:24.051 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:24.051 TEST_HEADER include/spdk/nvme_zns.h 00:02:24.051 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:24.051 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:24.051 TEST_HEADER include/spdk/nvmf_spec.h 00:02:24.051 TEST_HEADER include/spdk/nvmf.h 00:02:24.051 CC app/spdk_tgt/spdk_tgt.o 00:02:24.051 TEST_HEADER include/spdk/nvmf_transport.h 00:02:24.051 TEST_HEADER include/spdk/opal.h 00:02:24.051 TEST_HEADER include/spdk/opal_spec.h 00:02:24.051 TEST_HEADER include/spdk/pci_ids.h 00:02:24.051 TEST_HEADER include/spdk/pipe.h 00:02:24.051 TEST_HEADER include/spdk/reduce.h 00:02:24.051 TEST_HEADER include/spdk/queue.h 00:02:24.051 TEST_HEADER include/spdk/rpc.h 00:02:24.051 TEST_HEADER include/spdk/scsi.h 00:02:24.051 TEST_HEADER include/spdk/scheduler.h 00:02:24.051 TEST_HEADER include/spdk/scsi_spec.h 00:02:24.051 TEST_HEADER include/spdk/sock.h 00:02:24.051 TEST_HEADER include/spdk/stdinc.h 00:02:24.051 TEST_HEADER include/spdk/string.h 00:02:24.051 TEST_HEADER include/spdk/thread.h 00:02:24.051 TEST_HEADER include/spdk/trace.h 
00:02:24.051 TEST_HEADER include/spdk/tree.h 00:02:24.051 TEST_HEADER include/spdk/trace_parser.h 00:02:24.051 TEST_HEADER include/spdk/ublk.h 00:02:24.051 TEST_HEADER include/spdk/util.h 00:02:24.051 TEST_HEADER include/spdk/uuid.h 00:02:24.051 TEST_HEADER include/spdk/version.h 00:02:24.051 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:24.051 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:24.051 TEST_HEADER include/spdk/vhost.h 00:02:24.051 TEST_HEADER include/spdk/vmd.h 00:02:24.051 TEST_HEADER include/spdk/xor.h 00:02:24.051 TEST_HEADER include/spdk/zipf.h 00:02:24.051 CXX test/cpp_headers/accel.o 00:02:24.051 CXX test/cpp_headers/accel_module.o 00:02:24.051 CXX test/cpp_headers/assert.o 00:02:24.051 CXX test/cpp_headers/base64.o 00:02:24.051 CXX test/cpp_headers/barrier.o 00:02:24.051 CXX test/cpp_headers/bdev.o 00:02:24.051 CXX test/cpp_headers/bdev_module.o 00:02:24.051 CXX test/cpp_headers/bit_array.o 00:02:24.051 CXX test/cpp_headers/bdev_zone.o 00:02:24.051 CXX test/cpp_headers/blobfs_bdev.o 00:02:24.051 CXX test/cpp_headers/bit_pool.o 00:02:24.051 CXX test/cpp_headers/blob_bdev.o 00:02:24.051 CXX test/cpp_headers/blobfs.o 00:02:24.051 CXX test/cpp_headers/blob.o 00:02:24.051 CXX test/cpp_headers/conf.o 00:02:24.051 CXX test/cpp_headers/config.o 00:02:24.051 CXX test/cpp_headers/crc16.o 00:02:24.051 CXX test/cpp_headers/cpuset.o 00:02:24.051 CXX test/cpp_headers/crc32.o 00:02:24.051 CXX test/cpp_headers/crc64.o 00:02:24.051 CXX test/cpp_headers/dif.o 00:02:24.051 CXX test/cpp_headers/dma.o 00:02:24.051 CXX test/cpp_headers/endian.o 00:02:24.051 CXX test/cpp_headers/env_dpdk.o 00:02:24.051 CXX test/cpp_headers/env.o 00:02:24.051 CXX test/cpp_headers/event.o 00:02:24.051 CXX test/cpp_headers/fd_group.o 00:02:24.051 CXX test/cpp_headers/fd.o 00:02:24.051 CXX test/cpp_headers/fsdev.o 00:02:24.051 CXX test/cpp_headers/file.o 00:02:24.051 CXX test/cpp_headers/fuse_dispatcher.o 00:02:24.051 CXX test/cpp_headers/fsdev_module.o 00:02:24.051 CXX 
test/cpp_headers/ftl.o 00:02:24.051 CXX test/cpp_headers/gpt_spec.o 00:02:24.051 CXX test/cpp_headers/hexlify.o 00:02:24.051 CXX test/cpp_headers/histogram_data.o 00:02:24.051 CXX test/cpp_headers/idxd.o 00:02:24.051 CXX test/cpp_headers/idxd_spec.o 00:02:24.051 CXX test/cpp_headers/ioat.o 00:02:24.051 CXX test/cpp_headers/init.o 00:02:24.051 CXX test/cpp_headers/iscsi_spec.o 00:02:24.051 CXX test/cpp_headers/ioat_spec.o 00:02:24.051 CXX test/cpp_headers/jsonrpc.o 00:02:24.051 CXX test/cpp_headers/keyring.o 00:02:24.051 CXX test/cpp_headers/keyring_module.o 00:02:24.051 CXX test/cpp_headers/log.o 00:02:24.051 CXX test/cpp_headers/json.o 00:02:24.051 CXX test/cpp_headers/lvol.o 00:02:24.051 CXX test/cpp_headers/likely.o 00:02:24.051 CXX test/cpp_headers/memory.o 00:02:24.051 CXX test/cpp_headers/nbd.o 00:02:24.051 CXX test/cpp_headers/md5.o 00:02:24.051 CXX test/cpp_headers/mmio.o 00:02:24.313 CXX test/cpp_headers/net.o 00:02:24.313 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:24.313 CC examples/ioat/perf/perf.o 00:02:24.313 CXX test/cpp_headers/notify.o 00:02:24.313 CC examples/util/zipf/zipf.o 00:02:24.313 CXX test/cpp_headers/nvme.o 00:02:24.313 CXX test/cpp_headers/nvme_intel.o 00:02:24.313 CXX test/cpp_headers/nvme_ocssd.o 00:02:24.313 CXX test/cpp_headers/nvme_zns.o 00:02:24.313 CXX test/cpp_headers/nvme_spec.o 00:02:24.313 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:24.313 CXX test/cpp_headers/nvmf_cmd.o 00:02:24.313 CXX test/cpp_headers/nvmf.o 00:02:24.313 CXX test/cpp_headers/nvmf_spec.o 00:02:24.313 CXX test/cpp_headers/pci_ids.o 00:02:24.314 CXX test/cpp_headers/opal_spec.o 00:02:24.314 CXX test/cpp_headers/nvmf_transport.o 00:02:24.314 CXX test/cpp_headers/opal.o 00:02:24.314 CC test/app/stub/stub.o 00:02:24.314 CXX test/cpp_headers/pipe.o 00:02:24.314 CC examples/ioat/verify/verify.o 00:02:24.314 CXX test/cpp_headers/reduce.o 00:02:24.314 CXX test/cpp_headers/queue.o 00:02:24.314 CXX test/cpp_headers/rpc.o 00:02:24.314 CXX test/cpp_headers/scheduler.o 
00:02:24.314 CXX test/cpp_headers/scsi.o 00:02:24.314 CC test/app/histogram_perf/histogram_perf.o 00:02:24.314 CXX test/cpp_headers/scsi_spec.o 00:02:24.314 CXX test/cpp_headers/string.o 00:02:24.314 CXX test/cpp_headers/stdinc.o 00:02:24.314 CXX test/cpp_headers/sock.o 00:02:24.314 CC test/thread/poller_perf/poller_perf.o 00:02:24.314 CC test/env/memory/memory_ut.o 00:02:24.314 CXX test/cpp_headers/trace_parser.o 00:02:24.314 CXX test/cpp_headers/trace.o 00:02:24.314 CXX test/cpp_headers/thread.o 00:02:24.314 CXX test/cpp_headers/ublk.o 00:02:24.314 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:24.314 CXX test/cpp_headers/vfio_user_pci.o 00:02:24.314 CXX test/cpp_headers/tree.o 00:02:24.314 CXX test/cpp_headers/version.o 00:02:24.314 CXX test/cpp_headers/util.o 00:02:24.314 CXX test/cpp_headers/vhost.o 00:02:24.314 CXX test/cpp_headers/uuid.o 00:02:24.314 CC test/env/pci/pci_ut.o 00:02:24.314 LINK spdk_lspci 00:02:24.314 CXX test/cpp_headers/vfio_user_spec.o 00:02:24.314 CXX test/cpp_headers/vmd.o 00:02:24.314 CXX test/cpp_headers/zipf.o 00:02:24.314 CXX test/cpp_headers/xor.o 00:02:24.314 CC test/env/vtophys/vtophys.o 00:02:24.314 CC app/fio/nvme/fio_plugin.o 00:02:24.314 CC test/app/jsoncat/jsoncat.o 00:02:24.314 CC test/app/bdev_svc/bdev_svc.o 00:02:24.314 CC test/dma/test_dma/test_dma.o 00:02:24.314 CC app/fio/bdev/fio_plugin.o 00:02:24.314 LINK spdk_nvme_discover 00:02:24.314 LINK spdk_trace_record 00:02:24.314 LINK nvmf_tgt 00:02:24.573 LINK interrupt_tgt 00:02:24.573 LINK iscsi_tgt 00:02:24.573 LINK rpc_client_test 00:02:24.573 LINK spdk_tgt 00:02:24.573 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:24.573 CC test/env/mem_callbacks/mem_callbacks.o 00:02:24.573 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:24.573 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:24.573 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:24.832 LINK spdk_dd 00:02:24.832 LINK jsoncat 00:02:24.832 LINK stub 00:02:24.832 LINK spdk_trace 00:02:24.832 LINK bdev_svc 
00:02:24.832 LINK zipf 00:02:24.832 LINK vtophys 00:02:24.832 LINK histogram_perf 00:02:24.832 LINK poller_perf 00:02:25.091 LINK env_dpdk_post_init 00:02:25.091 LINK ioat_perf 00:02:25.091 LINK verify 00:02:25.091 LINK spdk_nvme 00:02:25.091 LINK vhost_fuzz 00:02:25.092 LINK nvme_fuzz 00:02:25.092 LINK pci_ut 00:02:25.353 LINK spdk_top 00:02:25.353 CC app/vhost/vhost.o 00:02:25.353 LINK spdk_nvme_perf 00:02:25.353 LINK spdk_nvme_identify 00:02:25.353 LINK test_dma 00:02:25.353 LINK spdk_bdev 00:02:25.353 CC examples/idxd/perf/perf.o 00:02:25.353 CC examples/vmd/led/led.o 00:02:25.353 CC examples/vmd/lsvmd/lsvmd.o 00:02:25.353 LINK mem_callbacks 00:02:25.353 CC examples/sock/hello_world/hello_sock.o 00:02:25.353 CC test/event/reactor/reactor.o 00:02:25.353 CC test/event/reactor_perf/reactor_perf.o 00:02:25.353 CC test/event/event_perf/event_perf.o 00:02:25.353 CC test/event/app_repeat/app_repeat.o 00:02:25.353 CC examples/thread/thread/thread_ex.o 00:02:25.353 CC test/event/scheduler/scheduler.o 00:02:25.353 LINK vhost 00:02:25.613 LINK lsvmd 00:02:25.613 LINK led 00:02:25.613 LINK reactor 00:02:25.613 LINK reactor_perf 00:02:25.613 LINK event_perf 00:02:25.613 LINK app_repeat 00:02:25.613 LINK hello_sock 00:02:25.613 LINK idxd_perf 00:02:25.613 LINK scheduler 00:02:25.613 LINK thread 00:02:25.873 LINK memory_ut 00:02:25.873 CC test/nvme/reserve/reserve.o 00:02:25.873 CC test/nvme/overhead/overhead.o 00:02:25.873 CC test/nvme/sgl/sgl.o 00:02:25.873 CC test/nvme/aer/aer.o 00:02:25.873 CC test/nvme/boot_partition/boot_partition.o 00:02:25.873 CC test/nvme/fused_ordering/fused_ordering.o 00:02:25.873 CC test/nvme/reset/reset.o 00:02:25.873 CC test/nvme/err_injection/err_injection.o 00:02:25.873 CC test/nvme/compliance/nvme_compliance.o 00:02:25.873 CC test/nvme/e2edp/nvme_dp.o 00:02:25.873 CC test/nvme/fdp/fdp.o 00:02:25.873 CC test/nvme/cuse/cuse.o 00:02:25.873 CC test/nvme/startup/startup.o 00:02:25.873 CC test/nvme/simple_copy/simple_copy.o 00:02:25.873 CC 
test/nvme/connect_stress/connect_stress.o 00:02:25.873 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:25.873 CC test/accel/dif/dif.o 00:02:25.873 CC test/blobfs/mkfs/mkfs.o 00:02:26.134 CC test/lvol/esnap/esnap.o 00:02:26.134 LINK boot_partition 00:02:26.134 LINK startup 00:02:26.134 LINK err_injection 00:02:26.134 LINK doorbell_aers 00:02:26.134 LINK reserve 00:02:26.134 LINK connect_stress 00:02:26.134 CC examples/nvme/reconnect/reconnect.o 00:02:26.134 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:26.134 LINK fused_ordering 00:02:26.134 CC examples/nvme/arbitration/arbitration.o 00:02:26.134 CC examples/nvme/hello_world/hello_world.o 00:02:26.134 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:26.134 CC examples/nvme/abort/abort.o 00:02:26.134 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:26.134 CC examples/nvme/hotplug/hotplug.o 00:02:26.134 LINK aer 00:02:26.134 LINK mkfs 00:02:26.134 LINK reset 00:02:26.134 LINK sgl 00:02:26.134 LINK overhead 00:02:26.134 LINK simple_copy 00:02:26.134 LINK iscsi_fuzz 00:02:26.134 LINK nvme_dp 00:02:26.134 LINK nvme_compliance 00:02:26.395 LINK fdp 00:02:26.395 CC examples/accel/perf/accel_perf.o 00:02:26.395 CC examples/blob/cli/blobcli.o 00:02:26.395 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:26.395 CC examples/blob/hello_world/hello_blob.o 00:02:26.395 LINK cmb_copy 00:02:26.395 LINK pmr_persistence 00:02:26.395 LINK hotplug 00:02:26.395 LINK hello_world 00:02:26.395 LINK reconnect 00:02:26.656 LINK arbitration 00:02:26.656 LINK abort 00:02:26.656 LINK dif 00:02:26.656 LINK nvme_manage 00:02:26.656 LINK hello_fsdev 00:02:26.656 LINK hello_blob 00:02:26.918 LINK accel_perf 00:02:26.918 LINK blobcli 00:02:27.179 LINK cuse 00:02:27.179 CC test/bdev/bdevio/bdevio.o 00:02:27.439 CC examples/bdev/hello_world/hello_bdev.o 00:02:27.439 CC examples/bdev/bdevperf/bdevperf.o 00:02:27.439 LINK bdevio 00:02:27.700 LINK hello_bdev 00:02:28.272 LINK bdevperf 00:02:28.944 CC examples/nvmf/nvmf/nvmf.o 00:02:29.249 LINK 
nvmf 00:02:30.712 LINK esnap 00:02:30.712 00:02:30.712 real 0m54.026s 00:02:30.712 user 7m47.340s 00:02:30.712 sys 4m24.285s 00:02:30.712 14:16:11 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:30.712 14:16:11 make -- common/autotest_common.sh@10 -- $ set +x 00:02:30.712 ************************************ 00:02:30.712 END TEST make 00:02:30.712 ************************************ 00:02:30.712 14:16:11 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:30.712 14:16:11 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:30.712 14:16:11 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:30.712 14:16:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:30.712 14:16:11 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:30.712 14:16:11 -- pm/common@44 -- $ pid=3060450 00:02:30.712 14:16:11 -- pm/common@50 -- $ kill -TERM 3060450 00:02:30.712 14:16:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:30.712 14:16:11 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:30.712 14:16:11 -- pm/common@44 -- $ pid=3060451 00:02:30.712 14:16:11 -- pm/common@50 -- $ kill -TERM 3060451 00:02:30.712 14:16:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:30.712 14:16:11 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:30.712 14:16:11 -- pm/common@44 -- $ pid=3060453 00:02:30.712 14:16:11 -- pm/common@50 -- $ kill -TERM 3060453 00:02:30.712 14:16:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:30.712 14:16:11 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:30.712 14:16:11 -- pm/common@44 -- $ pid=3060477 00:02:30.712 14:16:11 -- pm/common@50 -- $ sudo -E kill -TERM 3060477 
00:02:30.974 14:16:11 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:02:30.974 14:16:11 -- common/autotest_common.sh@1691 -- # lcov --version 00:02:30.974 14:16:11 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:02:30.974 14:16:11 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:02:30.974 14:16:11 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:30.974 14:16:11 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:30.974 14:16:11 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:30.974 14:16:11 -- scripts/common.sh@336 -- # IFS=.-: 00:02:30.974 14:16:11 -- scripts/common.sh@336 -- # read -ra ver1 00:02:30.974 14:16:11 -- scripts/common.sh@337 -- # IFS=.-: 00:02:30.974 14:16:11 -- scripts/common.sh@337 -- # read -ra ver2 00:02:30.974 14:16:11 -- scripts/common.sh@338 -- # local 'op=<' 00:02:30.974 14:16:11 -- scripts/common.sh@340 -- # ver1_l=2 00:02:30.974 14:16:11 -- scripts/common.sh@341 -- # ver2_l=1 00:02:30.974 14:16:11 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:30.974 14:16:11 -- scripts/common.sh@344 -- # case "$op" in 00:02:30.974 14:16:11 -- scripts/common.sh@345 -- # : 1 00:02:30.974 14:16:11 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:30.974 14:16:11 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:30.974 14:16:11 -- scripts/common.sh@365 -- # decimal 1 00:02:30.974 14:16:11 -- scripts/common.sh@353 -- # local d=1 00:02:30.974 14:16:11 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:30.974 14:16:11 -- scripts/common.sh@355 -- # echo 1 00:02:30.974 14:16:11 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:30.974 14:16:11 -- scripts/common.sh@366 -- # decimal 2 00:02:30.974 14:16:11 -- scripts/common.sh@353 -- # local d=2 00:02:30.974 14:16:11 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:30.974 14:16:11 -- scripts/common.sh@355 -- # echo 2 00:02:30.974 14:16:11 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:30.974 14:16:11 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:30.974 14:16:11 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:30.974 14:16:11 -- scripts/common.sh@368 -- # return 0 00:02:30.974 14:16:11 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:30.974 14:16:11 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:02:30.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:30.974 --rc genhtml_branch_coverage=1 00:02:30.974 --rc genhtml_function_coverage=1 00:02:30.974 --rc genhtml_legend=1 00:02:30.974 --rc geninfo_all_blocks=1 00:02:30.974 --rc geninfo_unexecuted_blocks=1 00:02:30.974 00:02:30.974 ' 00:02:30.974 14:16:11 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:02:30.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:30.974 --rc genhtml_branch_coverage=1 00:02:30.974 --rc genhtml_function_coverage=1 00:02:30.974 --rc genhtml_legend=1 00:02:30.974 --rc geninfo_all_blocks=1 00:02:30.974 --rc geninfo_unexecuted_blocks=1 00:02:30.974 00:02:30.974 ' 00:02:30.974 14:16:11 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:02:30.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:30.974 --rc genhtml_branch_coverage=1 00:02:30.974 --rc 
genhtml_function_coverage=1 00:02:30.974 --rc genhtml_legend=1 00:02:30.974 --rc geninfo_all_blocks=1 00:02:30.974 --rc geninfo_unexecuted_blocks=1 00:02:30.974 00:02:30.974 ' 00:02:30.974 14:16:11 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:02:30.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:30.974 --rc genhtml_branch_coverage=1 00:02:30.974 --rc genhtml_function_coverage=1 00:02:30.974 --rc genhtml_legend=1 00:02:30.974 --rc geninfo_all_blocks=1 00:02:30.974 --rc geninfo_unexecuted_blocks=1 00:02:30.974 00:02:30.974 ' 00:02:30.974 14:16:11 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:30.974 14:16:11 -- nvmf/common.sh@7 -- # uname -s 00:02:30.974 14:16:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:30.974 14:16:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:30.974 14:16:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:30.974 14:16:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:30.974 14:16:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:30.974 14:16:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:30.974 14:16:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:30.974 14:16:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:30.974 14:16:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:30.974 14:16:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:30.974 14:16:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:02:30.974 14:16:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:02:30.974 14:16:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:30.974 14:16:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:30.974 14:16:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:30.974 14:16:11 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:30.974 14:16:11 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:30.974 14:16:11 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:30.974 14:16:11 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:30.974 14:16:11 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:30.974 14:16:11 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:30.974 14:16:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:30.974 14:16:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:30.974 14:16:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:30.974 14:16:11 -- paths/export.sh@5 -- # export PATH 00:02:30.975 14:16:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:30.975 14:16:11 -- nvmf/common.sh@51 -- # : 0 00:02:30.975 14:16:11 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:30.975 14:16:11 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:02:30.975 14:16:11 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:30.975 14:16:11 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:30.975 14:16:11 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:30.975 14:16:11 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:30.975 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:30.975 14:16:11 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:30.975 14:16:11 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:30.975 14:16:11 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:30.975 14:16:11 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:30.975 14:16:11 -- spdk/autotest.sh@32 -- # uname -s 00:02:30.975 14:16:11 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:30.975 14:16:11 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:30.975 14:16:11 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:30.975 14:16:11 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:30.975 14:16:11 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:30.975 14:16:11 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:30.975 14:16:11 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:30.975 14:16:11 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:30.975 14:16:11 -- spdk/autotest.sh@48 -- # udevadm_pid=3126171 00:02:30.975 14:16:11 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:30.975 14:16:11 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:30.975 14:16:11 -- pm/common@17 -- # local monitor 00:02:30.975 14:16:11 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:30.975 14:16:11 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:02:30.975 14:16:11 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:30.975 14:16:11 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:30.975 14:16:11 -- pm/common@21 -- # date +%s 00:02:30.975 14:16:11 -- pm/common@21 -- # date +%s 00:02:30.975 14:16:11 -- pm/common@25 -- # sleep 1 00:02:30.975 14:16:11 -- pm/common@21 -- # date +%s 00:02:30.975 14:16:11 -- pm/common@21 -- # date +%s 00:02:30.975 14:16:11 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728908171 00:02:30.975 14:16:11 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728908171 00:02:30.975 14:16:11 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728908171 00:02:30.975 14:16:11 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728908171 00:02:30.975 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728908171_collect-cpu-load.pm.log 00:02:30.975 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728908171_collect-vmstat.pm.log 00:02:30.975 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728908171_collect-cpu-temp.pm.log 00:02:31.236 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728908171_collect-bmc-pm.bmc.pm.log 00:02:32.179 
14:16:12 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:32.179 14:16:12 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:32.179 14:16:12 -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:32.179 14:16:12 -- common/autotest_common.sh@10 -- # set +x 00:02:32.179 14:16:12 -- spdk/autotest.sh@59 -- # create_test_list 00:02:32.179 14:16:12 -- common/autotest_common.sh@748 -- # xtrace_disable 00:02:32.179 14:16:12 -- common/autotest_common.sh@10 -- # set +x 00:02:32.179 14:16:12 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:32.179 14:16:12 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:32.179 14:16:12 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:32.179 14:16:12 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:32.179 14:16:12 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:32.179 14:16:12 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:32.179 14:16:12 -- common/autotest_common.sh@1455 -- # uname 00:02:32.179 14:16:12 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:32.179 14:16:12 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:32.179 14:16:12 -- common/autotest_common.sh@1475 -- # uname 00:02:32.179 14:16:12 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:32.179 14:16:12 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:32.179 14:16:12 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:32.179 lcov: LCOV version 1.15 00:02:32.180 14:16:12 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:54.145 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:54.145 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:02.281 14:16:42 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:02.281 14:16:42 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:02.281 14:16:42 -- common/autotest_common.sh@10 -- # set +x 00:03:02.281 14:16:42 -- spdk/autotest.sh@78 -- # rm -f 00:03:02.281 14:16:42 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:05.581 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:03:05.581 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:03:05.581 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:03:05.581 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:03:05.581 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:03:05.581 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:03:05.581 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:03:05.581 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:03:05.581 0000:65:00.0 (144d a80a): Already using the nvme driver 00:03:05.581 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:03:05.841 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:03:05.842 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:03:05.842 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:03:05.842 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:03:05.842 
0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:03:05.842 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:03:05.842 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:03:06.101 14:16:46 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:06.101 14:16:46 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:06.101 14:16:46 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:06.101 14:16:46 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:06.101 14:16:46 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:06.101 14:16:46 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:06.101 14:16:46 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:06.101 14:16:46 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:06.101 14:16:46 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:06.101 14:16:46 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:06.101 14:16:46 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:06.101 14:16:46 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:06.101 14:16:46 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:06.101 14:16:46 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:06.101 14:16:46 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:06.101 No valid GPT data, bailing 00:03:06.101 14:16:46 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:06.101 14:16:46 -- scripts/common.sh@394 -- # pt= 00:03:06.101 14:16:46 -- scripts/common.sh@395 -- # return 1 00:03:06.101 14:16:46 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:06.101 1+0 records in 00:03:06.101 1+0 records out 00:03:06.101 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00416618 s, 252 MB/s 00:03:06.101 14:16:46 -- spdk/autotest.sh@105 -- # sync 00:03:06.101 14:16:46 -- 
spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:06.101 14:16:46 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:06.101 14:16:46 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:16.092 14:16:55 -- spdk/autotest.sh@111 -- # uname -s 00:03:16.093 14:16:55 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:16.093 14:16:55 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:16.093 14:16:55 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:18.005 Hugepages 00:03:18.005 node hugesize free / total 00:03:18.005 node0 1048576kB 0 / 0 00:03:18.005 node0 2048kB 0 / 0 00:03:18.005 node1 1048576kB 0 / 0 00:03:18.005 node1 2048kB 0 / 0 00:03:18.005 00:03:18.005 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:18.005 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:03:18.005 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:03:18.005 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:03:18.005 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:03:18.005 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:03:18.005 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:03:18.005 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:03:18.005 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:03:18.265 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:03:18.265 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:03:18.265 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:03:18.265 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:03:18.265 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:03:18.265 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:03:18.265 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:03:18.265 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:03:18.265 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:03:18.265 14:16:58 -- spdk/autotest.sh@117 -- # uname -s 00:03:18.265 14:16:58 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:18.265 14:16:58 -- spdk/autotest.sh@119 -- # 
nvme_namespace_revert 00:03:18.265 14:16:58 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:21.570 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:21.571 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:21.571 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:21.571 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:21.571 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:21.571 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:21.571 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:21.571 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:21.571 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:21.571 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:21.571 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:21.571 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:21.571 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:21.571 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:21.571 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:21.571 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:23.481 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:23.741 14:17:04 -- common/autotest_common.sh@1515 -- # sleep 1 00:03:24.681 14:17:05 -- common/autotest_common.sh@1516 -- # bdfs=() 00:03:24.681 14:17:05 -- common/autotest_common.sh@1516 -- # local bdfs 00:03:24.681 14:17:05 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:03:24.681 14:17:05 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:03:24.681 14:17:05 -- common/autotest_common.sh@1496 -- # bdfs=() 00:03:24.681 14:17:05 -- common/autotest_common.sh@1496 -- # local bdfs 00:03:24.681 14:17:05 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:24.681 14:17:05 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:24.681 14:17:05 -- common/autotest_common.sh@1497 -- # jq -r 
'.config[].params.traddr' 00:03:24.681 14:17:05 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:03:24.681 14:17:05 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:03:24.681 14:17:05 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:27.981 Waiting for block devices as requested 00:03:27.981 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:03:28.240 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:03:28.240 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:03:28.240 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:03:28.500 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:03:28.500 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:03:28.500 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:03:28.760 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:03:28.760 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:03:29.019 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:03:29.019 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:03:29.019 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:03:29.019 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:03:29.279 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:03:29.279 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:03:29.279 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:03:29.539 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:03:29.799 14:17:10 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:03:29.799 14:17:10 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:03:29.799 14:17:10 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:03:29.799 14:17:10 -- common/autotest_common.sh@1485 -- # grep 0000:65:00.0/nvme/nvme 00:03:29.799 14:17:10 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:03:29.799 14:17:10 -- common/autotest_common.sh@1486 -- # [[ -z 
/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:03:29.799 14:17:10 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:03:29.799 14:17:10 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:03:29.799 14:17:10 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:03:29.799 14:17:10 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:03:29.799 14:17:10 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:03:29.799 14:17:10 -- common/autotest_common.sh@1529 -- # grep oacs 00:03:29.799 14:17:10 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:03:29.799 14:17:10 -- common/autotest_common.sh@1529 -- # oacs=' 0x5f' 00:03:29.799 14:17:10 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:03:29.799 14:17:10 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:03:29.799 14:17:10 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:03:29.799 14:17:10 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:03:29.799 14:17:10 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:03:29.799 14:17:10 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:03:29.799 14:17:10 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:03:29.799 14:17:10 -- common/autotest_common.sh@1541 -- # continue 00:03:29.799 14:17:10 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:29.799 14:17:10 -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:29.799 14:17:10 -- common/autotest_common.sh@10 -- # set +x 00:03:29.799 14:17:10 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:29.799 14:17:10 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:29.799 14:17:10 -- common/autotest_common.sh@10 -- # set +x 00:03:29.799 14:17:10 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:33.133 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:33.133 0000:80:01.7 (8086 0b00): 
ioatdma -> vfio-pci 00:03:33.133 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:33.133 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:33.133 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:33.133 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:33.133 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:33.133 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:33.393 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:33.393 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:33.393 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:33.393 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:33.393 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:33.393 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:33.393 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:33.393 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:33.393 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:33.653 14:17:14 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:33.653 14:17:14 -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:33.653 14:17:14 -- common/autotest_common.sh@10 -- # set +x 00:03:33.914 14:17:14 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:33.914 14:17:14 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:03:33.914 14:17:14 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:03:33.914 14:17:14 -- common/autotest_common.sh@1561 -- # bdfs=() 00:03:33.914 14:17:14 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:03:33.914 14:17:14 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:03:33.914 14:17:14 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:03:33.914 14:17:14 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:03:33.914 14:17:14 -- common/autotest_common.sh@1496 -- # bdfs=() 00:03:33.914 14:17:14 -- common/autotest_common.sh@1496 -- # local bdfs 00:03:33.914 14:17:14 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:03:33.914 14:17:14 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:33.914 14:17:14 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:03:33.914 14:17:14 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:03:33.914 14:17:14 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:03:33.914 14:17:14 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:03:33.914 14:17:14 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:03:33.914 14:17:14 -- common/autotest_common.sh@1564 -- # device=0xa80a 00:03:33.914 14:17:14 -- common/autotest_common.sh@1565 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:03:33.914 14:17:14 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:03:33.914 14:17:14 -- common/autotest_common.sh@1570 -- # return 0 00:03:33.914 14:17:14 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:03:33.914 14:17:14 -- common/autotest_common.sh@1578 -- # return 0 00:03:33.914 14:17:14 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:33.914 14:17:14 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:33.914 14:17:14 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:33.914 14:17:14 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:33.914 14:17:14 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:33.914 14:17:14 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:33.914 14:17:14 -- common/autotest_common.sh@10 -- # set +x 00:03:33.914 14:17:14 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:33.914 14:17:14 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:33.914 14:17:14 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:33.914 14:17:14 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:33.914 14:17:14 -- common/autotest_common.sh@10 -- # set +x 00:03:33.914 ************************************ 
00:03:33.914 START TEST env 00:03:33.914 ************************************ 00:03:33.914 14:17:14 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:34.176 * Looking for test storage... 00:03:34.176 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:34.176 14:17:14 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:34.176 14:17:14 env -- common/autotest_common.sh@1691 -- # lcov --version 00:03:34.176 14:17:14 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:34.176 14:17:14 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:34.176 14:17:14 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:34.176 14:17:14 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:34.176 14:17:14 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:34.176 14:17:14 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:34.176 14:17:14 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:34.176 14:17:14 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:34.176 14:17:14 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:34.176 14:17:14 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:34.176 14:17:14 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:34.176 14:17:14 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:34.176 14:17:14 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:34.176 14:17:14 env -- scripts/common.sh@344 -- # case "$op" in 00:03:34.176 14:17:14 env -- scripts/common.sh@345 -- # : 1 00:03:34.176 14:17:14 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:34.176 14:17:14 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:34.176 14:17:14 env -- scripts/common.sh@365 -- # decimal 1 00:03:34.176 14:17:14 env -- scripts/common.sh@353 -- # local d=1 00:03:34.176 14:17:14 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:34.176 14:17:14 env -- scripts/common.sh@355 -- # echo 1 00:03:34.176 14:17:14 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:34.176 14:17:14 env -- scripts/common.sh@366 -- # decimal 2 00:03:34.176 14:17:14 env -- scripts/common.sh@353 -- # local d=2 00:03:34.176 14:17:14 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:34.176 14:17:14 env -- scripts/common.sh@355 -- # echo 2 00:03:34.176 14:17:14 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:34.176 14:17:14 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:34.176 14:17:14 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:34.176 14:17:14 env -- scripts/common.sh@368 -- # return 0 00:03:34.176 14:17:14 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:34.176 14:17:14 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:34.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:34.176 --rc genhtml_branch_coverage=1 00:03:34.176 --rc genhtml_function_coverage=1 00:03:34.176 --rc genhtml_legend=1 00:03:34.176 --rc geninfo_all_blocks=1 00:03:34.176 --rc geninfo_unexecuted_blocks=1 00:03:34.176 00:03:34.176 ' 00:03:34.176 14:17:14 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:34.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:34.176 --rc genhtml_branch_coverage=1 00:03:34.176 --rc genhtml_function_coverage=1 00:03:34.176 --rc genhtml_legend=1 00:03:34.176 --rc geninfo_all_blocks=1 00:03:34.176 --rc geninfo_unexecuted_blocks=1 00:03:34.176 00:03:34.176 ' 00:03:34.176 14:17:14 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:34.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:03:34.176 --rc genhtml_branch_coverage=1 00:03:34.176 --rc genhtml_function_coverage=1 00:03:34.176 --rc genhtml_legend=1 00:03:34.176 --rc geninfo_all_blocks=1 00:03:34.176 --rc geninfo_unexecuted_blocks=1 00:03:34.176 00:03:34.176 ' 00:03:34.176 14:17:14 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:34.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:34.176 --rc genhtml_branch_coverage=1 00:03:34.176 --rc genhtml_function_coverage=1 00:03:34.176 --rc genhtml_legend=1 00:03:34.176 --rc geninfo_all_blocks=1 00:03:34.176 --rc geninfo_unexecuted_blocks=1 00:03:34.176 00:03:34.176 ' 00:03:34.176 14:17:14 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:34.176 14:17:14 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:34.176 14:17:14 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:34.176 14:17:14 env -- common/autotest_common.sh@10 -- # set +x 00:03:34.176 ************************************ 00:03:34.176 START TEST env_memory 00:03:34.176 ************************************ 00:03:34.176 14:17:14 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:34.176 00:03:34.176 00:03:34.176 CUnit - A unit testing framework for C - Version 2.1-3 00:03:34.176 http://cunit.sourceforge.net/ 00:03:34.176 00:03:34.176 00:03:34.176 Suite: memory 00:03:34.176 Test: alloc and free memory map ...[2024-10-14 14:17:14.856430] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:34.176 passed 00:03:34.176 Test: mem map translation ...[2024-10-14 14:17:14.882068] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:34.176 [2024-10-14 
14:17:14.882097] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:34.176 [2024-10-14 14:17:14.882144] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:34.176 [2024-10-14 14:17:14.882152] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:34.437 passed 00:03:34.437 Test: mem map registration ...[2024-10-14 14:17:14.937459] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:34.437 [2024-10-14 14:17:14.937481] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:34.437 passed 00:03:34.437 Test: mem map adjacent registrations ...passed 00:03:34.437 00:03:34.437 Run Summary: Type Total Ran Passed Failed Inactive 00:03:34.437 suites 1 1 n/a 0 0 00:03:34.437 tests 4 4 4 0 0 00:03:34.437 asserts 152 152 152 0 n/a 00:03:34.437 00:03:34.437 Elapsed time = 0.193 seconds 00:03:34.437 00:03:34.437 real 0m0.208s 00:03:34.437 user 0m0.200s 00:03:34.437 sys 0m0.007s 00:03:34.437 14:17:15 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:34.438 14:17:15 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:34.438 ************************************ 00:03:34.438 END TEST env_memory 00:03:34.438 ************************************ 00:03:34.438 14:17:15 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:34.438 14:17:15 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 
']' 00:03:34.438 14:17:15 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:34.438 14:17:15 env -- common/autotest_common.sh@10 -- # set +x 00:03:34.438 ************************************ 00:03:34.438 START TEST env_vtophys 00:03:34.438 ************************************ 00:03:34.438 14:17:15 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:34.438 EAL: lib.eal log level changed from notice to debug 00:03:34.438 EAL: Detected lcore 0 as core 0 on socket 0 00:03:34.438 EAL: Detected lcore 1 as core 1 on socket 0 00:03:34.438 EAL: Detected lcore 2 as core 2 on socket 0 00:03:34.438 EAL: Detected lcore 3 as core 3 on socket 0 00:03:34.438 EAL: Detected lcore 4 as core 4 on socket 0 00:03:34.438 EAL: Detected lcore 5 as core 5 on socket 0 00:03:34.438 EAL: Detected lcore 6 as core 6 on socket 0 00:03:34.438 EAL: Detected lcore 7 as core 7 on socket 0 00:03:34.438 EAL: Detected lcore 8 as core 8 on socket 0 00:03:34.438 EAL: Detected lcore 9 as core 9 on socket 0 00:03:34.438 EAL: Detected lcore 10 as core 10 on socket 0 00:03:34.438 EAL: Detected lcore 11 as core 11 on socket 0 00:03:34.438 EAL: Detected lcore 12 as core 12 on socket 0 00:03:34.438 EAL: Detected lcore 13 as core 13 on socket 0 00:03:34.438 EAL: Detected lcore 14 as core 14 on socket 0 00:03:34.438 EAL: Detected lcore 15 as core 15 on socket 0 00:03:34.438 EAL: Detected lcore 16 as core 16 on socket 0 00:03:34.438 EAL: Detected lcore 17 as core 17 on socket 0 00:03:34.438 EAL: Detected lcore 18 as core 18 on socket 0 00:03:34.438 EAL: Detected lcore 19 as core 19 on socket 0 00:03:34.438 EAL: Detected lcore 20 as core 20 on socket 0 00:03:34.438 EAL: Detected lcore 21 as core 21 on socket 0 00:03:34.438 EAL: Detected lcore 22 as core 22 on socket 0 00:03:34.438 EAL: Detected lcore 23 as core 23 on socket 0 00:03:34.438 EAL: Detected lcore 24 as core 24 on socket 0 00:03:34.438 EAL: Detected lcore 25 
as core 25 on socket 0 00:03:34.438 EAL: Detected lcore 26 as core 26 on socket 0 00:03:34.438 EAL: Detected lcore 27 as core 27 on socket 0 00:03:34.438 EAL: Detected lcore 28 as core 28 on socket 0 00:03:34.438 EAL: Detected lcore 29 as core 29 on socket 0 00:03:34.438 EAL: Detected lcore 30 as core 30 on socket 0 00:03:34.438 EAL: Detected lcore 31 as core 31 on socket 0 00:03:34.438 EAL: Detected lcore 32 as core 32 on socket 0 00:03:34.438 EAL: Detected lcore 33 as core 33 on socket 0 00:03:34.438 EAL: Detected lcore 34 as core 34 on socket 0 00:03:34.438 EAL: Detected lcore 35 as core 35 on socket 0 00:03:34.438 EAL: Detected lcore 36 as core 0 on socket 1 00:03:34.438 EAL: Detected lcore 37 as core 1 on socket 1 00:03:34.438 EAL: Detected lcore 38 as core 2 on socket 1 00:03:34.438 EAL: Detected lcore 39 as core 3 on socket 1 00:03:34.438 EAL: Detected lcore 40 as core 4 on socket 1 00:03:34.438 EAL: Detected lcore 41 as core 5 on socket 1 00:03:34.438 EAL: Detected lcore 42 as core 6 on socket 1 00:03:34.438 EAL: Detected lcore 43 as core 7 on socket 1 00:03:34.438 EAL: Detected lcore 44 as core 8 on socket 1 00:03:34.438 EAL: Detected lcore 45 as core 9 on socket 1 00:03:34.438 EAL: Detected lcore 46 as core 10 on socket 1 00:03:34.438 EAL: Detected lcore 47 as core 11 on socket 1 00:03:34.438 EAL: Detected lcore 48 as core 12 on socket 1 00:03:34.438 EAL: Detected lcore 49 as core 13 on socket 1 00:03:34.438 EAL: Detected lcore 50 as core 14 on socket 1 00:03:34.438 EAL: Detected lcore 51 as core 15 on socket 1 00:03:34.438 EAL: Detected lcore 52 as core 16 on socket 1 00:03:34.438 EAL: Detected lcore 53 as core 17 on socket 1 00:03:34.438 EAL: Detected lcore 54 as core 18 on socket 1 00:03:34.438 EAL: Detected lcore 55 as core 19 on socket 1 00:03:34.438 EAL: Detected lcore 56 as core 20 on socket 1 00:03:34.438 EAL: Detected lcore 57 as core 21 on socket 1 00:03:34.438 EAL: Detected lcore 58 as core 22 on socket 1 00:03:34.438 EAL: Detected lcore 59 as 
core 23 on socket 1 00:03:34.438 EAL: Detected lcore 60 as core 24 on socket 1 00:03:34.438 EAL: Detected lcore 61 as core 25 on socket 1 00:03:34.438 EAL: Detected lcore 62 as core 26 on socket 1 00:03:34.438 EAL: Detected lcore 63 as core 27 on socket 1 00:03:34.438 EAL: Detected lcore 64 as core 28 on socket 1 00:03:34.438 EAL: Detected lcore 65 as core 29 on socket 1 00:03:34.438 EAL: Detected lcore 66 as core 30 on socket 1 00:03:34.438 EAL: Detected lcore 67 as core 31 on socket 1 00:03:34.438 EAL: Detected lcore 68 as core 32 on socket 1 00:03:34.438 EAL: Detected lcore 69 as core 33 on socket 1 00:03:34.438 EAL: Detected lcore 70 as core 34 on socket 1 00:03:34.438 EAL: Detected lcore 71 as core 35 on socket 1 00:03:34.438 EAL: Detected lcore 72 as core 0 on socket 0 00:03:34.438 EAL: Detected lcore 73 as core 1 on socket 0 00:03:34.438 EAL: Detected lcore 74 as core 2 on socket 0 00:03:34.438 EAL: Detected lcore 75 as core 3 on socket 0 00:03:34.438 EAL: Detected lcore 76 as core 4 on socket 0 00:03:34.438 EAL: Detected lcore 77 as core 5 on socket 0 00:03:34.438 EAL: Detected lcore 78 as core 6 on socket 0 00:03:34.438 EAL: Detected lcore 79 as core 7 on socket 0 00:03:34.438 EAL: Detected lcore 80 as core 8 on socket 0 00:03:34.438 EAL: Detected lcore 81 as core 9 on socket 0 00:03:34.438 EAL: Detected lcore 82 as core 10 on socket 0 00:03:34.438 EAL: Detected lcore 83 as core 11 on socket 0 00:03:34.438 EAL: Detected lcore 84 as core 12 on socket 0 00:03:34.438 EAL: Detected lcore 85 as core 13 on socket 0 00:03:34.438 EAL: Detected lcore 86 as core 14 on socket 0 00:03:34.438 EAL: Detected lcore 87 as core 15 on socket 0 00:03:34.438 EAL: Detected lcore 88 as core 16 on socket 0 00:03:34.438 EAL: Detected lcore 89 as core 17 on socket 0 00:03:34.438 EAL: Detected lcore 90 as core 18 on socket 0 00:03:34.438 EAL: Detected lcore 91 as core 19 on socket 0 00:03:34.438 EAL: Detected lcore 92 as core 20 on socket 0 00:03:34.438 EAL: Detected lcore 93 as 
core 21 on socket 0 00:03:34.438 EAL: Detected lcore 94 as core 22 on socket 0 00:03:34.438 EAL: Detected lcore 95 as core 23 on socket 0 00:03:34.438 EAL: Detected lcore 96 as core 24 on socket 0 00:03:34.438 EAL: Detected lcore 97 as core 25 on socket 0 00:03:34.438 EAL: Detected lcore 98 as core 26 on socket 0 00:03:34.438 EAL: Detected lcore 99 as core 27 on socket 0 00:03:34.438 EAL: Detected lcore 100 as core 28 on socket 0 00:03:34.438 EAL: Detected lcore 101 as core 29 on socket 0 00:03:34.438 EAL: Detected lcore 102 as core 30 on socket 0 00:03:34.438 EAL: Detected lcore 103 as core 31 on socket 0 00:03:34.438 EAL: Detected lcore 104 as core 32 on socket 0 00:03:34.438 EAL: Detected lcore 105 as core 33 on socket 0 00:03:34.438 EAL: Detected lcore 106 as core 34 on socket 0 00:03:34.438 EAL: Detected lcore 107 as core 35 on socket 0 00:03:34.438 EAL: Detected lcore 108 as core 0 on socket 1 00:03:34.438 EAL: Detected lcore 109 as core 1 on socket 1 00:03:34.438 EAL: Detected lcore 110 as core 2 on socket 1 00:03:34.438 EAL: Detected lcore 111 as core 3 on socket 1 00:03:34.438 EAL: Detected lcore 112 as core 4 on socket 1 00:03:34.438 EAL: Detected lcore 113 as core 5 on socket 1 00:03:34.438 EAL: Detected lcore 114 as core 6 on socket 1 00:03:34.438 EAL: Detected lcore 115 as core 7 on socket 1 00:03:34.438 EAL: Detected lcore 116 as core 8 on socket 1 00:03:34.438 EAL: Detected lcore 117 as core 9 on socket 1 00:03:34.438 EAL: Detected lcore 118 as core 10 on socket 1 00:03:34.438 EAL: Detected lcore 119 as core 11 on socket 1 00:03:34.438 EAL: Detected lcore 120 as core 12 on socket 1 00:03:34.438 EAL: Detected lcore 121 as core 13 on socket 1 00:03:34.438 EAL: Detected lcore 122 as core 14 on socket 1 00:03:34.438 EAL: Detected lcore 123 as core 15 on socket 1 00:03:34.438 EAL: Detected lcore 124 as core 16 on socket 1 00:03:34.438 EAL: Detected lcore 125 as core 17 on socket 1 00:03:34.438 EAL: Detected lcore 126 as core 18 on socket 1 00:03:34.438 
EAL: Detected lcore 127 as core 19 on socket 1 00:03:34.438 EAL: Skipped lcore 128 as core 20 on socket 1 00:03:34.438 EAL: Skipped lcore 129 as core 21 on socket 1 00:03:34.438 EAL: Skipped lcore 130 as core 22 on socket 1 00:03:34.438 EAL: Skipped lcore 131 as core 23 on socket 1 00:03:34.438 EAL: Skipped lcore 132 as core 24 on socket 1 00:03:34.438 EAL: Skipped lcore 133 as core 25 on socket 1 00:03:34.438 EAL: Skipped lcore 134 as core 26 on socket 1 00:03:34.438 EAL: Skipped lcore 135 as core 27 on socket 1 00:03:34.438 EAL: Skipped lcore 136 as core 28 on socket 1 00:03:34.438 EAL: Skipped lcore 137 as core 29 on socket 1 00:03:34.438 EAL: Skipped lcore 138 as core 30 on socket 1 00:03:34.438 EAL: Skipped lcore 139 as core 31 on socket 1 00:03:34.438 EAL: Skipped lcore 140 as core 32 on socket 1 00:03:34.438 EAL: Skipped lcore 141 as core 33 on socket 1 00:03:34.438 EAL: Skipped lcore 142 as core 34 on socket 1 00:03:34.438 EAL: Skipped lcore 143 as core 35 on socket 1 00:03:34.438 EAL: Maximum logical cores by configuration: 128 00:03:34.438 EAL: Detected CPU lcores: 128 00:03:34.438 EAL: Detected NUMA nodes: 2 00:03:34.438 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:34.439 EAL: Detected shared linkage of DPDK 00:03:34.439 EAL: No shared files mode enabled, IPC will be disabled 00:03:34.439 EAL: Bus pci wants IOVA as 'DC' 00:03:34.439 EAL: Buses did not request a specific IOVA mode. 00:03:34.439 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:34.439 EAL: Selected IOVA mode 'VA' 00:03:34.439 EAL: Probing VFIO support... 00:03:34.439 EAL: IOMMU type 1 (Type 1) is supported 00:03:34.439 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:34.439 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:34.439 EAL: VFIO support initialized 00:03:34.439 EAL: Ask a virtual area of 0x2e000 bytes 00:03:34.439 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:34.439 EAL: Setting up physically contiguous memory... 
00:03:34.439 EAL: Setting maximum number of open files to 524288 00:03:34.439 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:34.439 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:34.439 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:34.439 EAL: Ask a virtual area of 0x61000 bytes 00:03:34.439 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:34.439 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:34.439 EAL: Ask a virtual area of 0x400000000 bytes 00:03:34.439 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:34.439 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:34.439 EAL: Ask a virtual area of 0x61000 bytes 00:03:34.439 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:34.439 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:34.439 EAL: Ask a virtual area of 0x400000000 bytes 00:03:34.439 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:34.439 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:34.439 EAL: Ask a virtual area of 0x61000 bytes 00:03:34.439 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:34.439 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:34.439 EAL: Ask a virtual area of 0x400000000 bytes 00:03:34.439 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:34.439 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:34.439 EAL: Ask a virtual area of 0x61000 bytes 00:03:34.439 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:34.439 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:34.439 EAL: Ask a virtual area of 0x400000000 bytes 00:03:34.439 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:34.439 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:34.439 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:03:34.439 EAL: Ask a virtual area of 0x61000 bytes 00:03:34.439 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:34.439 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:34.439 EAL: Ask a virtual area of 0x400000000 bytes 00:03:34.439 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:34.439 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:34.439 EAL: Ask a virtual area of 0x61000 bytes 00:03:34.439 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:34.439 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:34.439 EAL: Ask a virtual area of 0x400000000 bytes 00:03:34.439 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:34.439 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:34.439 EAL: Ask a virtual area of 0x61000 bytes 00:03:34.439 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:34.439 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:34.439 EAL: Ask a virtual area of 0x400000000 bytes 00:03:34.439 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:34.439 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:34.439 EAL: Ask a virtual area of 0x61000 bytes 00:03:34.439 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:34.439 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:34.439 EAL: Ask a virtual area of 0x400000000 bytes 00:03:34.439 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:03:34.439 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:34.439 EAL: Hugepages will be freed exactly as allocated. 
00:03:34.439 EAL: No shared files mode enabled, IPC is disabled 00:03:34.439 EAL: No shared files mode enabled, IPC is disabled 00:03:34.439 EAL: TSC frequency is ~2400000 KHz 00:03:34.439 EAL: Main lcore 0 is ready (tid=7fb89986ba00;cpuset=[0]) 00:03:34.439 EAL: Trying to obtain current memory policy. 00:03:34.439 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:34.439 EAL: Restoring previous memory policy: 0 00:03:34.439 EAL: request: mp_malloc_sync 00:03:34.439 EAL: No shared files mode enabled, IPC is disabled 00:03:34.439 EAL: Heap on socket 0 was expanded by 2MB 00:03:34.439 EAL: No shared files mode enabled, IPC is disabled 00:03:34.439 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:34.439 EAL: Mem event callback 'spdk:(nil)' registered 00:03:34.700 00:03:34.700 00:03:34.700 CUnit - A unit testing framework for C - Version 2.1-3 00:03:34.700 http://cunit.sourceforge.net/ 00:03:34.700 00:03:34.700 00:03:34.700 Suite: components_suite 00:03:34.700 Test: vtophys_malloc_test ...passed 00:03:34.700 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:34.700 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:34.700 EAL: Restoring previous memory policy: 4 00:03:34.700 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.700 EAL: request: mp_malloc_sync 00:03:34.700 EAL: No shared files mode enabled, IPC is disabled 00:03:34.700 EAL: Heap on socket 0 was expanded by 4MB 00:03:34.700 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.700 EAL: request: mp_malloc_sync 00:03:34.700 EAL: No shared files mode enabled, IPC is disabled 00:03:34.700 EAL: Heap on socket 0 was shrunk by 4MB 00:03:34.700 EAL: Trying to obtain current memory policy. 
00:03:34.700 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:34.700 EAL: Restoring previous memory policy: 4
00:03:34.700 EAL: Calling mem event callback 'spdk:(nil)'
00:03:34.700 EAL: request: mp_malloc_sync
00:03:34.700 EAL: No shared files mode enabled, IPC is disabled
00:03:34.700 EAL: Heap on socket 0 was expanded by 6MB
00:03:34.700 EAL: Calling mem event callback 'spdk:(nil)'
00:03:34.700 EAL: request: mp_malloc_sync
00:03:34.700 EAL: No shared files mode enabled, IPC is disabled
00:03:34.700 EAL: Heap on socket 0 was shrunk by 6MB
00:03:34.700 EAL: Trying to obtain current memory policy.
00:03:34.700 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:34.700 EAL: Restoring previous memory policy: 4
00:03:34.700 EAL: Calling mem event callback 'spdk:(nil)'
00:03:34.700 EAL: request: mp_malloc_sync
00:03:34.700 EAL: No shared files mode enabled, IPC is disabled
00:03:34.700 EAL: Heap on socket 0 was expanded by 10MB
00:03:34.700 EAL: Calling mem event callback 'spdk:(nil)'
00:03:34.700 EAL: request: mp_malloc_sync
00:03:34.700 EAL: No shared files mode enabled, IPC is disabled
00:03:34.700 EAL: Heap on socket 0 was shrunk by 10MB
00:03:34.700 EAL: Trying to obtain current memory policy.
00:03:34.700 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:34.700 EAL: Restoring previous memory policy: 4
00:03:34.700 EAL: Calling mem event callback 'spdk:(nil)'
00:03:34.700 EAL: request: mp_malloc_sync
00:03:34.700 EAL: No shared files mode enabled, IPC is disabled
00:03:34.700 EAL: Heap on socket 0 was expanded by 18MB
00:03:34.700 EAL: Calling mem event callback 'spdk:(nil)'
00:03:34.700 EAL: request: mp_malloc_sync
00:03:34.700 EAL: No shared files mode enabled, IPC is disabled
00:03:34.700 EAL: Heap on socket 0 was shrunk by 18MB
00:03:34.700 EAL: Trying to obtain current memory policy.
00:03:34.700 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:34.700 EAL: Restoring previous memory policy: 4
00:03:34.700 EAL: Calling mem event callback 'spdk:(nil)'
00:03:34.700 EAL: request: mp_malloc_sync
00:03:34.700 EAL: No shared files mode enabled, IPC is disabled
00:03:34.700 EAL: Heap on socket 0 was expanded by 34MB
00:03:34.700 EAL: Calling mem event callback 'spdk:(nil)'
00:03:34.700 EAL: request: mp_malloc_sync
00:03:34.700 EAL: No shared files mode enabled, IPC is disabled
00:03:34.700 EAL: Heap on socket 0 was shrunk by 34MB
00:03:34.700 EAL: Trying to obtain current memory policy.
00:03:34.700 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:34.700 EAL: Restoring previous memory policy: 4
00:03:34.700 EAL: Calling mem event callback 'spdk:(nil)'
00:03:34.700 EAL: request: mp_malloc_sync
00:03:34.700 EAL: No shared files mode enabled, IPC is disabled
00:03:34.700 EAL: Heap on socket 0 was expanded by 66MB
00:03:34.700 EAL: Calling mem event callback 'spdk:(nil)'
00:03:34.700 EAL: request: mp_malloc_sync
00:03:34.700 EAL: No shared files mode enabled, IPC is disabled
00:03:34.700 EAL: Heap on socket 0 was shrunk by 66MB
00:03:34.700 EAL: Trying to obtain current memory policy.
00:03:34.700 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:34.700 EAL: Restoring previous memory policy: 4
00:03:34.700 EAL: Calling mem event callback 'spdk:(nil)'
00:03:34.700 EAL: request: mp_malloc_sync
00:03:34.700 EAL: No shared files mode enabled, IPC is disabled
00:03:34.700 EAL: Heap on socket 0 was expanded by 130MB
00:03:34.700 EAL: Calling mem event callback 'spdk:(nil)'
00:03:34.700 EAL: request: mp_malloc_sync
00:03:34.700 EAL: No shared files mode enabled, IPC is disabled
00:03:34.700 EAL: Heap on socket 0 was shrunk by 130MB
00:03:34.700 EAL: Trying to obtain current memory policy.
00:03:34.700 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:34.700 EAL: Restoring previous memory policy: 4
00:03:34.700 EAL: Calling mem event callback 'spdk:(nil)'
00:03:34.700 EAL: request: mp_malloc_sync
00:03:34.700 EAL: No shared files mode enabled, IPC is disabled
00:03:34.700 EAL: Heap on socket 0 was expanded by 258MB
00:03:34.700 EAL: Calling mem event callback 'spdk:(nil)'
00:03:34.700 EAL: request: mp_malloc_sync
00:03:34.700 EAL: No shared files mode enabled, IPC is disabled
00:03:34.700 EAL: Heap on socket 0 was shrunk by 258MB
00:03:34.701 EAL: Trying to obtain current memory policy.
00:03:34.701 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:34.701 EAL: Restoring previous memory policy: 4
00:03:34.701 EAL: Calling mem event callback 'spdk:(nil)'
00:03:34.701 EAL: request: mp_malloc_sync
00:03:34.701 EAL: No shared files mode enabled, IPC is disabled
00:03:34.701 EAL: Heap on socket 0 was expanded by 514MB
00:03:34.961 EAL: Calling mem event callback 'spdk:(nil)'
00:03:34.961 EAL: request: mp_malloc_sync
00:03:34.961 EAL: No shared files mode enabled, IPC is disabled
00:03:34.961 EAL: Heap on socket 0 was shrunk by 514MB
00:03:34.961 EAL: Trying to obtain current memory policy.
00:03:34.961 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:34.961 EAL: Restoring previous memory policy: 4
00:03:34.961 EAL: Calling mem event callback 'spdk:(nil)'
00:03:34.961 EAL: request: mp_malloc_sync
00:03:34.961 EAL: No shared files mode enabled, IPC is disabled
00:03:34.961 EAL: Heap on socket 0 was expanded by 1026MB
00:03:35.221 EAL: Calling mem event callback 'spdk:(nil)'
00:03:35.221 EAL: request: mp_malloc_sync
00:03:35.221 EAL: No shared files mode enabled, IPC is disabled
00:03:35.221 EAL: Heap on socket 0 was shrunk by 1026MB
00:03:35.221 passed
00:03:35.221
00:03:35.221 Run Summary: Type Total Ran Passed Failed Inactive
00:03:35.221 suites 1 1 n/a 0 0
00:03:35.221 tests 2 2 2 0 0
00:03:35.221 asserts 497 497 497 0 n/a
00:03:35.221
00:03:35.221 Elapsed time = 0.644 seconds
00:03:35.221 EAL: Calling mem event callback 'spdk:(nil)'
00:03:35.221 EAL: request: mp_malloc_sync
00:03:35.221 EAL: No shared files mode enabled, IPC is disabled
00:03:35.221 EAL: Heap on socket 0 was shrunk by 2MB
00:03:35.221 EAL: No shared files mode enabled, IPC is disabled
00:03:35.221 EAL: No shared files mode enabled, IPC is disabled
00:03:35.221 EAL: No shared files mode enabled, IPC is disabled
00:03:35.221
00:03:35.221 real 0m0.765s
00:03:35.221 user 0m0.407s
00:03:35.221 sys 0m0.328s
00:03:35.221 14:17:15 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:35.221 14:17:15 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:03:35.221 ************************************
00:03:35.221 END TEST env_vtophys
00:03:35.221 ************************************
00:03:35.221 14:17:15 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:03:35.221 14:17:15 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:35.221 14:17:15 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:35.221 14:17:15 env -- common/autotest_common.sh@10 -- # set +x
00:03:35.221 ************************************
00:03:35.221 START TEST env_pci
00:03:35.221 ************************************
00:03:35.221 14:17:15 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:03:35.221
00:03:35.221
00:03:35.221 CUnit - A unit testing framework for C - Version 2.1-3
00:03:35.221 http://cunit.sourceforge.net/
00:03:35.221
00:03:35.221
00:03:35.221 Suite: pci
00:03:35.221 Test: pci_hook ...[2024-10-14 14:17:15.946243] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3145267 has claimed it
00:03:35.482 EAL: Cannot find device (10000:00:01.0)
00:03:35.482 EAL: Failed to attach device on primary process
00:03:35.482 passed
00:03:35.482
00:03:35.482 Run Summary: Type Total Ran Passed Failed Inactive
00:03:35.482 suites 1 1 n/a 0 0
00:03:35.482 tests 1 1 1 0 0
00:03:35.482 asserts 25 25 25 0 n/a
00:03:35.482
00:03:35.482 Elapsed time = 0.032 seconds
00:03:35.482
00:03:35.482 real 0m0.054s
00:03:35.482 user 0m0.013s
00:03:35.482 sys 0m0.041s
00:03:35.482 14:17:15 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:35.482 14:17:15 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:03:35.482 ************************************
00:03:35.482 END TEST env_pci
00:03:35.482 ************************************
00:03:35.482 14:17:16 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:03:35.482 14:17:16 env -- env/env.sh@15 -- # uname
00:03:35.482 14:17:16 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:03:35.482 14:17:16 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:03:35.482 14:17:16 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:03:35.482 14:17:16 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:03:35.482 14:17:16 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:35.482 14:17:16 env -- common/autotest_common.sh@10 -- # set +x
00:03:35.482 ************************************
00:03:35.482 START TEST env_dpdk_post_init
00:03:35.482 ************************************
00:03:35.482 14:17:16 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:03:35.482 EAL: Detected CPU lcores: 128
00:03:35.482 EAL: Detected NUMA nodes: 2
00:03:35.482 EAL: Detected shared linkage of DPDK
00:03:35.482 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:03:35.482 EAL: Selected IOVA mode 'VA'
00:03:35.482 EAL: VFIO support initialized
00:03:35.482 TELEMETRY: No legacy callbacks, legacy socket not created
00:03:35.482 EAL: Using IOMMU type 1 (Type 1)
00:03:35.742 EAL: Ignore mapping IO port bar(1)
00:03:35.742 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0)
00:03:36.003 EAL: Ignore mapping IO port bar(1)
00:03:36.003 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0)
00:03:36.264 EAL: Ignore mapping IO port bar(1)
00:03:36.264 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0)
00:03:36.264 EAL: Ignore mapping IO port bar(1)
00:03:36.524 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0)
00:03:36.524 EAL: Ignore mapping IO port bar(1)
00:03:36.784 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0)
00:03:36.784 EAL: Ignore mapping IO port bar(1)
00:03:37.045 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0)
00:03:37.045 EAL: Ignore mapping IO port bar(1)
00:03:37.045 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0)
00:03:37.305 EAL: Ignore mapping IO port bar(1)
00:03:37.305 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0)
00:03:37.565 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0)
00:03:37.825 EAL: Ignore mapping IO port bar(1)
00:03:37.825 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1)
00:03:38.085 EAL: Ignore mapping IO port bar(1)
00:03:38.085 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1)
00:03:38.085 EAL: Ignore mapping IO port bar(1)
00:03:38.346 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1)
00:03:38.346 EAL: Ignore mapping IO port bar(1)
00:03:38.606 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1)
00:03:38.606 EAL: Ignore mapping IO port bar(1)
00:03:38.606 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1)
00:03:38.867 EAL: Ignore mapping IO port bar(1)
00:03:38.867 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1)
00:03:39.126 EAL: Ignore mapping IO port bar(1)
00:03:39.126 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1)
00:03:39.386 EAL: Ignore mapping IO port bar(1)
00:03:39.386 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1)
00:03:39.386 EAL: Releasing PCI mapped resource for 0000:65:00.0
00:03:39.386 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000
00:03:39.646 Starting DPDK initialization...
00:03:39.646 Starting SPDK post initialization...
00:03:39.646 SPDK NVMe probe
00:03:39.646 Attaching to 0000:65:00.0
00:03:39.646 Attached to 0000:65:00.0
00:03:39.646 Cleaning up...
00:03:41.555
00:03:41.555
00:03:41.555 real 0m5.728s
00:03:41.555 user 0m0.084s
00:03:41.555 sys 0m0.186s
00:03:41.555 14:17:21 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:41.555 14:17:21 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:03:41.555 ************************************
00:03:41.555 END TEST env_dpdk_post_init
00:03:41.555 ************************************
00:03:41.555 14:17:21 env -- env/env.sh@26 -- # uname
00:03:41.555 14:17:21 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:03:41.555 14:17:21 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:03:41.555 14:17:21 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:41.555 14:17:21 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:41.555 14:17:21 env -- common/autotest_common.sh@10 -- # set +x
00:03:41.555 ************************************
00:03:41.555 START TEST env_mem_callbacks
00:03:41.555 ************************************
00:03:41.555 14:17:21 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:03:41.555 EAL: Detected CPU lcores: 128
00:03:41.555 EAL: Detected NUMA nodes: 2
00:03:41.555 EAL: Detected shared linkage of DPDK
00:03:41.555 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:03:41.555 EAL: Selected IOVA mode 'VA'
00:03:41.555 EAL: VFIO support initialized
00:03:41.555 TELEMETRY: No legacy callbacks, legacy socket not created
00:03:41.555
00:03:41.555
00:03:41.555 CUnit - A unit testing framework for C - Version 2.1-3
00:03:41.555 http://cunit.sourceforge.net/
00:03:41.555
00:03:41.555
00:03:41.555 Suite: memory
00:03:41.555 Test: test ...
00:03:41.555 register 0x200000200000 2097152
00:03:41.555 malloc 3145728
00:03:41.555 register 0x200000400000 4194304
00:03:41.555 buf 0x200000500000 len 3145728 PASSED
00:03:41.555 malloc 64
00:03:41.555 buf 0x2000004fff40 len 64 PASSED
00:03:41.555 malloc 4194304
00:03:41.555 register 0x200000800000 6291456
00:03:41.555 buf 0x200000a00000 len 4194304 PASSED
00:03:41.555 free 0x200000500000 3145728
00:03:41.555 free 0x2000004fff40 64
00:03:41.555 unregister 0x200000400000 4194304 PASSED
00:03:41.555 free 0x200000a00000 4194304
00:03:41.555 unregister 0x200000800000 6291456 PASSED
00:03:41.555 malloc 8388608
00:03:41.555 register 0x200000400000 10485760
00:03:41.555 buf 0x200000600000 len 8388608 PASSED
00:03:41.555 free 0x200000600000 8388608
00:03:41.555 unregister 0x200000400000 10485760 PASSED
00:03:41.555 passed
00:03:41.555
00:03:41.555 Run Summary: Type Total Ran Passed Failed Inactive
00:03:41.555 suites 1 1 n/a 0 0
00:03:41.555 tests 1 1 1 0 0
00:03:41.555 asserts 15 15 15 0 n/a
00:03:41.555
00:03:41.555 Elapsed time = 0.004 seconds
00:03:41.555
00:03:41.555 real 0m0.060s
00:03:41.555 user 0m0.025s
00:03:41.555 sys 0m0.036s
00:03:41.555 14:17:21 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:41.555 14:17:21 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:03:41.555 ************************************
00:03:41.555 END TEST env_mem_callbacks
00:03:41.556 ************************************
00:03:41.556
00:03:41.556 real 0m7.423s
00:03:41.556 user 0m1.002s
00:03:41.556 sys 0m0.964s
00:03:41.556 14:17:21 env -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:41.556 14:17:21 env -- common/autotest_common.sh@10 -- # set +x
00:03:41.556 ************************************
00:03:41.556 END TEST env
00:03:41.556 ************************************
00:03:41.556 14:17:22 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:03:41.556 14:17:22 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:41.556 14:17:22 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:41.556 14:17:22 -- common/autotest_common.sh@10 -- # set +x
00:03:41.556 ************************************
00:03:41.556 START TEST rpc
00:03:41.556 ************************************
00:03:41.556 14:17:22 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:03:41.556 * Looking for test storage...
00:03:41.556 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:03:41.556 14:17:22 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:03:41.556 14:17:22 rpc -- common/autotest_common.sh@1691 -- # lcov --version
00:03:41.556 14:17:22 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:03:41.556 14:17:22 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:03:41.556 14:17:22 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:03:41.556 14:17:22 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:03:41.556 14:17:22 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:03:41.556 14:17:22 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:03:41.556 14:17:22 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:03:41.556 14:17:22 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:03:41.556 14:17:22 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:03:41.556 14:17:22 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:03:41.556 14:17:22 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:03:41.556 14:17:22 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:03:41.556 14:17:22 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:03:41.556 14:17:22 rpc -- scripts/common.sh@344 -- # case "$op" in
00:03:41.556 14:17:22 rpc -- scripts/common.sh@345 -- # : 1
00:03:41.556 14:17:22 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:03:41.556 14:17:22 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:41.556 14:17:22 rpc -- scripts/common.sh@365 -- # decimal 1
00:03:41.556 14:17:22 rpc -- scripts/common.sh@353 -- # local d=1
00:03:41.556 14:17:22 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:03:41.556 14:17:22 rpc -- scripts/common.sh@355 -- # echo 1
00:03:41.556 14:17:22 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:03:41.556 14:17:22 rpc -- scripts/common.sh@366 -- # decimal 2
00:03:41.556 14:17:22 rpc -- scripts/common.sh@353 -- # local d=2
00:03:41.556 14:17:22 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:03:41.556 14:17:22 rpc -- scripts/common.sh@355 -- # echo 2
00:03:41.556 14:17:22 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:03:41.556 14:17:22 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:03:41.556 14:17:22 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:03:41.556 14:17:22 rpc -- scripts/common.sh@368 -- # return 0
00:03:41.556 14:17:22 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:03:41.556 14:17:22 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:03:41.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:41.556 --rc genhtml_branch_coverage=1
00:03:41.556 --rc genhtml_function_coverage=1
00:03:41.556 --rc genhtml_legend=1
00:03:41.556 --rc geninfo_all_blocks=1
00:03:41.556 --rc geninfo_unexecuted_blocks=1
00:03:41.556
00:03:41.556 '
00:03:41.556 14:17:22 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:03:41.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:41.556 --rc genhtml_branch_coverage=1
00:03:41.556 --rc genhtml_function_coverage=1
00:03:41.556 --rc genhtml_legend=1
00:03:41.556 --rc geninfo_all_blocks=1
00:03:41.556 --rc geninfo_unexecuted_blocks=1
00:03:41.556
00:03:41.556 '
00:03:41.556 14:17:22 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:03:41.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:41.556 --rc genhtml_branch_coverage=1
00:03:41.556 --rc genhtml_function_coverage=1
00:03:41.556 --rc genhtml_legend=1
00:03:41.556 --rc geninfo_all_blocks=1
00:03:41.556 --rc geninfo_unexecuted_blocks=1
00:03:41.556
00:03:41.556 '
00:03:41.556 14:17:22 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:03:41.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:41.556 --rc genhtml_branch_coverage=1
00:03:41.556 --rc genhtml_function_coverage=1
00:03:41.556 --rc genhtml_legend=1
00:03:41.556 --rc geninfo_all_blocks=1
00:03:41.556 --rc geninfo_unexecuted_blocks=1
00:03:41.556
00:03:41.556 '
00:03:41.556 14:17:22 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3146706
00:03:41.556 14:17:22 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:03:41.556 14:17:22 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:03:41.556 14:17:22 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3146706
00:03:41.556 14:17:22 rpc -- common/autotest_common.sh@831 -- # '[' -z 3146706 ']'
00:03:41.556 14:17:22 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:03:41.556 14:17:22 rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:03:41.556 14:17:22 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:03:41.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:03:41.556 14:17:22 rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:03:41.556 14:17:22 rpc -- common/autotest_common.sh@10 -- # set +x
00:03:41.816 [2024-10-14 14:17:22.317711] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization...
00:03:41.816 [2024-10-14 14:17:22.317766] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3146706 ]
00:03:41.816 [2024-10-14 14:17:22.379146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:03:41.816 [2024-10-14 14:17:22.414289] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:03:41.816 [2024-10-14 14:17:22.414321] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3146706' to capture a snapshot of events at runtime.
00:03:41.816 [2024-10-14 14:17:22.414329] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:03:41.816 [2024-10-14 14:17:22.414337] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:03:41.816 [2024-10-14 14:17:22.414343] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3146706 for offline analysis/debug.
00:03:41.816 [2024-10-14 14:17:22.414933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:03:42.077 14:17:22 rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:03:42.077 14:17:22 rpc -- common/autotest_common.sh@864 -- # return 0
00:03:42.077 14:17:22 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:03:42.077 14:17:22 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:03:42.077 14:17:22 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:03:42.077 14:17:22 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:03:42.077 14:17:22 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:42.077 14:17:22 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:42.077 14:17:22 rpc -- common/autotest_common.sh@10 -- # set +x
00:03:42.077 ************************************
00:03:42.077 START TEST rpc_integrity
00:03:42.077 ************************************
00:03:42.077 14:17:22 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity
00:03:42.077 14:17:22 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:03:42.077 14:17:22 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:03:42.077 14:17:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:42.077 14:17:22 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:03:42.077 14:17:22 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:03:42.077 14:17:22 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:03:42.077 14:17:22 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:03:42.077 14:17:22 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:03:42.077 14:17:22 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:03:42.077 14:17:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:42.077 14:17:22 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:03:42.077 14:17:22 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:03:42.077 14:17:22 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:03:42.077 14:17:22 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:03:42.077 14:17:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:42.077 14:17:22 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:03:42.077 14:17:22 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:03:42.077 {
00:03:42.077 "name": "Malloc0",
00:03:42.077 "aliases": [
00:03:42.077 "1df1c62f-d2c1-4899-a9c8-fa0fb1b38b4d"
00:03:42.077 ],
00:03:42.077 "product_name": "Malloc disk",
00:03:42.077 "block_size": 512,
00:03:42.077 "num_blocks": 16384,
00:03:42.077 "uuid": "1df1c62f-d2c1-4899-a9c8-fa0fb1b38b4d",
00:03:42.077 "assigned_rate_limits": {
00:03:42.077 "rw_ios_per_sec": 0,
00:03:42.077 "rw_mbytes_per_sec": 0,
00:03:42.077 "r_mbytes_per_sec": 0,
00:03:42.077 "w_mbytes_per_sec": 0
00:03:42.077 },
00:03:42.077 "claimed": false,
00:03:42.077 "zoned": false,
00:03:42.077 "supported_io_types": {
00:03:42.077 "read": true,
00:03:42.077 "write": true,
00:03:42.077 "unmap": true,
00:03:42.077 "flush": true,
00:03:42.077 "reset": true,
00:03:42.077 "nvme_admin": false,
00:03:42.077 "nvme_io": false,
00:03:42.077 "nvme_io_md": false,
00:03:42.077 "write_zeroes": true,
00:03:42.077 "zcopy": true,
00:03:42.077 "get_zone_info": false,
00:03:42.077 "zone_management": false,
00:03:42.077 "zone_append": false,
00:03:42.077 "compare": false,
00:03:42.077 "compare_and_write": false,
00:03:42.077 "abort": true,
00:03:42.077 "seek_hole": false,
00:03:42.077 "seek_data": false,
00:03:42.077 "copy": true,
00:03:42.077 "nvme_iov_md": false
00:03:42.077 },
00:03:42.077 "memory_domains": [
00:03:42.077 {
00:03:42.077 "dma_device_id": "system",
00:03:42.077 "dma_device_type": 1
00:03:42.077 },
00:03:42.077 {
00:03:42.077 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:03:42.077 "dma_device_type": 2
00:03:42.077 }
00:03:42.077 ],
00:03:42.077 "driver_specific": {}
00:03:42.077 }
00:03:42.077 ]'
00:03:42.077 14:17:22 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:03:42.077 14:17:22 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:03:42.077 14:17:22 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:03:42.077 14:17:22 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:03:42.077 14:17:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:42.077 [2024-10-14 14:17:22.782735] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:03:42.077 [2024-10-14 14:17:22.782767] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:03:42.077 [2024-10-14 14:17:22.782780] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xde7c40
00:03:42.077 [2024-10-14 14:17:22.782787] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:03:42.077 [2024-10-14 14:17:22.784160] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:03:42.077 [2024-10-14 14:17:22.784182] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:03:42.077 Passthru0
00:03:42.077 14:17:22 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:03:42.077 14:17:22 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:03:42.077 14:17:22 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:03:42.077 14:17:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:42.339 14:17:22 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:03:42.339 14:17:22 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:03:42.339 {
00:03:42.339 "name": "Malloc0",
00:03:42.339 "aliases": [
00:03:42.339 "1df1c62f-d2c1-4899-a9c8-fa0fb1b38b4d"
00:03:42.339 ],
00:03:42.339 "product_name": "Malloc disk",
00:03:42.339 "block_size": 512,
00:03:42.339 "num_blocks": 16384,
00:03:42.339 "uuid": "1df1c62f-d2c1-4899-a9c8-fa0fb1b38b4d",
00:03:42.339 "assigned_rate_limits": {
00:03:42.339 "rw_ios_per_sec": 0,
00:03:42.339 "rw_mbytes_per_sec": 0,
00:03:42.339 "r_mbytes_per_sec": 0,
00:03:42.339 "w_mbytes_per_sec": 0
00:03:42.339 },
00:03:42.339 "claimed": true,
00:03:42.339 "claim_type": "exclusive_write",
00:03:42.339 "zoned": false,
00:03:42.339 "supported_io_types": {
00:03:42.339 "read": true,
00:03:42.339 "write": true,
00:03:42.339 "unmap": true,
00:03:42.339 "flush": true,
00:03:42.339 "reset": true,
00:03:42.339 "nvme_admin": false,
00:03:42.339 "nvme_io": false,
00:03:42.339 "nvme_io_md": false,
00:03:42.339 "write_zeroes": true,
00:03:42.339 "zcopy": true,
00:03:42.339 "get_zone_info": false,
00:03:42.339 "zone_management": false,
00:03:42.339 "zone_append": false,
00:03:42.339 "compare": false,
00:03:42.339 "compare_and_write": false,
00:03:42.339 "abort": true,
00:03:42.339 "seek_hole": false,
00:03:42.339 "seek_data": false,
00:03:42.339 "copy": true,
00:03:42.339 "nvme_iov_md": false
00:03:42.339 },
00:03:42.339 "memory_domains": [
00:03:42.339 {
00:03:42.339 "dma_device_id": "system",
00:03:42.339 "dma_device_type": 1
00:03:42.339 },
00:03:42.339 {
00:03:42.339 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:03:42.339 "dma_device_type": 2
00:03:42.339 }
00:03:42.339 ],
00:03:42.339 "driver_specific": {}
00:03:42.339 },
00:03:42.339 {
00:03:42.339 "name": "Passthru0",
00:03:42.339 "aliases": [
00:03:42.339 "6c702322-75b1-5c55-b9f8-c592e1e6d426"
00:03:42.339 ],
00:03:42.339 "product_name": "passthru",
00:03:42.339 "block_size": 512,
00:03:42.339 "num_blocks": 16384,
00:03:42.339 "uuid": "6c702322-75b1-5c55-b9f8-c592e1e6d426",
00:03:42.339 "assigned_rate_limits": {
00:03:42.339 "rw_ios_per_sec": 0,
00:03:42.339 "rw_mbytes_per_sec": 0,
00:03:42.339 "r_mbytes_per_sec": 0,
00:03:42.339 "w_mbytes_per_sec": 0
00:03:42.339 },
00:03:42.339 "claimed": false,
00:03:42.339 "zoned": false,
00:03:42.339 "supported_io_types": {
00:03:42.339 "read": true,
00:03:42.339 "write": true,
00:03:42.339 "unmap": true,
00:03:42.339 "flush": true,
00:03:42.339 "reset": true,
00:03:42.339 "nvme_admin": false,
00:03:42.339 "nvme_io": false,
00:03:42.339 "nvme_io_md": false,
00:03:42.339 "write_zeroes": true,
00:03:42.339 "zcopy": true,
00:03:42.339 "get_zone_info": false,
00:03:42.339 "zone_management": false,
00:03:42.339 "zone_append": false,
00:03:42.339 "compare": false,
00:03:42.339 "compare_and_write": false,
00:03:42.339 "abort": true,
00:03:42.339 "seek_hole": false,
00:03:42.339 "seek_data": false,
00:03:42.339 "copy": true,
00:03:42.339 "nvme_iov_md": false
00:03:42.339 },
00:03:42.339 "memory_domains": [
00:03:42.339 {
00:03:42.339 "dma_device_id": "system",
00:03:42.339 "dma_device_type": 1
00:03:42.339 },
00:03:42.339 {
00:03:42.339 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:03:42.339 "dma_device_type": 2
00:03:42.339 }
00:03:42.339 ],
00:03:42.339 "driver_specific": {
00:03:42.339 "passthru": {
00:03:42.339 "name": "Passthru0",
00:03:42.339 "base_bdev_name": "Malloc0"
00:03:42.339 }
00:03:42.339 }
00:03:42.339 }
00:03:42.339 ]'
00:03:42.339 14:17:22 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length
00:03:42.339 14:17:22 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:03:42.339 14:17:22 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:03:42.339 14:17:22 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:03:42.339 14:17:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:42.339 14:17:22 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:03:42.339 14:17:22 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:03:42.339 14:17:22 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:03:42.339 14:17:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:42.339 14:17:22 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:03:42.339 14:17:22 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:03:42.339 14:17:22 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:03:42.339 14:17:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:42.339 14:17:22 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:03:42.339 14:17:22 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:03:42.339 14:17:22 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length
00:03:42.339 14:17:22 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:03:42.339
00:03:42.339 real 0m0.295s
00:03:42.339 user 0m0.186s
00:03:42.339 sys 0m0.042s
00:03:42.339 14:17:22 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:42.339 14:17:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:42.339 ************************************
00:03:42.339 END TEST rpc_integrity
00:03:42.339 ************************************
00:03:42.339 14:17:22 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins
00:03:42.339 14:17:22 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:42.339 14:17:22 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:42.339 14:17:22 rpc -- common/autotest_common.sh@10 -- # set +x
00:03:42.339 ************************************
00:03:42.339 START TEST rpc_plugins
00:03:42.339 ************************************ 00:03:42.339 14:17:23 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:03:42.339 14:17:23 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:42.339 14:17:23 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:42.339 14:17:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:42.339 14:17:23 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:42.339 14:17:23 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:42.339 14:17:23 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:42.339 14:17:23 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:42.339 14:17:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:42.339 14:17:23 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:42.339 14:17:23 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:42.339 { 00:03:42.339 "name": "Malloc1", 00:03:42.339 "aliases": [ 00:03:42.339 "bbee3b82-2c95-4545-9ba1-a0f2e719f4c9" 00:03:42.339 ], 00:03:42.339 "product_name": "Malloc disk", 00:03:42.339 "block_size": 4096, 00:03:42.339 "num_blocks": 256, 00:03:42.339 "uuid": "bbee3b82-2c95-4545-9ba1-a0f2e719f4c9", 00:03:42.339 "assigned_rate_limits": { 00:03:42.339 "rw_ios_per_sec": 0, 00:03:42.339 "rw_mbytes_per_sec": 0, 00:03:42.339 "r_mbytes_per_sec": 0, 00:03:42.339 "w_mbytes_per_sec": 0 00:03:42.339 }, 00:03:42.339 "claimed": false, 00:03:42.339 "zoned": false, 00:03:42.339 "supported_io_types": { 00:03:42.339 "read": true, 00:03:42.339 "write": true, 00:03:42.339 "unmap": true, 00:03:42.339 "flush": true, 00:03:42.339 "reset": true, 00:03:42.339 "nvme_admin": false, 00:03:42.339 "nvme_io": false, 00:03:42.339 "nvme_io_md": false, 00:03:42.339 "write_zeroes": true, 00:03:42.339 "zcopy": true, 00:03:42.339 "get_zone_info": false, 00:03:42.339 "zone_management": false, 00:03:42.339 
"zone_append": false, 00:03:42.339 "compare": false, 00:03:42.339 "compare_and_write": false, 00:03:42.339 "abort": true, 00:03:42.339 "seek_hole": false, 00:03:42.339 "seek_data": false, 00:03:42.339 "copy": true, 00:03:42.339 "nvme_iov_md": false 00:03:42.339 }, 00:03:42.339 "memory_domains": [ 00:03:42.339 { 00:03:42.339 "dma_device_id": "system", 00:03:42.339 "dma_device_type": 1 00:03:42.339 }, 00:03:42.339 { 00:03:42.339 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:42.339 "dma_device_type": 2 00:03:42.339 } 00:03:42.339 ], 00:03:42.339 "driver_specific": {} 00:03:42.339 } 00:03:42.339 ]' 00:03:42.339 14:17:23 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:42.600 14:17:23 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:42.600 14:17:23 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:42.600 14:17:23 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:42.600 14:17:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:42.600 14:17:23 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:42.600 14:17:23 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:42.600 14:17:23 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:42.600 14:17:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:42.600 14:17:23 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:42.600 14:17:23 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:42.600 14:17:23 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:42.600 14:17:23 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:42.600 00:03:42.600 real 0m0.149s 00:03:42.600 user 0m0.096s 00:03:42.600 sys 0m0.018s 00:03:42.600 14:17:23 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:42.600 14:17:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:42.600 ************************************ 
00:03:42.600 END TEST rpc_plugins 00:03:42.600 ************************************ 00:03:42.600 14:17:23 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:42.600 14:17:23 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:42.600 14:17:23 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:42.600 14:17:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:42.600 ************************************ 00:03:42.600 START TEST rpc_trace_cmd_test 00:03:42.600 ************************************ 00:03:42.600 14:17:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:03:42.600 14:17:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:42.600 14:17:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:42.600 14:17:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:42.600 14:17:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:42.600 14:17:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:42.600 14:17:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:42.600 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3146706", 00:03:42.600 "tpoint_group_mask": "0x8", 00:03:42.600 "iscsi_conn": { 00:03:42.600 "mask": "0x2", 00:03:42.600 "tpoint_mask": "0x0" 00:03:42.600 }, 00:03:42.600 "scsi": { 00:03:42.600 "mask": "0x4", 00:03:42.600 "tpoint_mask": "0x0" 00:03:42.600 }, 00:03:42.600 "bdev": { 00:03:42.600 "mask": "0x8", 00:03:42.600 "tpoint_mask": "0xffffffffffffffff" 00:03:42.600 }, 00:03:42.600 "nvmf_rdma": { 00:03:42.600 "mask": "0x10", 00:03:42.600 "tpoint_mask": "0x0" 00:03:42.600 }, 00:03:42.600 "nvmf_tcp": { 00:03:42.600 "mask": "0x20", 00:03:42.600 "tpoint_mask": "0x0" 00:03:42.600 }, 00:03:42.600 "ftl": { 00:03:42.600 "mask": "0x40", 00:03:42.600 "tpoint_mask": "0x0" 00:03:42.600 }, 00:03:42.600 "blobfs": { 00:03:42.600 "mask": "0x80", 00:03:42.600 
"tpoint_mask": "0x0" 00:03:42.600 }, 00:03:42.600 "dsa": { 00:03:42.600 "mask": "0x200", 00:03:42.600 "tpoint_mask": "0x0" 00:03:42.600 }, 00:03:42.600 "thread": { 00:03:42.600 "mask": "0x400", 00:03:42.600 "tpoint_mask": "0x0" 00:03:42.600 }, 00:03:42.600 "nvme_pcie": { 00:03:42.600 "mask": "0x800", 00:03:42.600 "tpoint_mask": "0x0" 00:03:42.600 }, 00:03:42.600 "iaa": { 00:03:42.600 "mask": "0x1000", 00:03:42.600 "tpoint_mask": "0x0" 00:03:42.600 }, 00:03:42.600 "nvme_tcp": { 00:03:42.600 "mask": "0x2000", 00:03:42.600 "tpoint_mask": "0x0" 00:03:42.600 }, 00:03:42.600 "bdev_nvme": { 00:03:42.600 "mask": "0x4000", 00:03:42.600 "tpoint_mask": "0x0" 00:03:42.600 }, 00:03:42.600 "sock": { 00:03:42.600 "mask": "0x8000", 00:03:42.600 "tpoint_mask": "0x0" 00:03:42.600 }, 00:03:42.600 "blob": { 00:03:42.600 "mask": "0x10000", 00:03:42.600 "tpoint_mask": "0x0" 00:03:42.600 }, 00:03:42.600 "bdev_raid": { 00:03:42.600 "mask": "0x20000", 00:03:42.600 "tpoint_mask": "0x0" 00:03:42.600 }, 00:03:42.600 "scheduler": { 00:03:42.600 "mask": "0x40000", 00:03:42.600 "tpoint_mask": "0x0" 00:03:42.600 } 00:03:42.600 }' 00:03:42.600 14:17:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:42.600 14:17:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:03:42.600 14:17:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:42.860 14:17:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:42.860 14:17:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:42.860 14:17:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:42.860 14:17:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:42.860 14:17:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:42.860 14:17:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:42.860 14:17:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:03:42.860 00:03:42.860 real 0m0.224s 00:03:42.860 user 0m0.189s 00:03:42.860 sys 0m0.027s 00:03:42.861 14:17:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:42.861 14:17:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:42.861 ************************************ 00:03:42.861 END TEST rpc_trace_cmd_test 00:03:42.861 ************************************ 00:03:42.861 14:17:23 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:42.861 14:17:23 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:42.861 14:17:23 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:42.861 14:17:23 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:42.861 14:17:23 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:42.861 14:17:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:42.861 ************************************ 00:03:42.861 START TEST rpc_daemon_integrity 00:03:42.861 ************************************ 00:03:42.861 14:17:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:03:42.861 14:17:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:42.861 14:17:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:42.861 14:17:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:42.861 14:17:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:42.861 14:17:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:42.861 14:17:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:43.121 14:17:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:43.121 14:17:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:43.121 14:17:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:43.121 14:17:23 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:03:43.121 14:17:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:43.121 14:17:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:43.121 14:17:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:43.121 14:17:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:43.121 14:17:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:43.121 14:17:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:43.121 14:17:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:43.121 { 00:03:43.121 "name": "Malloc2", 00:03:43.121 "aliases": [ 00:03:43.121 "db45c86d-1ab3-41ab-a1eb-a49406ff398b" 00:03:43.121 ], 00:03:43.121 "product_name": "Malloc disk", 00:03:43.121 "block_size": 512, 00:03:43.121 "num_blocks": 16384, 00:03:43.121 "uuid": "db45c86d-1ab3-41ab-a1eb-a49406ff398b", 00:03:43.121 "assigned_rate_limits": { 00:03:43.121 "rw_ios_per_sec": 0, 00:03:43.121 "rw_mbytes_per_sec": 0, 00:03:43.121 "r_mbytes_per_sec": 0, 00:03:43.121 "w_mbytes_per_sec": 0 00:03:43.121 }, 00:03:43.121 "claimed": false, 00:03:43.121 "zoned": false, 00:03:43.121 "supported_io_types": { 00:03:43.121 "read": true, 00:03:43.121 "write": true, 00:03:43.121 "unmap": true, 00:03:43.121 "flush": true, 00:03:43.121 "reset": true, 00:03:43.121 "nvme_admin": false, 00:03:43.121 "nvme_io": false, 00:03:43.121 "nvme_io_md": false, 00:03:43.121 "write_zeroes": true, 00:03:43.121 "zcopy": true, 00:03:43.121 "get_zone_info": false, 00:03:43.121 "zone_management": false, 00:03:43.121 "zone_append": false, 00:03:43.121 "compare": false, 00:03:43.121 "compare_and_write": false, 00:03:43.121 "abort": true, 00:03:43.121 "seek_hole": false, 00:03:43.121 "seek_data": false, 00:03:43.121 "copy": true, 00:03:43.121 "nvme_iov_md": false 00:03:43.121 }, 00:03:43.121 "memory_domains": [ 00:03:43.121 { 
00:03:43.121 "dma_device_id": "system", 00:03:43.121 "dma_device_type": 1 00:03:43.121 }, 00:03:43.121 { 00:03:43.121 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:43.121 "dma_device_type": 2 00:03:43.122 } 00:03:43.122 ], 00:03:43.122 "driver_specific": {} 00:03:43.122 } 00:03:43.122 ]' 00:03:43.122 14:17:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:43.122 14:17:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:43.122 14:17:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:43.122 14:17:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:43.122 14:17:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:43.122 [2024-10-14 14:17:23.681155] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:43.122 [2024-10-14 14:17:23.681184] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:43.122 [2024-10-14 14:17:23.681196] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe78bd0 00:03:43.122 [2024-10-14 14:17:23.681202] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:43.122 [2024-10-14 14:17:23.682450] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:43.122 [2024-10-14 14:17:23.682474] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:43.122 Passthru0 00:03:43.122 14:17:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:43.122 14:17:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:43.122 14:17:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:43.122 14:17:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:43.122 14:17:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
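The `jq length` checks traced above validate the `bdev_get_bdevs` output after `bdev_passthru_create`. A minimal sketch (not part of the captured log) of that same integrity check in Python, using a JSON shape reduced from the bdev dump shown in the trace:

```python
import json

# After bdev_passthru_create -b Malloc2 -p Passthru0, bdev_get_bdevs should
# report both the claimed base malloc bdev and the passthru bdev on top of it.
# Field names mirror the log output above; values are trimmed for brevity.
bdevs_json = '''
[
  {"name": "Malloc2", "product_name": "Malloc disk",
   "block_size": 512, "num_blocks": 16384,
   "claimed": true, "claim_type": "exclusive_write"},
  {"name": "Passthru0", "product_name": "passthru",
   "block_size": 512, "num_blocks": 16384,
   "claimed": false,
   "driver_specific": {"passthru": {"name": "Passthru0",
                                    "base_bdev_name": "Malloc2"}}}
]
'''

bdevs = json.loads(bdevs_json)
assert len(bdevs) == 2  # the rpc.sh@21 check: '[' 2 == 2 ']'
assert bdevs[0]["claim_type"] == "exclusive_write"
assert bdevs[1]["driver_specific"]["passthru"]["base_bdev_name"] == "Malloc2"
print(len(bdevs))  # → 2
```

After `bdev_passthru_delete` and `bdev_malloc_delete`, the same query returns `[]` and the trailing `'[' 0 == 0 ']'` check passes.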
00:03:43.122 14:17:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:43.122 { 00:03:43.122 "name": "Malloc2", 00:03:43.122 "aliases": [ 00:03:43.122 "db45c86d-1ab3-41ab-a1eb-a49406ff398b" 00:03:43.122 ], 00:03:43.122 "product_name": "Malloc disk", 00:03:43.122 "block_size": 512, 00:03:43.122 "num_blocks": 16384, 00:03:43.122 "uuid": "db45c86d-1ab3-41ab-a1eb-a49406ff398b", 00:03:43.122 "assigned_rate_limits": { 00:03:43.122 "rw_ios_per_sec": 0, 00:03:43.122 "rw_mbytes_per_sec": 0, 00:03:43.122 "r_mbytes_per_sec": 0, 00:03:43.122 "w_mbytes_per_sec": 0 00:03:43.122 }, 00:03:43.122 "claimed": true, 00:03:43.122 "claim_type": "exclusive_write", 00:03:43.122 "zoned": false, 00:03:43.122 "supported_io_types": { 00:03:43.122 "read": true, 00:03:43.122 "write": true, 00:03:43.122 "unmap": true, 00:03:43.122 "flush": true, 00:03:43.122 "reset": true, 00:03:43.122 "nvme_admin": false, 00:03:43.122 "nvme_io": false, 00:03:43.122 "nvme_io_md": false, 00:03:43.122 "write_zeroes": true, 00:03:43.122 "zcopy": true, 00:03:43.122 "get_zone_info": false, 00:03:43.122 "zone_management": false, 00:03:43.122 "zone_append": false, 00:03:43.122 "compare": false, 00:03:43.122 "compare_and_write": false, 00:03:43.122 "abort": true, 00:03:43.122 "seek_hole": false, 00:03:43.122 "seek_data": false, 00:03:43.122 "copy": true, 00:03:43.122 "nvme_iov_md": false 00:03:43.122 }, 00:03:43.122 "memory_domains": [ 00:03:43.122 { 00:03:43.122 "dma_device_id": "system", 00:03:43.122 "dma_device_type": 1 00:03:43.122 }, 00:03:43.122 { 00:03:43.122 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:43.122 "dma_device_type": 2 00:03:43.122 } 00:03:43.122 ], 00:03:43.122 "driver_specific": {} 00:03:43.122 }, 00:03:43.122 { 00:03:43.122 "name": "Passthru0", 00:03:43.122 "aliases": [ 00:03:43.122 "9fef94e2-d4f6-5582-b6ba-9cdc40f3acac" 00:03:43.122 ], 00:03:43.122 "product_name": "passthru", 00:03:43.122 "block_size": 512, 00:03:43.122 "num_blocks": 16384, 00:03:43.122 "uuid": 
"9fef94e2-d4f6-5582-b6ba-9cdc40f3acac", 00:03:43.122 "assigned_rate_limits": { 00:03:43.122 "rw_ios_per_sec": 0, 00:03:43.122 "rw_mbytes_per_sec": 0, 00:03:43.122 "r_mbytes_per_sec": 0, 00:03:43.122 "w_mbytes_per_sec": 0 00:03:43.122 }, 00:03:43.122 "claimed": false, 00:03:43.122 "zoned": false, 00:03:43.122 "supported_io_types": { 00:03:43.122 "read": true, 00:03:43.122 "write": true, 00:03:43.122 "unmap": true, 00:03:43.122 "flush": true, 00:03:43.122 "reset": true, 00:03:43.122 "nvme_admin": false, 00:03:43.122 "nvme_io": false, 00:03:43.122 "nvme_io_md": false, 00:03:43.122 "write_zeroes": true, 00:03:43.122 "zcopy": true, 00:03:43.122 "get_zone_info": false, 00:03:43.122 "zone_management": false, 00:03:43.122 "zone_append": false, 00:03:43.122 "compare": false, 00:03:43.122 "compare_and_write": false, 00:03:43.122 "abort": true, 00:03:43.122 "seek_hole": false, 00:03:43.122 "seek_data": false, 00:03:43.122 "copy": true, 00:03:43.122 "nvme_iov_md": false 00:03:43.122 }, 00:03:43.122 "memory_domains": [ 00:03:43.122 { 00:03:43.122 "dma_device_id": "system", 00:03:43.122 "dma_device_type": 1 00:03:43.122 }, 00:03:43.122 { 00:03:43.122 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:43.122 "dma_device_type": 2 00:03:43.122 } 00:03:43.122 ], 00:03:43.122 "driver_specific": { 00:03:43.122 "passthru": { 00:03:43.122 "name": "Passthru0", 00:03:43.122 "base_bdev_name": "Malloc2" 00:03:43.122 } 00:03:43.122 } 00:03:43.122 } 00:03:43.122 ]' 00:03:43.122 14:17:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:43.122 14:17:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:43.122 14:17:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:43.122 14:17:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:43.122 14:17:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:43.122 14:17:23 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:43.122 14:17:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:43.122 14:17:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:43.122 14:17:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:43.122 14:17:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:43.122 14:17:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:43.122 14:17:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:43.122 14:17:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:43.122 14:17:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:43.122 14:17:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:43.122 14:17:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:43.122 14:17:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:43.122 00:03:43.122 real 0m0.305s 00:03:43.122 user 0m0.184s 00:03:43.122 sys 0m0.050s 00:03:43.122 14:17:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:43.122 14:17:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:43.122 ************************************ 00:03:43.122 END TEST rpc_daemon_integrity 00:03:43.122 ************************************ 00:03:43.382 14:17:23 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:43.382 14:17:23 rpc -- rpc/rpc.sh@84 -- # killprocess 3146706 00:03:43.382 14:17:23 rpc -- common/autotest_common.sh@950 -- # '[' -z 3146706 ']' 00:03:43.382 14:17:23 rpc -- common/autotest_common.sh@954 -- # kill -0 3146706 00:03:43.382 14:17:23 rpc -- common/autotest_common.sh@955 -- # uname 00:03:43.382 14:17:23 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:03:43.382 14:17:23 rpc -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3146706 00:03:43.382 14:17:23 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:03:43.382 14:17:23 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:03:43.382 14:17:23 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3146706' 00:03:43.382 killing process with pid 3146706 00:03:43.382 14:17:23 rpc -- common/autotest_common.sh@969 -- # kill 3146706 00:03:43.382 14:17:23 rpc -- common/autotest_common.sh@974 -- # wait 3146706 00:03:43.643 00:03:43.643 real 0m2.088s 00:03:43.643 user 0m2.730s 00:03:43.643 sys 0m0.714s 00:03:43.643 14:17:24 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:43.643 14:17:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:43.643 ************************************ 00:03:43.643 END TEST rpc 00:03:43.643 ************************************ 00:03:43.643 14:17:24 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:43.643 14:17:24 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:43.643 14:17:24 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:43.643 14:17:24 -- common/autotest_common.sh@10 -- # set +x 00:03:43.643 ************************************ 00:03:43.643 START TEST skip_rpc 00:03:43.643 ************************************ 00:03:43.643 14:17:24 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:43.643 * Looking for test storage... 
00:03:43.643 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:43.643 14:17:24 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:43.643 14:17:24 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:03:43.643 14:17:24 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:43.904 14:17:24 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:43.904 14:17:24 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:43.904 14:17:24 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:43.904 14:17:24 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:43.904 14:17:24 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:43.904 14:17:24 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:43.904 14:17:24 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:43.904 14:17:24 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:43.904 14:17:24 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:43.904 14:17:24 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:43.904 14:17:24 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:43.904 14:17:24 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:43.904 14:17:24 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:43.904 14:17:24 skip_rpc -- scripts/common.sh@345 -- # : 1 00:03:43.904 14:17:24 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:43.904 14:17:24 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:43.904 14:17:24 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:43.904 14:17:24 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:03:43.904 14:17:24 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:43.904 14:17:24 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:03:43.904 14:17:24 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:43.904 14:17:24 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:43.904 14:17:24 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:03:43.904 14:17:24 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:43.904 14:17:24 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:03:43.904 14:17:24 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:43.904 14:17:24 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:43.904 14:17:24 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:43.904 14:17:24 skip_rpc -- scripts/common.sh@368 -- # return 0 00:03:43.904 14:17:24 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:43.904 14:17:24 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:43.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:43.904 --rc genhtml_branch_coverage=1 00:03:43.904 --rc genhtml_function_coverage=1 00:03:43.904 --rc genhtml_legend=1 00:03:43.904 --rc geninfo_all_blocks=1 00:03:43.904 --rc geninfo_unexecuted_blocks=1 00:03:43.904 00:03:43.904 ' 00:03:43.904 14:17:24 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:43.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:43.904 --rc genhtml_branch_coverage=1 00:03:43.904 --rc genhtml_function_coverage=1 00:03:43.904 --rc genhtml_legend=1 00:03:43.904 --rc geninfo_all_blocks=1 00:03:43.904 --rc geninfo_unexecuted_blocks=1 00:03:43.904 00:03:43.904 ' 00:03:43.904 14:17:24 skip_rpc -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:03:43.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:43.904 --rc genhtml_branch_coverage=1 00:03:43.904 --rc genhtml_function_coverage=1 00:03:43.904 --rc genhtml_legend=1 00:03:43.904 --rc geninfo_all_blocks=1 00:03:43.904 --rc geninfo_unexecuted_blocks=1 00:03:43.904 00:03:43.904 ' 00:03:43.904 14:17:24 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:43.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:43.904 --rc genhtml_branch_coverage=1 00:03:43.904 --rc genhtml_function_coverage=1 00:03:43.904 --rc genhtml_legend=1 00:03:43.904 --rc geninfo_all_blocks=1 00:03:43.904 --rc geninfo_unexecuted_blocks=1 00:03:43.904 00:03:43.904 ' 00:03:43.904 14:17:24 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:43.904 14:17:24 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:43.904 14:17:24 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:43.904 14:17:24 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:43.904 14:17:24 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:43.904 14:17:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:43.904 ************************************ 00:03:43.904 START TEST skip_rpc 00:03:43.904 ************************************ 00:03:43.904 14:17:24 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:03:43.904 14:17:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3147355 00:03:43.904 14:17:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:43.904 14:17:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:43.904 14:17:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 
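The `cmp_versions` trace above (`lt 1.15 2`) splits each version string into numeric components and compares them position by position, padding the shorter one with zeros. A sketch of that comparison (not part of the log, simplified to dot-separated numeric components):

```python
def lt(ver1: str, ver2: str) -> bool:
    """Numeric component-wise version compare, as scripts/common.sh does:
    split on '.', compare each component as an integer, treat missing
    trailing components as 0."""
    a = [int(x) for x in ver1.split(".")]
    b = [int(x) for x in ver2.split(".")]
    n = max(len(a), len(b))
    a += [0] * (n - len(a))
    b += [0] * (n - len(b))
    return a < b

print(lt("1.15", "2"))  # → True: the check made for lcov 1.15 in the log
```

Note the comparison is numeric, not lexicographic: `1.2 < 1.15` holds here because `2 < 15`, which matches the shell's integer `-lt` tests.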
00:03:43.904 [2024-10-14 14:17:24.520264] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:03:43.904 [2024-10-14 14:17:24.520324] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3147355 ] 00:03:43.904 [2024-10-14 14:17:24.587680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:43.904 [2024-10-14 14:17:24.632385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:49.186 14:17:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:49.186 14:17:29 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:03:49.186 14:17:29 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:49.186 14:17:29 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:03:49.186 14:17:29 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:49.186 14:17:29 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:03:49.186 14:17:29 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:49.186 14:17:29 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:03:49.186 14:17:29 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:49.186 14:17:29 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:49.186 14:17:29 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:03:49.186 14:17:29 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:03:49.186 14:17:29 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:03:49.186 14:17:29 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:03:49.186 14:17:29 
skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:03:49.186 14:17:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:49.186 14:17:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3147355 00:03:49.186 14:17:29 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 3147355 ']' 00:03:49.186 14:17:29 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 3147355 00:03:49.186 14:17:29 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:03:49.186 14:17:29 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:03:49.186 14:17:29 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3147355 00:03:49.186 14:17:29 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:03:49.186 14:17:29 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:03:49.186 14:17:29 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3147355' 00:03:49.186 killing process with pid 3147355 00:03:49.186 14:17:29 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 3147355 00:03:49.186 14:17:29 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 3147355 00:03:49.186 00:03:49.186 real 0m5.286s 00:03:49.186 user 0m5.080s 00:03:49.186 sys 0m0.250s 00:03:49.186 14:17:29 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:49.186 14:17:29 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:49.186 ************************************ 00:03:49.186 END TEST skip_rpc 00:03:49.186 ************************************ 00:03:49.186 14:17:29 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:49.186 14:17:29 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:49.186 14:17:29 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:49.186 14:17:29 
skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:49.186 ************************************ 00:03:49.186 START TEST skip_rpc_with_json 00:03:49.186 ************************************ 00:03:49.186 14:17:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:03:49.186 14:17:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:49.187 14:17:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3148583 00:03:49.187 14:17:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:49.187 14:17:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3148583 00:03:49.187 14:17:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:49.187 14:17:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 3148583 ']' 00:03:49.187 14:17:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:49.187 14:17:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:03:49.187 14:17:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:49.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:49.187 14:17:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:03:49.187 14:17:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:49.187 [2024-10-14 14:17:29.881476] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 
00:03:49.187 [2024-10-14 14:17:29.881530] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3148583 ] 00:03:49.447 [2024-10-14 14:17:29.945413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:49.447 [2024-10-14 14:17:29.986581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:50.018 14:17:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:03:50.018 14:17:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:03:50.018 14:17:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:50.018 14:17:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:50.018 14:17:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:50.018 [2024-10-14 14:17:30.670999] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:50.018 request: 00:03:50.018 { 00:03:50.018 "trtype": "tcp", 00:03:50.018 "method": "nvmf_get_transports", 00:03:50.018 "req_id": 1 00:03:50.018 } 00:03:50.018 Got JSON-RPC error response 00:03:50.018 response: 00:03:50.018 { 00:03:50.018 "code": -19, 00:03:50.018 "message": "No such device" 00:03:50.018 } 00:03:50.018 14:17:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:03:50.018 14:17:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:50.018 14:17:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:50.018 14:17:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:50.018 [2024-10-14 14:17:30.683122] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:50.018 14:17:30 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:50.018 14:17:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:50.018 14:17:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:50.018 14:17:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:50.280 14:17:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:50.280 14:17:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:50.280 { 00:03:50.280 "subsystems": [ 00:03:50.280 { 00:03:50.280 "subsystem": "fsdev", 00:03:50.280 "config": [ 00:03:50.280 { 00:03:50.280 "method": "fsdev_set_opts", 00:03:50.280 "params": { 00:03:50.280 "fsdev_io_pool_size": 65535, 00:03:50.280 "fsdev_io_cache_size": 256 00:03:50.280 } 00:03:50.280 } 00:03:50.280 ] 00:03:50.280 }, 00:03:50.280 { 00:03:50.280 "subsystem": "vfio_user_target", 00:03:50.280 "config": null 00:03:50.280 }, 00:03:50.280 { 00:03:50.280 "subsystem": "keyring", 00:03:50.280 "config": [] 00:03:50.280 }, 00:03:50.280 { 00:03:50.280 "subsystem": "iobuf", 00:03:50.280 "config": [ 00:03:50.280 { 00:03:50.280 "method": "iobuf_set_options", 00:03:50.280 "params": { 00:03:50.280 "small_pool_count": 8192, 00:03:50.280 "large_pool_count": 1024, 00:03:50.280 "small_bufsize": 8192, 00:03:50.280 "large_bufsize": 135168 00:03:50.280 } 00:03:50.280 } 00:03:50.280 ] 00:03:50.280 }, 00:03:50.280 { 00:03:50.280 "subsystem": "sock", 00:03:50.280 "config": [ 00:03:50.280 { 00:03:50.280 "method": "sock_set_default_impl", 00:03:50.280 "params": { 00:03:50.280 "impl_name": "posix" 00:03:50.280 } 00:03:50.280 }, 00:03:50.280 { 00:03:50.280 "method": "sock_impl_set_options", 00:03:50.280 "params": { 00:03:50.280 "impl_name": "ssl", 00:03:50.280 "recv_buf_size": 4096, 00:03:50.280 "send_buf_size": 4096, 00:03:50.280 "enable_recv_pipe": true, 
00:03:50.280 "enable_quickack": false, 00:03:50.280 "enable_placement_id": 0, 00:03:50.280 "enable_zerocopy_send_server": true, 00:03:50.280 "enable_zerocopy_send_client": false, 00:03:50.280 "zerocopy_threshold": 0, 00:03:50.280 "tls_version": 0, 00:03:50.280 "enable_ktls": false 00:03:50.280 } 00:03:50.280 }, 00:03:50.280 { 00:03:50.280 "method": "sock_impl_set_options", 00:03:50.280 "params": { 00:03:50.280 "impl_name": "posix", 00:03:50.280 "recv_buf_size": 2097152, 00:03:50.280 "send_buf_size": 2097152, 00:03:50.280 "enable_recv_pipe": true, 00:03:50.280 "enable_quickack": false, 00:03:50.280 "enable_placement_id": 0, 00:03:50.280 "enable_zerocopy_send_server": true, 00:03:50.280 "enable_zerocopy_send_client": false, 00:03:50.280 "zerocopy_threshold": 0, 00:03:50.280 "tls_version": 0, 00:03:50.280 "enable_ktls": false 00:03:50.280 } 00:03:50.280 } 00:03:50.280 ] 00:03:50.280 }, 00:03:50.280 { 00:03:50.280 "subsystem": "vmd", 00:03:50.280 "config": [] 00:03:50.280 }, 00:03:50.280 { 00:03:50.280 "subsystem": "accel", 00:03:50.280 "config": [ 00:03:50.280 { 00:03:50.280 "method": "accel_set_options", 00:03:50.280 "params": { 00:03:50.280 "small_cache_size": 128, 00:03:50.280 "large_cache_size": 16, 00:03:50.280 "task_count": 2048, 00:03:50.280 "sequence_count": 2048, 00:03:50.280 "buf_count": 2048 00:03:50.280 } 00:03:50.280 } 00:03:50.280 ] 00:03:50.280 }, 00:03:50.280 { 00:03:50.280 "subsystem": "bdev", 00:03:50.280 "config": [ 00:03:50.280 { 00:03:50.280 "method": "bdev_set_options", 00:03:50.280 "params": { 00:03:50.280 "bdev_io_pool_size": 65535, 00:03:50.280 "bdev_io_cache_size": 256, 00:03:50.280 "bdev_auto_examine": true, 00:03:50.280 "iobuf_small_cache_size": 128, 00:03:50.280 "iobuf_large_cache_size": 16 00:03:50.280 } 00:03:50.280 }, 00:03:50.280 { 00:03:50.280 "method": "bdev_raid_set_options", 00:03:50.280 "params": { 00:03:50.280 "process_window_size_kb": 1024, 00:03:50.280 "process_max_bandwidth_mb_sec": 0 00:03:50.280 } 00:03:50.280 }, 
00:03:50.280 { 00:03:50.280 "method": "bdev_iscsi_set_options", 00:03:50.280 "params": { 00:03:50.280 "timeout_sec": 30 00:03:50.280 } 00:03:50.280 }, 00:03:50.280 { 00:03:50.280 "method": "bdev_nvme_set_options", 00:03:50.280 "params": { 00:03:50.280 "action_on_timeout": "none", 00:03:50.280 "timeout_us": 0, 00:03:50.280 "timeout_admin_us": 0, 00:03:50.280 "keep_alive_timeout_ms": 10000, 00:03:50.280 "arbitration_burst": 0, 00:03:50.280 "low_priority_weight": 0, 00:03:50.280 "medium_priority_weight": 0, 00:03:50.280 "high_priority_weight": 0, 00:03:50.280 "nvme_adminq_poll_period_us": 10000, 00:03:50.280 "nvme_ioq_poll_period_us": 0, 00:03:50.280 "io_queue_requests": 0, 00:03:50.280 "delay_cmd_submit": true, 00:03:50.280 "transport_retry_count": 4, 00:03:50.280 "bdev_retry_count": 3, 00:03:50.280 "transport_ack_timeout": 0, 00:03:50.280 "ctrlr_loss_timeout_sec": 0, 00:03:50.280 "reconnect_delay_sec": 0, 00:03:50.280 "fast_io_fail_timeout_sec": 0, 00:03:50.280 "disable_auto_failback": false, 00:03:50.280 "generate_uuids": false, 00:03:50.280 "transport_tos": 0, 00:03:50.280 "nvme_error_stat": false, 00:03:50.280 "rdma_srq_size": 0, 00:03:50.280 "io_path_stat": false, 00:03:50.280 "allow_accel_sequence": false, 00:03:50.280 "rdma_max_cq_size": 0, 00:03:50.280 "rdma_cm_event_timeout_ms": 0, 00:03:50.280 "dhchap_digests": [ 00:03:50.280 "sha256", 00:03:50.280 "sha384", 00:03:50.280 "sha512" 00:03:50.280 ], 00:03:50.280 "dhchap_dhgroups": [ 00:03:50.280 "null", 00:03:50.280 "ffdhe2048", 00:03:50.280 "ffdhe3072", 00:03:50.280 "ffdhe4096", 00:03:50.280 "ffdhe6144", 00:03:50.280 "ffdhe8192" 00:03:50.280 ] 00:03:50.280 } 00:03:50.280 }, 00:03:50.280 { 00:03:50.280 "method": "bdev_nvme_set_hotplug", 00:03:50.280 "params": { 00:03:50.280 "period_us": 100000, 00:03:50.280 "enable": false 00:03:50.280 } 00:03:50.280 }, 00:03:50.280 { 00:03:50.280 "method": "bdev_wait_for_examine" 00:03:50.280 } 00:03:50.280 ] 00:03:50.280 }, 00:03:50.280 { 00:03:50.280 "subsystem": "scsi", 
00:03:50.280 "config": null 00:03:50.280 }, 00:03:50.280 { 00:03:50.280 "subsystem": "scheduler", 00:03:50.280 "config": [ 00:03:50.280 { 00:03:50.280 "method": "framework_set_scheduler", 00:03:50.280 "params": { 00:03:50.280 "name": "static" 00:03:50.280 } 00:03:50.280 } 00:03:50.280 ] 00:03:50.280 }, 00:03:50.280 { 00:03:50.280 "subsystem": "vhost_scsi", 00:03:50.280 "config": [] 00:03:50.280 }, 00:03:50.280 { 00:03:50.280 "subsystem": "vhost_blk", 00:03:50.280 "config": [] 00:03:50.280 }, 00:03:50.280 { 00:03:50.280 "subsystem": "ublk", 00:03:50.280 "config": [] 00:03:50.280 }, 00:03:50.280 { 00:03:50.280 "subsystem": "nbd", 00:03:50.280 "config": [] 00:03:50.280 }, 00:03:50.280 { 00:03:50.280 "subsystem": "nvmf", 00:03:50.280 "config": [ 00:03:50.280 { 00:03:50.280 "method": "nvmf_set_config", 00:03:50.280 "params": { 00:03:50.280 "discovery_filter": "match_any", 00:03:50.280 "admin_cmd_passthru": { 00:03:50.280 "identify_ctrlr": false 00:03:50.280 }, 00:03:50.280 "dhchap_digests": [ 00:03:50.280 "sha256", 00:03:50.280 "sha384", 00:03:50.280 "sha512" 00:03:50.280 ], 00:03:50.280 "dhchap_dhgroups": [ 00:03:50.280 "null", 00:03:50.280 "ffdhe2048", 00:03:50.280 "ffdhe3072", 00:03:50.280 "ffdhe4096", 00:03:50.280 "ffdhe6144", 00:03:50.280 "ffdhe8192" 00:03:50.280 ] 00:03:50.280 } 00:03:50.280 }, 00:03:50.280 { 00:03:50.280 "method": "nvmf_set_max_subsystems", 00:03:50.280 "params": { 00:03:50.280 "max_subsystems": 1024 00:03:50.280 } 00:03:50.280 }, 00:03:50.280 { 00:03:50.280 "method": "nvmf_set_crdt", 00:03:50.280 "params": { 00:03:50.280 "crdt1": 0, 00:03:50.280 "crdt2": 0, 00:03:50.280 "crdt3": 0 00:03:50.280 } 00:03:50.280 }, 00:03:50.280 { 00:03:50.280 "method": "nvmf_create_transport", 00:03:50.280 "params": { 00:03:50.280 "trtype": "TCP", 00:03:50.280 "max_queue_depth": 128, 00:03:50.280 "max_io_qpairs_per_ctrlr": 127, 00:03:50.280 "in_capsule_data_size": 4096, 00:03:50.280 "max_io_size": 131072, 00:03:50.280 "io_unit_size": 131072, 00:03:50.280 
"max_aq_depth": 128, 00:03:50.280 "num_shared_buffers": 511, 00:03:50.280 "buf_cache_size": 4294967295, 00:03:50.280 "dif_insert_or_strip": false, 00:03:50.280 "zcopy": false, 00:03:50.280 "c2h_success": true, 00:03:50.280 "sock_priority": 0, 00:03:50.280 "abort_timeout_sec": 1, 00:03:50.280 "ack_timeout": 0, 00:03:50.280 "data_wr_pool_size": 0 00:03:50.280 } 00:03:50.280 } 00:03:50.280 ] 00:03:50.280 }, 00:03:50.280 { 00:03:50.280 "subsystem": "iscsi", 00:03:50.280 "config": [ 00:03:50.280 { 00:03:50.280 "method": "iscsi_set_options", 00:03:50.280 "params": { 00:03:50.280 "node_base": "iqn.2016-06.io.spdk", 00:03:50.280 "max_sessions": 128, 00:03:50.280 "max_connections_per_session": 2, 00:03:50.280 "max_queue_depth": 64, 00:03:50.280 "default_time2wait": 2, 00:03:50.280 "default_time2retain": 20, 00:03:50.280 "first_burst_length": 8192, 00:03:50.280 "immediate_data": true, 00:03:50.281 "allow_duplicated_isid": false, 00:03:50.281 "error_recovery_level": 0, 00:03:50.281 "nop_timeout": 60, 00:03:50.281 "nop_in_interval": 30, 00:03:50.281 "disable_chap": false, 00:03:50.281 "require_chap": false, 00:03:50.281 "mutual_chap": false, 00:03:50.281 "chap_group": 0, 00:03:50.281 "max_large_datain_per_connection": 64, 00:03:50.281 "max_r2t_per_connection": 4, 00:03:50.281 "pdu_pool_size": 36864, 00:03:50.281 "immediate_data_pool_size": 16384, 00:03:50.281 "data_out_pool_size": 2048 00:03:50.281 } 00:03:50.281 } 00:03:50.281 ] 00:03:50.281 } 00:03:50.281 ] 00:03:50.281 } 00:03:50.281 14:17:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:50.281 14:17:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3148583 00:03:50.281 14:17:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 3148583 ']' 00:03:50.281 14:17:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 3148583 00:03:50.281 14:17:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 
00:03:50.281 14:17:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:03:50.281 14:17:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3148583 00:03:50.281 14:17:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:03:50.281 14:17:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:03:50.281 14:17:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3148583' 00:03:50.281 killing process with pid 3148583 00:03:50.281 14:17:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 3148583 00:03:50.281 14:17:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 3148583 00:03:50.541 14:17:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3148735 00:03:50.541 14:17:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:03:50.541 14:17:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:55.830 14:17:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3148735 00:03:55.830 14:17:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 3148735 ']' 00:03:55.830 14:17:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 3148735 00:03:55.830 14:17:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:03:55.830 14:17:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:03:55.830 14:17:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3148735 00:03:55.830 14:17:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:03:55.830 14:17:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:03:55.830 14:17:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3148735' 00:03:55.830 killing process with pid 3148735 00:03:55.830 14:17:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 3148735 00:03:55.830 14:17:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 3148735 00:03:55.830 14:17:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:55.830 14:17:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:55.830 00:03:55.830 real 0m6.592s 00:03:55.830 user 0m6.502s 00:03:55.830 sys 0m0.553s 00:03:55.830 14:17:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:55.830 14:17:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:55.830 ************************************ 00:03:55.830 END TEST skip_rpc_with_json 00:03:55.830 ************************************ 00:03:55.830 14:17:36 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:03:55.830 14:17:36 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:55.830 14:17:36 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:55.830 14:17:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:55.830 ************************************ 00:03:55.830 START TEST skip_rpc_with_delay 00:03:55.830 ************************************ 00:03:55.830 14:17:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:03:55.830 14:17:36 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:55.830 14:17:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:03:55.830 14:17:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:55.830 14:17:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:55.830 14:17:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:55.830 14:17:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:55.830 14:17:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:55.830 14:17:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:55.830 14:17:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:55.830 14:17:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:55.830 14:17:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:55.830 14:17:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:56.092 [2024-10-14 14:17:36.563743] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:03:56.092 14:17:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:03:56.092 14:17:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:03:56.092 14:17:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:03:56.092 14:17:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:03:56.092 00:03:56.092 real 0m0.088s 00:03:56.092 user 0m0.055s 00:03:56.092 sys 0m0.032s 00:03:56.092 14:17:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:56.092 14:17:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:03:56.092 ************************************ 00:03:56.092 END TEST skip_rpc_with_delay 00:03:56.092 ************************************ 00:03:56.092 14:17:36 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:03:56.092 14:17:36 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:03:56.092 14:17:36 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:03:56.092 14:17:36 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:56.092 14:17:36 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:56.092 14:17:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:56.092 ************************************ 00:03:56.092 START TEST exit_on_failed_rpc_init 00:03:56.092 ************************************ 00:03:56.092 14:17:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:03:56.092 14:17:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3149997 00:03:56.092 14:17:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3149997 00:03:56.092 14:17:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 
00:03:56.092 14:17:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 3149997 ']' 00:03:56.092 14:17:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:56.092 14:17:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:03:56.092 14:17:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:56.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:56.092 14:17:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:03:56.092 14:17:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:56.092 [2024-10-14 14:17:36.734917] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:03:56.092 [2024-10-14 14:17:36.734977] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3149997 ] 00:03:56.092 [2024-10-14 14:17:36.800934] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:56.352 [2024-10-14 14:17:36.844383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:56.923 14:17:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:03:56.923 14:17:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:03:56.923 14:17:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:56.923 14:17:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:56.923 
14:17:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:03:56.923 14:17:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:56.923 14:17:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:56.923 14:17:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:56.923 14:17:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:56.923 14:17:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:56.923 14:17:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:56.923 14:17:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:56.923 14:17:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:56.923 14:17:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:56.923 14:17:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:56.923 [2024-10-14 14:17:37.550732] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 
00:03:56.923 [2024-10-14 14:17:37.550781] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3150012 ] 00:03:56.923 [2024-10-14 14:17:37.626975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:57.184 [2024-10-14 14:17:37.662996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:57.184 [2024-10-14 14:17:37.663048] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:03:57.184 [2024-10-14 14:17:37.663059] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:03:57.184 [2024-10-14 14:17:37.663071] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:03:57.184 14:17:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:03:57.184 14:17:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:03:57.184 14:17:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:03:57.184 14:17:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:03:57.184 14:17:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:03:57.184 14:17:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:03:57.184 14:17:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:03:57.184 14:17:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3149997 00:03:57.184 14:17:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 3149997 ']' 00:03:57.184 14:17:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 3149997 00:03:57.184 14:17:37 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:03:57.184 14:17:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:03:57.184 14:17:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3149997 00:03:57.184 14:17:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:03:57.184 14:17:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:03:57.184 14:17:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3149997' 00:03:57.184 killing process with pid 3149997 00:03:57.184 14:17:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 3149997 00:03:57.184 14:17:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 3149997 00:03:57.446 00:03:57.446 real 0m1.308s 00:03:57.446 user 0m1.519s 00:03:57.446 sys 0m0.369s 00:03:57.446 14:17:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:57.446 14:17:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:57.446 ************************************ 00:03:57.446 END TEST exit_on_failed_rpc_init 00:03:57.446 ************************************ 00:03:57.446 14:17:38 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:57.446 00:03:57.446 real 0m13.791s 00:03:57.446 user 0m13.396s 00:03:57.446 sys 0m1.511s 00:03:57.446 14:17:38 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:57.446 14:17:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:57.446 ************************************ 00:03:57.446 END TEST skip_rpc 00:03:57.446 ************************************ 00:03:57.446 14:17:38 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:57.446 14:17:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:57.446 14:17:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:57.446 14:17:38 -- common/autotest_common.sh@10 -- # set +x 00:03:57.446 ************************************ 00:03:57.446 START TEST rpc_client 00:03:57.446 ************************************ 00:03:57.446 14:17:38 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:57.708 * Looking for test storage... 00:03:57.708 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:03:57.708 14:17:38 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:57.708 14:17:38 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:03:57.708 14:17:38 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:57.708 14:17:38 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:57.708 14:17:38 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:57.708 14:17:38 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:57.708 14:17:38 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:57.708 14:17:38 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:03:57.708 14:17:38 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:03:57.708 14:17:38 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:03:57.708 14:17:38 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:03:57.708 14:17:38 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:03:57.708 14:17:38 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:03:57.708 14:17:38 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:03:57.708 14:17:38 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:57.708 14:17:38 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:03:57.708 14:17:38 rpc_client -- scripts/common.sh@345 -- # : 1 00:03:57.708 14:17:38 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:57.708 14:17:38 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:57.708 14:17:38 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:03:57.708 14:17:38 rpc_client -- scripts/common.sh@353 -- # local d=1 00:03:57.708 14:17:38 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:57.708 14:17:38 rpc_client -- scripts/common.sh@355 -- # echo 1 00:03:57.708 14:17:38 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:03:57.708 14:17:38 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:03:57.708 14:17:38 rpc_client -- scripts/common.sh@353 -- # local d=2 00:03:57.708 14:17:38 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:57.708 14:17:38 rpc_client -- scripts/common.sh@355 -- # echo 2 00:03:57.708 14:17:38 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:03:57.708 14:17:38 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:57.708 14:17:38 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:57.708 14:17:38 rpc_client -- scripts/common.sh@368 -- # return 0 00:03:57.708 14:17:38 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:57.708 14:17:38 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:57.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.708 --rc genhtml_branch_coverage=1 00:03:57.708 --rc genhtml_function_coverage=1 00:03:57.708 --rc genhtml_legend=1 00:03:57.708 --rc geninfo_all_blocks=1 00:03:57.708 --rc geninfo_unexecuted_blocks=1 00:03:57.708 00:03:57.708 ' 00:03:57.708 14:17:38 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:57.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.708 --rc genhtml_branch_coverage=1 
00:03:57.708 --rc genhtml_function_coverage=1 00:03:57.708 --rc genhtml_legend=1 00:03:57.708 --rc geninfo_all_blocks=1 00:03:57.708 --rc geninfo_unexecuted_blocks=1 00:03:57.708 00:03:57.708 ' 00:03:57.708 14:17:38 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:57.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.708 --rc genhtml_branch_coverage=1 00:03:57.708 --rc genhtml_function_coverage=1 00:03:57.708 --rc genhtml_legend=1 00:03:57.708 --rc geninfo_all_blocks=1 00:03:57.708 --rc geninfo_unexecuted_blocks=1 00:03:57.708 00:03:57.708 ' 00:03:57.708 14:17:38 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:57.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.708 --rc genhtml_branch_coverage=1 00:03:57.708 --rc genhtml_function_coverage=1 00:03:57.708 --rc genhtml_legend=1 00:03:57.708 --rc geninfo_all_blocks=1 00:03:57.708 --rc geninfo_unexecuted_blocks=1 00:03:57.708 00:03:57.708 ' 00:03:57.708 14:17:38 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:03:57.708 OK 00:03:57.708 14:17:38 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:03:57.708 00:03:57.708 real 0m0.228s 00:03:57.708 user 0m0.122s 00:03:57.708 sys 0m0.121s 00:03:57.708 14:17:38 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:57.708 14:17:38 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:03:57.708 ************************************ 00:03:57.708 END TEST rpc_client 00:03:57.708 ************************************ 00:03:57.708 14:17:38 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:57.708 14:17:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:57.708 14:17:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:57.708 14:17:38 -- common/autotest_common.sh@10 
-- # set +x 00:03:57.708 ************************************ 00:03:57.708 START TEST json_config 00:03:57.708 ************************************ 00:03:57.708 14:17:38 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:57.970 14:17:38 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:57.970 14:17:38 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:03:57.970 14:17:38 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:57.970 14:17:38 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:57.970 14:17:38 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:57.970 14:17:38 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:57.970 14:17:38 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:57.970 14:17:38 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:03:57.970 14:17:38 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:03:57.970 14:17:38 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:03:57.970 14:17:38 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:03:57.970 14:17:38 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:03:57.970 14:17:38 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:03:57.970 14:17:38 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:03:57.970 14:17:38 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:57.970 14:17:38 json_config -- scripts/common.sh@344 -- # case "$op" in 00:03:57.970 14:17:38 json_config -- scripts/common.sh@345 -- # : 1 00:03:57.970 14:17:38 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:57.970 14:17:38 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:57.970 14:17:38 json_config -- scripts/common.sh@365 -- # decimal 1 00:03:57.970 14:17:38 json_config -- scripts/common.sh@353 -- # local d=1 00:03:57.970 14:17:38 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:57.970 14:17:38 json_config -- scripts/common.sh@355 -- # echo 1 00:03:57.970 14:17:38 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:03:57.970 14:17:38 json_config -- scripts/common.sh@366 -- # decimal 2 00:03:57.970 14:17:38 json_config -- scripts/common.sh@353 -- # local d=2 00:03:57.970 14:17:38 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:57.970 14:17:38 json_config -- scripts/common.sh@355 -- # echo 2 00:03:57.970 14:17:38 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:03:57.970 14:17:38 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:57.970 14:17:38 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:57.970 14:17:38 json_config -- scripts/common.sh@368 -- # return 0 00:03:57.970 14:17:38 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:57.970 14:17:38 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:57.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.970 --rc genhtml_branch_coverage=1 00:03:57.970 --rc genhtml_function_coverage=1 00:03:57.970 --rc genhtml_legend=1 00:03:57.970 --rc geninfo_all_blocks=1 00:03:57.970 --rc geninfo_unexecuted_blocks=1 00:03:57.970 00:03:57.970 ' 00:03:57.970 14:17:38 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:57.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.970 --rc genhtml_branch_coverage=1 00:03:57.970 --rc genhtml_function_coverage=1 00:03:57.970 --rc genhtml_legend=1 00:03:57.970 --rc geninfo_all_blocks=1 00:03:57.970 --rc geninfo_unexecuted_blocks=1 00:03:57.970 00:03:57.970 ' 00:03:57.970 14:17:38 json_config -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:57.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.970 --rc genhtml_branch_coverage=1 00:03:57.970 --rc genhtml_function_coverage=1 00:03:57.970 --rc genhtml_legend=1 00:03:57.970 --rc geninfo_all_blocks=1 00:03:57.970 --rc geninfo_unexecuted_blocks=1 00:03:57.970 00:03:57.970 ' 00:03:57.971 14:17:38 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:57.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.971 --rc genhtml_branch_coverage=1 00:03:57.971 --rc genhtml_function_coverage=1 00:03:57.971 --rc genhtml_legend=1 00:03:57.971 --rc geninfo_all_blocks=1 00:03:57.971 --rc geninfo_unexecuted_blocks=1 00:03:57.971 00:03:57.971 ' 00:03:57.971 14:17:38 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:57.971 14:17:38 json_config -- nvmf/common.sh@7 -- # uname -s 00:03:57.971 14:17:38 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:57.971 14:17:38 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:57.971 14:17:38 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:57.971 14:17:38 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:57.971 14:17:38 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:57.971 14:17:38 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:57.971 14:17:38 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:57.971 14:17:38 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:57.971 14:17:38 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:57.971 14:17:38 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:57.971 14:17:38 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:03:57.971 14:17:38 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:03:57.971 14:17:38 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:57.971 14:17:38 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:57.971 14:17:38 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:57.971 14:17:38 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:57.971 14:17:38 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:57.971 14:17:38 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:03:57.971 14:17:38 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:57.971 14:17:38 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:57.971 14:17:38 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:57.971 14:17:38 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:57.971 14:17:38 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:57.971 14:17:38 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:57.971 14:17:38 json_config -- paths/export.sh@5 -- # export PATH 00:03:57.971 14:17:38 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:57.971 14:17:38 json_config -- nvmf/common.sh@51 -- # : 0 00:03:57.971 14:17:38 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:57.971 14:17:38 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:57.971 14:17:38 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:57.971 14:17:38 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:57.971 14:17:38 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:57.971 14:17:38 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:57.971 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:57.971 14:17:38 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:57.971 14:17:38 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:57.971 14:17:38 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:57.971 14:17:38 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:03:57.971 14:17:38 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:03:57.971 14:17:38 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:03:57.971 14:17:38 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:03:57.971 14:17:38 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:03:57.971 14:17:38 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:03:57.971 14:17:38 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:03:57.971 14:17:38 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:03:57.971 14:17:38 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:03:57.971 14:17:38 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:03:57.971 14:17:38 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:03:57.971 14:17:38 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:03:57.971 14:17:38 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:03:57.971 14:17:38 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:03:57.971 14:17:38 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:57.971 14:17:38 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:03:57.971 INFO: JSON configuration test init 00:03:57.971 14:17:38 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:03:57.971 14:17:38 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:03:57.971 14:17:38 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:57.971 14:17:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:57.971 14:17:38 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:03:57.971 14:17:38 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:57.971 14:17:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:57.971 14:17:38 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:03:57.971 14:17:38 json_config -- json_config/common.sh@9 -- # local app=target 00:03:57.971 14:17:38 json_config -- json_config/common.sh@10 -- # shift 00:03:57.971 14:17:38 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:57.971 14:17:38 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:57.971 14:17:38 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:57.971 14:17:38 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:57.971 14:17:38 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:57.971 14:17:38 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3150464 00:03:57.971 14:17:38 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:57.971 Waiting for target to run... 
00:03:57.971 14:17:38 json_config -- json_config/common.sh@25 -- # waitforlisten 3150464 /var/tmp/spdk_tgt.sock 00:03:57.971 14:17:38 json_config -- common/autotest_common.sh@831 -- # '[' -z 3150464 ']' 00:03:57.971 14:17:38 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:57.971 14:17:38 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:03:57.971 14:17:38 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:03:57.971 14:17:38 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:57.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:57.971 14:17:38 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:03:57.971 14:17:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:57.971 [2024-10-14 14:17:38.681555] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 
00:03:57.971 [2024-10-14 14:17:38.681630] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3150464 ] 00:03:58.232 [2024-10-14 14:17:38.937213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:58.493 [2024-10-14 14:17:38.965608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:58.754 14:17:39 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:03:58.754 14:17:39 json_config -- common/autotest_common.sh@864 -- # return 0 00:03:58.754 14:17:39 json_config -- json_config/common.sh@26 -- # echo '' 00:03:58.754 00:03:58.754 14:17:39 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:03:58.754 14:17:39 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:03:58.754 14:17:39 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:58.754 14:17:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:58.754 14:17:39 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:03:58.754 14:17:39 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:03:58.754 14:17:39 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:58.754 14:17:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:59.016 14:17:39 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:03:59.016 14:17:39 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:03:59.016 14:17:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:03:59.588 14:17:40 json_config -- json_config/json_config.sh@283 -- # 
tgt_check_notification_types 00:03:59.588 14:17:40 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:03:59.588 14:17:40 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:59.588 14:17:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:59.588 14:17:40 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:03:59.588 14:17:40 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:03:59.588 14:17:40 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:03:59.588 14:17:40 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:03:59.588 14:17:40 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:03:59.588 14:17:40 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:03:59.588 14:17:40 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:03:59.588 14:17:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:03:59.588 14:17:40 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:03:59.588 14:17:40 json_config -- json_config/json_config.sh@51 -- # local get_types 00:03:59.588 14:17:40 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:03:59.588 14:17:40 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:03:59.588 14:17:40 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:03:59.588 14:17:40 json_config -- json_config/json_config.sh@54 -- # sort 00:03:59.588 14:17:40 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:03:59.588 14:17:40 json_config -- 
json_config/json_config.sh@54 -- # type_diff= 00:03:59.588 14:17:40 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:03:59.588 14:17:40 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:03:59.588 14:17:40 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:59.588 14:17:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:59.588 14:17:40 json_config -- json_config/json_config.sh@62 -- # return 0 00:03:59.588 14:17:40 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:03:59.588 14:17:40 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:03:59.588 14:17:40 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:03:59.588 14:17:40 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:03:59.588 14:17:40 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:03:59.588 14:17:40 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:03:59.588 14:17:40 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:59.588 14:17:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:59.588 14:17:40 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:03:59.588 14:17:40 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:03:59.588 14:17:40 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:03:59.588 14:17:40 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:59.588 14:17:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:59.849 MallocForNvmf0 00:03:59.849 14:17:40 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 
00:03:59.849 14:17:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:00.109 MallocForNvmf1 00:04:00.109 14:17:40 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:00.109 14:17:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:00.109 [2024-10-14 14:17:40.822821] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:00.370 14:17:40 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:00.370 14:17:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:00.370 14:17:41 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:00.370 14:17:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:00.630 14:17:41 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:00.630 14:17:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:00.890 14:17:41 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:00.890 14:17:41 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:00.890 [2024-10-14 14:17:41.541098] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:00.890 14:17:41 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:00.890 14:17:41 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:00.890 14:17:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:00.890 14:17:41 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:00.890 14:17:41 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:00.890 14:17:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:01.150 14:17:41 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:01.150 14:17:41 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:01.150 14:17:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:01.150 MallocBdevForConfigChangeCheck 00:04:01.150 14:17:41 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:01.150 14:17:41 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:01.150 14:17:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:01.150 14:17:41 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:01.150 14:17:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:01.720 14:17:42 json_config -- json_config/json_config.sh@368 -- # 
echo 'INFO: shutting down applications...' 00:04:01.720 INFO: shutting down applications... 00:04:01.720 14:17:42 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:01.720 14:17:42 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:01.720 14:17:42 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:01.720 14:17:42 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:01.980 Calling clear_iscsi_subsystem 00:04:01.980 Calling clear_nvmf_subsystem 00:04:01.980 Calling clear_nbd_subsystem 00:04:01.980 Calling clear_ublk_subsystem 00:04:01.980 Calling clear_vhost_blk_subsystem 00:04:01.980 Calling clear_vhost_scsi_subsystem 00:04:01.980 Calling clear_bdev_subsystem 00:04:01.980 14:17:42 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:01.980 14:17:42 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:01.980 14:17:42 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:01.980 14:17:42 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:01.980 14:17:42 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:01.980 14:17:42 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:02.240 14:17:42 json_config -- json_config/json_config.sh@352 -- # break 00:04:02.240 14:17:42 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:02.240 14:17:42 json_config -- json_config/json_config.sh@376 -- # 
json_config_test_shutdown_app target 00:04:02.240 14:17:42 json_config -- json_config/common.sh@31 -- # local app=target 00:04:02.240 14:17:42 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:02.240 14:17:42 json_config -- json_config/common.sh@35 -- # [[ -n 3150464 ]] 00:04:02.240 14:17:42 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3150464 00:04:02.240 14:17:42 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:02.240 14:17:42 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:02.240 14:17:42 json_config -- json_config/common.sh@41 -- # kill -0 3150464 00:04:02.240 14:17:42 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:02.811 14:17:43 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:02.811 14:17:43 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:02.811 14:17:43 json_config -- json_config/common.sh@41 -- # kill -0 3150464 00:04:02.811 14:17:43 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:02.811 14:17:43 json_config -- json_config/common.sh@43 -- # break 00:04:02.811 14:17:43 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:02.811 14:17:43 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:02.811 SPDK target shutdown done 00:04:02.811 14:17:43 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:02.811 INFO: relaunching applications... 
00:04:02.811 14:17:43 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:02.811 14:17:43 json_config -- json_config/common.sh@9 -- # local app=target 00:04:02.811 14:17:43 json_config -- json_config/common.sh@10 -- # shift 00:04:02.811 14:17:43 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:02.811 14:17:43 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:02.811 14:17:43 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:02.811 14:17:43 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:02.811 14:17:43 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:02.811 14:17:43 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3151602 00:04:02.811 14:17:43 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:02.811 Waiting for target to run... 00:04:02.811 14:17:43 json_config -- json_config/common.sh@25 -- # waitforlisten 3151602 /var/tmp/spdk_tgt.sock 00:04:02.811 14:17:43 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:02.811 14:17:43 json_config -- common/autotest_common.sh@831 -- # '[' -z 3151602 ']' 00:04:02.811 14:17:43 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:02.811 14:17:43 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:02.811 14:17:43 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:02.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:04:02.811 14:17:43 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:02.811 14:17:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:02.811 [2024-10-14 14:17:43.534906] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:04:02.811 [2024-10-14 14:17:43.534982] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3151602 ] 00:04:03.381 [2024-10-14 14:17:43.811687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:03.381 [2024-10-14 14:17:43.841662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:03.641 [2024-10-14 14:17:44.360900] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:03.902 [2024-10-14 14:17:44.393266] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:03.902 14:17:44 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:03.902 14:17:44 json_config -- common/autotest_common.sh@864 -- # return 0 00:04:03.902 14:17:44 json_config -- json_config/common.sh@26 -- # echo '' 00:04:03.902 00:04:03.902 14:17:44 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:03.902 14:17:44 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:03.902 INFO: Checking if target configuration is the same... 
00:04:03.902 14:17:44 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:03.902 14:17:44 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:03.902 14:17:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:03.902 + '[' 2 -ne 2 ']' 00:04:03.902 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:03.902 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:03.902 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:03.902 +++ basename /dev/fd/62 00:04:03.902 ++ mktemp /tmp/62.XXX 00:04:03.902 + tmp_file_1=/tmp/62.cwb 00:04:03.902 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:03.902 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:03.902 + tmp_file_2=/tmp/spdk_tgt_config.json.eRJ 00:04:03.902 + ret=0 00:04:03.902 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:04.195 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:04.195 + diff -u /tmp/62.cwb /tmp/spdk_tgt_config.json.eRJ 00:04:04.195 + echo 'INFO: JSON config files are the same' 00:04:04.195 INFO: JSON config files are the same 00:04:04.195 + rm /tmp/62.cwb /tmp/spdk_tgt_config.json.eRJ 00:04:04.195 + exit 0 00:04:04.195 14:17:44 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:04.195 14:17:44 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:04.195 INFO: changing configuration and checking if this can be detected... 
00:04:04.195 14:17:44 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:04.195 14:17:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:04.507 14:17:44 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:04.507 14:17:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:04.507 14:17:44 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:04.507 + '[' 2 -ne 2 ']' 00:04:04.507 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:04.507 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:04:04.507 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:04.507 +++ basename /dev/fd/62 00:04:04.507 ++ mktemp /tmp/62.XXX 00:04:04.507 + tmp_file_1=/tmp/62.q0f 00:04:04.507 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:04.507 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:04.507 + tmp_file_2=/tmp/spdk_tgt_config.json.HMR 00:04:04.507 + ret=0 00:04:04.507 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:04.852 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:04.852 + diff -u /tmp/62.q0f /tmp/spdk_tgt_config.json.HMR 00:04:04.852 + ret=1 00:04:04.852 + echo '=== Start of file: /tmp/62.q0f ===' 00:04:04.852 + cat /tmp/62.q0f 00:04:04.852 + echo '=== End of file: /tmp/62.q0f ===' 00:04:04.852 + echo '' 00:04:04.852 + echo '=== Start of file: /tmp/spdk_tgt_config.json.HMR ===' 00:04:04.852 + cat /tmp/spdk_tgt_config.json.HMR 00:04:04.852 + echo '=== End of file: /tmp/spdk_tgt_config.json.HMR ===' 00:04:04.852 + echo '' 00:04:04.852 + rm /tmp/62.q0f /tmp/spdk_tgt_config.json.HMR 00:04:04.852 + exit 1 00:04:04.852 14:17:45 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:04.852 INFO: configuration change detected. 
00:04:04.852 14:17:45 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:04.852 14:17:45 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:04.853 14:17:45 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:04.853 14:17:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:04.853 14:17:45 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:04.853 14:17:45 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:04.853 14:17:45 json_config -- json_config/json_config.sh@324 -- # [[ -n 3151602 ]] 00:04:04.853 14:17:45 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:04.853 14:17:45 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:04.853 14:17:45 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:04.853 14:17:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:04.853 14:17:45 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:04.853 14:17:45 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:04.853 14:17:45 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:04.853 14:17:45 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:04.853 14:17:45 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:04.853 14:17:45 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:04.853 14:17:45 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:04.853 14:17:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:04.853 14:17:45 json_config -- json_config/json_config.sh@330 -- # killprocess 3151602 00:04:04.853 14:17:45 json_config -- common/autotest_common.sh@950 -- # '[' -z 3151602 ']' 00:04:04.853 14:17:45 json_config -- common/autotest_common.sh@954 -- # kill -0 
3151602 00:04:04.853 14:17:45 json_config -- common/autotest_common.sh@955 -- # uname 00:04:04.853 14:17:45 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:04.853 14:17:45 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3151602 00:04:04.853 14:17:45 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:04.853 14:17:45 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:04.853 14:17:45 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3151602' 00:04:04.853 killing process with pid 3151602 00:04:04.853 14:17:45 json_config -- common/autotest_common.sh@969 -- # kill 3151602 00:04:04.853 14:17:45 json_config -- common/autotest_common.sh@974 -- # wait 3151602 00:04:05.167 14:17:45 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:05.167 14:17:45 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:05.167 14:17:45 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:05.167 14:17:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:05.167 14:17:45 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:05.167 14:17:45 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:05.167 INFO: Success 00:04:05.167 00:04:05.167 real 0m7.418s 00:04:05.167 user 0m9.033s 00:04:05.167 sys 0m1.923s 00:04:05.167 14:17:45 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:05.167 14:17:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:05.167 ************************************ 00:04:05.167 END TEST json_config 00:04:05.167 ************************************ 00:04:05.167 14:17:45 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:05.167 14:17:45 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:05.167 14:17:45 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:05.167 14:17:45 -- common/autotest_common.sh@10 -- # set +x 00:04:05.167 ************************************ 00:04:05.167 START TEST json_config_extra_key 00:04:05.167 ************************************ 00:04:05.167 14:17:45 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:05.429 14:17:45 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:05.429 14:17:45 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:04:05.429 14:17:45 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:05.429 14:17:46 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:05.429 14:17:46 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:05.429 14:17:46 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:05.429 14:17:46 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:05.429 14:17:46 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:05.429 14:17:46 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:05.429 14:17:46 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:05.429 14:17:46 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:05.429 14:17:46 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:05.429 14:17:46 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:05.429 14:17:46 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:05.429 14:17:46 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:04:05.429 14:17:46 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:05.429 14:17:46 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:05.429 14:17:46 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:05.429 14:17:46 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:05.429 14:17:46 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:05.429 14:17:46 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:05.429 14:17:46 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:05.429 14:17:46 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:05.429 14:17:46 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:05.429 14:17:46 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:05.429 14:17:46 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:05.429 14:17:46 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:05.429 14:17:46 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:05.429 14:17:46 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:05.429 14:17:46 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:05.429 14:17:46 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:05.429 14:17:46 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:05.429 14:17:46 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:05.429 14:17:46 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:05.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.429 --rc genhtml_branch_coverage=1 00:04:05.429 --rc genhtml_function_coverage=1 00:04:05.429 --rc genhtml_legend=1 00:04:05.429 --rc geninfo_all_blocks=1 
00:04:05.429 --rc geninfo_unexecuted_blocks=1 00:04:05.429 00:04:05.429 ' 00:04:05.429 14:17:46 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:05.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.429 --rc genhtml_branch_coverage=1 00:04:05.429 --rc genhtml_function_coverage=1 00:04:05.429 --rc genhtml_legend=1 00:04:05.429 --rc geninfo_all_blocks=1 00:04:05.429 --rc geninfo_unexecuted_blocks=1 00:04:05.429 00:04:05.429 ' 00:04:05.429 14:17:46 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:05.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.429 --rc genhtml_branch_coverage=1 00:04:05.429 --rc genhtml_function_coverage=1 00:04:05.429 --rc genhtml_legend=1 00:04:05.429 --rc geninfo_all_blocks=1 00:04:05.429 --rc geninfo_unexecuted_blocks=1 00:04:05.429 00:04:05.429 ' 00:04:05.429 14:17:46 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:05.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.429 --rc genhtml_branch_coverage=1 00:04:05.429 --rc genhtml_function_coverage=1 00:04:05.429 --rc genhtml_legend=1 00:04:05.429 --rc geninfo_all_blocks=1 00:04:05.429 --rc geninfo_unexecuted_blocks=1 00:04:05.429 00:04:05.429 ' 00:04:05.429 14:17:46 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:05.429 14:17:46 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:05.429 14:17:46 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:05.429 14:17:46 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:05.429 14:17:46 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:05.429 14:17:46 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:05.429 14:17:46 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:04:05.429 14:17:46 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:05.429 14:17:46 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:05.429 14:17:46 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:05.429 14:17:46 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:05.429 14:17:46 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:05.429 14:17:46 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:04:05.429 14:17:46 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:04:05.429 14:17:46 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:05.429 14:17:46 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:05.429 14:17:46 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:05.429 14:17:46 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:05.429 14:17:46 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:05.429 14:17:46 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:05.429 14:17:46 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:05.429 14:17:46 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:05.429 14:17:46 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:05.429 14:17:46 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:05.429 14:17:46 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:05.429 14:17:46 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:05.429 14:17:46 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:05.429 14:17:46 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:05.429 14:17:46 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:05.429 14:17:46 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:05.429 14:17:46 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:05.429 14:17:46 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:05.429 14:17:46 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:05.429 14:17:46 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:05.429 14:17:46 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:05.429 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:05.429 14:17:46 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:05.429 14:17:46 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:05.429 14:17:46 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:05.429 14:17:46 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:05.429 14:17:46 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:05.429 14:17:46 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:05.429 14:17:46 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:05.429 14:17:46 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:05.429 14:17:46 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:05.429 14:17:46 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:05.429 14:17:46 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:05.429 14:17:46 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:05.429 14:17:46 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:05.429 14:17:46 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:05.429 INFO: launching applications... 00:04:05.429 14:17:46 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:05.429 14:17:46 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:05.430 14:17:46 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:05.430 14:17:46 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:05.430 14:17:46 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:05.430 14:17:46 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:05.430 14:17:46 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:05.430 14:17:46 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:05.430 14:17:46 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3152151 00:04:05.430 14:17:46 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:05.430 Waiting for target to run... 
00:04:05.430 14:17:46 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3152151 /var/tmp/spdk_tgt.sock 00:04:05.430 14:17:46 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 3152151 ']' 00:04:05.430 14:17:46 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:05.430 14:17:46 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:05.430 14:17:46 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:05.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:05.430 14:17:46 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:05.430 14:17:46 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:05.430 14:17:46 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:05.430 [2024-10-14 14:17:46.140940] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 
00:04:05.430 [2024-10-14 14:17:46.141013] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3152151 ] 00:04:06.001 [2024-10-14 14:17:46.556566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:06.001 [2024-10-14 14:17:46.593122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:06.261 14:17:46 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:06.261 14:17:46 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:04:06.261 14:17:46 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:06.261 00:04:06.261 14:17:46 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:06.261 INFO: shutting down applications... 00:04:06.261 14:17:46 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:06.261 14:17:46 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:06.261 14:17:46 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:06.261 14:17:46 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3152151 ]] 00:04:06.261 14:17:46 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3152151 00:04:06.261 14:17:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:06.261 14:17:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:06.261 14:17:46 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3152151 00:04:06.261 14:17:46 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:06.832 14:17:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:06.832 14:17:47 json_config_extra_key -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:04:06.832 14:17:47 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3152151 00:04:06.832 14:17:47 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:06.832 14:17:47 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:06.832 14:17:47 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:06.832 14:17:47 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:06.832 SPDK target shutdown done 00:04:06.832 14:17:47 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:06.832 Success 00:04:06.832 00:04:06.832 real 0m1.570s 00:04:06.832 user 0m1.085s 00:04:06.832 sys 0m0.545s 00:04:06.832 14:17:47 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:06.832 14:17:47 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:06.832 ************************************ 00:04:06.832 END TEST json_config_extra_key 00:04:06.832 ************************************ 00:04:06.832 14:17:47 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:06.832 14:17:47 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:06.832 14:17:47 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:06.832 14:17:47 -- common/autotest_common.sh@10 -- # set +x 00:04:06.832 ************************************ 00:04:06.832 START TEST alias_rpc 00:04:06.832 ************************************ 00:04:06.832 14:17:47 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:07.093 * Looking for test storage... 
00:04:07.093 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:07.093 14:17:47 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:07.093 14:17:47 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:07.093 14:17:47 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:07.093 14:17:47 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:07.093 14:17:47 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:07.093 14:17:47 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:07.093 14:17:47 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:07.093 14:17:47 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:07.093 14:17:47 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:07.093 14:17:47 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:07.093 14:17:47 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:07.093 14:17:47 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:07.093 14:17:47 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:07.093 14:17:47 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:07.093 14:17:47 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:07.093 14:17:47 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:07.093 14:17:47 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:07.093 14:17:47 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:07.093 14:17:47 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:07.093 14:17:47 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:07.093 14:17:47 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:07.093 14:17:47 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:07.093 14:17:47 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:07.093 14:17:47 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:07.093 14:17:47 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:07.093 14:17:47 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:07.093 14:17:47 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:07.093 14:17:47 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:07.093 14:17:47 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:07.093 14:17:47 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:07.093 14:17:47 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:07.093 14:17:47 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:07.093 14:17:47 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:07.093 14:17:47 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:07.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.093 --rc genhtml_branch_coverage=1 00:04:07.093 --rc genhtml_function_coverage=1 00:04:07.093 --rc genhtml_legend=1 00:04:07.093 --rc geninfo_all_blocks=1 00:04:07.093 --rc geninfo_unexecuted_blocks=1 00:04:07.093 00:04:07.093 ' 00:04:07.093 14:17:47 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:07.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.093 --rc genhtml_branch_coverage=1 00:04:07.093 --rc genhtml_function_coverage=1 00:04:07.093 --rc genhtml_legend=1 00:04:07.093 --rc geninfo_all_blocks=1 00:04:07.093 --rc geninfo_unexecuted_blocks=1 00:04:07.093 00:04:07.093 ' 00:04:07.093 14:17:47 alias_rpc -- common/autotest_common.sh@1705 -- 
# export 'LCOV=lcov 00:04:07.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.093 --rc genhtml_branch_coverage=1 00:04:07.093 --rc genhtml_function_coverage=1 00:04:07.093 --rc genhtml_legend=1 00:04:07.093 --rc geninfo_all_blocks=1 00:04:07.093 --rc geninfo_unexecuted_blocks=1 00:04:07.093 00:04:07.093 ' 00:04:07.093 14:17:47 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:07.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.093 --rc genhtml_branch_coverage=1 00:04:07.093 --rc genhtml_function_coverage=1 00:04:07.093 --rc genhtml_legend=1 00:04:07.093 --rc geninfo_all_blocks=1 00:04:07.093 --rc geninfo_unexecuted_blocks=1 00:04:07.093 00:04:07.093 ' 00:04:07.093 14:17:47 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:07.093 14:17:47 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3152521 00:04:07.093 14:17:47 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3152521 00:04:07.093 14:17:47 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 3152521 ']' 00:04:07.093 14:17:47 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:07.093 14:17:47 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:07.093 14:17:47 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:07.093 14:17:47 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:07.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:07.093 14:17:47 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:07.093 14:17:47 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:07.093 [2024-10-14 14:17:47.803570] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 
00:04:07.093 [2024-10-14 14:17:47.803648] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3152521 ] 00:04:07.360 [2024-10-14 14:17:47.870131] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:07.360 [2024-10-14 14:17:47.914538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:07.929 14:17:48 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:07.929 14:17:48 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:04:07.929 14:17:48 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:08.189 14:17:48 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3152521 00:04:08.189 14:17:48 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 3152521 ']' 00:04:08.189 14:17:48 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 3152521 00:04:08.189 14:17:48 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:04:08.189 14:17:48 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:08.189 14:17:48 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3152521 00:04:08.189 14:17:48 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:08.189 14:17:48 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:08.189 14:17:48 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3152521' 00:04:08.189 killing process with pid 3152521 00:04:08.189 14:17:48 alias_rpc -- common/autotest_common.sh@969 -- # kill 3152521 00:04:08.189 14:17:48 alias_rpc -- common/autotest_common.sh@974 -- # wait 3152521 00:04:08.449 00:04:08.449 real 0m1.501s 00:04:08.449 user 0m1.608s 00:04:08.449 sys 0m0.434s 00:04:08.449 14:17:49 alias_rpc -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:04:08.449 14:17:49 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:08.449 ************************************ 00:04:08.449 END TEST alias_rpc 00:04:08.449 ************************************ 00:04:08.449 14:17:49 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:08.449 14:17:49 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:08.449 14:17:49 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:08.449 14:17:49 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:08.449 14:17:49 -- common/autotest_common.sh@10 -- # set +x 00:04:08.449 ************************************ 00:04:08.449 START TEST spdkcli_tcp 00:04:08.449 ************************************ 00:04:08.449 14:17:49 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:08.709 * Looking for test storage... 
00:04:08.710 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:08.710 14:17:49 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:08.710 14:17:49 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:04:08.710 14:17:49 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:08.710 14:17:49 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:08.710 14:17:49 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:08.710 14:17:49 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:08.710 14:17:49 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:08.710 14:17:49 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:08.710 14:17:49 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:08.710 14:17:49 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:08.710 14:17:49 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:08.710 14:17:49 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:08.710 14:17:49 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:08.710 14:17:49 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:08.710 14:17:49 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:08.710 14:17:49 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:08.710 14:17:49 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:08.710 14:17:49 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:08.710 14:17:49 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:08.710 14:17:49 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:08.710 14:17:49 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:08.710 14:17:49 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:08.710 14:17:49 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:08.710 14:17:49 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:08.710 14:17:49 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:08.710 14:17:49 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:08.710 14:17:49 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:08.710 14:17:49 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:08.710 14:17:49 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:08.710 14:17:49 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:08.710 14:17:49 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:08.710 14:17:49 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:08.710 14:17:49 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:08.710 14:17:49 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:08.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.710 --rc genhtml_branch_coverage=1 00:04:08.710 --rc genhtml_function_coverage=1 00:04:08.710 --rc genhtml_legend=1 00:04:08.710 --rc geninfo_all_blocks=1 00:04:08.710 --rc geninfo_unexecuted_blocks=1 00:04:08.710 00:04:08.710 ' 00:04:08.710 14:17:49 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:08.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.710 --rc genhtml_branch_coverage=1 00:04:08.710 --rc genhtml_function_coverage=1 00:04:08.710 --rc genhtml_legend=1 00:04:08.710 --rc geninfo_all_blocks=1 00:04:08.710 --rc geninfo_unexecuted_blocks=1 00:04:08.710 00:04:08.710 ' 00:04:08.710 14:17:49 spdkcli_tcp -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:08.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.710 --rc genhtml_branch_coverage=1 00:04:08.710 --rc genhtml_function_coverage=1 00:04:08.710 --rc genhtml_legend=1 00:04:08.710 --rc geninfo_all_blocks=1 00:04:08.710 --rc geninfo_unexecuted_blocks=1 00:04:08.710 00:04:08.710 ' 00:04:08.710 14:17:49 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:08.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.710 --rc genhtml_branch_coverage=1 00:04:08.710 --rc genhtml_function_coverage=1 00:04:08.710 --rc genhtml_legend=1 00:04:08.710 --rc geninfo_all_blocks=1 00:04:08.710 --rc geninfo_unexecuted_blocks=1 00:04:08.710 00:04:08.710 ' 00:04:08.710 14:17:49 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:08.710 14:17:49 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:08.710 14:17:49 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:08.710 14:17:49 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:08.710 14:17:49 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:08.710 14:17:49 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:08.710 14:17:49 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:08.710 14:17:49 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:08.710 14:17:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:08.710 14:17:49 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3152884 00:04:08.710 14:17:49 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3152884 00:04:08.710 14:17:49 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:08.710 14:17:49 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 3152884 ']' 00:04:08.710 14:17:49 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:08.710 14:17:49 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:08.710 14:17:49 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:08.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:08.710 14:17:49 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:08.710 14:17:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:08.710 [2024-10-14 14:17:49.376525] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:04:08.710 [2024-10-14 14:17:49.376595] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3152884 ] 00:04:08.970 [2024-10-14 14:17:49.445581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:08.970 [2024-10-14 14:17:49.491517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:08.970 [2024-10-14 14:17:49.491519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:09.540 14:17:50 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:09.540 14:17:50 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:04:09.540 14:17:50 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3153203 00:04:09.540 14:17:50 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:09.540 14:17:50 spdkcli_tcp -- 
spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:09.800 [ 00:04:09.800 "bdev_malloc_delete", 00:04:09.800 "bdev_malloc_create", 00:04:09.800 "bdev_null_resize", 00:04:09.800 "bdev_null_delete", 00:04:09.800 "bdev_null_create", 00:04:09.800 "bdev_nvme_cuse_unregister", 00:04:09.800 "bdev_nvme_cuse_register", 00:04:09.800 "bdev_opal_new_user", 00:04:09.800 "bdev_opal_set_lock_state", 00:04:09.800 "bdev_opal_delete", 00:04:09.800 "bdev_opal_get_info", 00:04:09.800 "bdev_opal_create", 00:04:09.800 "bdev_nvme_opal_revert", 00:04:09.800 "bdev_nvme_opal_init", 00:04:09.800 "bdev_nvme_send_cmd", 00:04:09.800 "bdev_nvme_set_keys", 00:04:09.800 "bdev_nvme_get_path_iostat", 00:04:09.800 "bdev_nvme_get_mdns_discovery_info", 00:04:09.800 "bdev_nvme_stop_mdns_discovery", 00:04:09.800 "bdev_nvme_start_mdns_discovery", 00:04:09.800 "bdev_nvme_set_multipath_policy", 00:04:09.800 "bdev_nvme_set_preferred_path", 00:04:09.800 "bdev_nvme_get_io_paths", 00:04:09.800 "bdev_nvme_remove_error_injection", 00:04:09.800 "bdev_nvme_add_error_injection", 00:04:09.800 "bdev_nvme_get_discovery_info", 00:04:09.800 "bdev_nvme_stop_discovery", 00:04:09.800 "bdev_nvme_start_discovery", 00:04:09.800 "bdev_nvme_get_controller_health_info", 00:04:09.800 "bdev_nvme_disable_controller", 00:04:09.800 "bdev_nvme_enable_controller", 00:04:09.800 "bdev_nvme_reset_controller", 00:04:09.800 "bdev_nvme_get_transport_statistics", 00:04:09.800 "bdev_nvme_apply_firmware", 00:04:09.800 "bdev_nvme_detach_controller", 00:04:09.800 "bdev_nvme_get_controllers", 00:04:09.800 "bdev_nvme_attach_controller", 00:04:09.800 "bdev_nvme_set_hotplug", 00:04:09.800 "bdev_nvme_set_options", 00:04:09.800 "bdev_passthru_delete", 00:04:09.800 "bdev_passthru_create", 00:04:09.800 "bdev_lvol_set_parent_bdev", 00:04:09.800 "bdev_lvol_set_parent", 00:04:09.800 "bdev_lvol_check_shallow_copy", 00:04:09.800 "bdev_lvol_start_shallow_copy", 00:04:09.800 "bdev_lvol_grow_lvstore", 00:04:09.800 
"bdev_lvol_get_lvols", 00:04:09.800 "bdev_lvol_get_lvstores", 00:04:09.800 "bdev_lvol_delete", 00:04:09.800 "bdev_lvol_set_read_only", 00:04:09.800 "bdev_lvol_resize", 00:04:09.800 "bdev_lvol_decouple_parent", 00:04:09.800 "bdev_lvol_inflate", 00:04:09.800 "bdev_lvol_rename", 00:04:09.800 "bdev_lvol_clone_bdev", 00:04:09.800 "bdev_lvol_clone", 00:04:09.801 "bdev_lvol_snapshot", 00:04:09.801 "bdev_lvol_create", 00:04:09.801 "bdev_lvol_delete_lvstore", 00:04:09.801 "bdev_lvol_rename_lvstore", 00:04:09.801 "bdev_lvol_create_lvstore", 00:04:09.801 "bdev_raid_set_options", 00:04:09.801 "bdev_raid_remove_base_bdev", 00:04:09.801 "bdev_raid_add_base_bdev", 00:04:09.801 "bdev_raid_delete", 00:04:09.801 "bdev_raid_create", 00:04:09.801 "bdev_raid_get_bdevs", 00:04:09.801 "bdev_error_inject_error", 00:04:09.801 "bdev_error_delete", 00:04:09.801 "bdev_error_create", 00:04:09.801 "bdev_split_delete", 00:04:09.801 "bdev_split_create", 00:04:09.801 "bdev_delay_delete", 00:04:09.801 "bdev_delay_create", 00:04:09.801 "bdev_delay_update_latency", 00:04:09.801 "bdev_zone_block_delete", 00:04:09.801 "bdev_zone_block_create", 00:04:09.801 "blobfs_create", 00:04:09.801 "blobfs_detect", 00:04:09.801 "blobfs_set_cache_size", 00:04:09.801 "bdev_aio_delete", 00:04:09.801 "bdev_aio_rescan", 00:04:09.801 "bdev_aio_create", 00:04:09.801 "bdev_ftl_set_property", 00:04:09.801 "bdev_ftl_get_properties", 00:04:09.801 "bdev_ftl_get_stats", 00:04:09.801 "bdev_ftl_unmap", 00:04:09.801 "bdev_ftl_unload", 00:04:09.801 "bdev_ftl_delete", 00:04:09.801 "bdev_ftl_load", 00:04:09.801 "bdev_ftl_create", 00:04:09.801 "bdev_virtio_attach_controller", 00:04:09.801 "bdev_virtio_scsi_get_devices", 00:04:09.801 "bdev_virtio_detach_controller", 00:04:09.801 "bdev_virtio_blk_set_hotplug", 00:04:09.801 "bdev_iscsi_delete", 00:04:09.801 "bdev_iscsi_create", 00:04:09.801 "bdev_iscsi_set_options", 00:04:09.801 "accel_error_inject_error", 00:04:09.801 "ioat_scan_accel_module", 00:04:09.801 "dsa_scan_accel_module", 
00:04:09.801 "iaa_scan_accel_module", 00:04:09.801 "vfu_virtio_create_fs_endpoint", 00:04:09.801 "vfu_virtio_create_scsi_endpoint", 00:04:09.801 "vfu_virtio_scsi_remove_target", 00:04:09.801 "vfu_virtio_scsi_add_target", 00:04:09.801 "vfu_virtio_create_blk_endpoint", 00:04:09.801 "vfu_virtio_delete_endpoint", 00:04:09.801 "keyring_file_remove_key", 00:04:09.801 "keyring_file_add_key", 00:04:09.801 "keyring_linux_set_options", 00:04:09.801 "fsdev_aio_delete", 00:04:09.801 "fsdev_aio_create", 00:04:09.801 "iscsi_get_histogram", 00:04:09.801 "iscsi_enable_histogram", 00:04:09.801 "iscsi_set_options", 00:04:09.801 "iscsi_get_auth_groups", 00:04:09.801 "iscsi_auth_group_remove_secret", 00:04:09.801 "iscsi_auth_group_add_secret", 00:04:09.801 "iscsi_delete_auth_group", 00:04:09.801 "iscsi_create_auth_group", 00:04:09.801 "iscsi_set_discovery_auth", 00:04:09.801 "iscsi_get_options", 00:04:09.801 "iscsi_target_node_request_logout", 00:04:09.801 "iscsi_target_node_set_redirect", 00:04:09.801 "iscsi_target_node_set_auth", 00:04:09.801 "iscsi_target_node_add_lun", 00:04:09.801 "iscsi_get_stats", 00:04:09.801 "iscsi_get_connections", 00:04:09.801 "iscsi_portal_group_set_auth", 00:04:09.801 "iscsi_start_portal_group", 00:04:09.801 "iscsi_delete_portal_group", 00:04:09.801 "iscsi_create_portal_group", 00:04:09.801 "iscsi_get_portal_groups", 00:04:09.801 "iscsi_delete_target_node", 00:04:09.801 "iscsi_target_node_remove_pg_ig_maps", 00:04:09.801 "iscsi_target_node_add_pg_ig_maps", 00:04:09.801 "iscsi_create_target_node", 00:04:09.801 "iscsi_get_target_nodes", 00:04:09.801 "iscsi_delete_initiator_group", 00:04:09.801 "iscsi_initiator_group_remove_initiators", 00:04:09.801 "iscsi_initiator_group_add_initiators", 00:04:09.801 "iscsi_create_initiator_group", 00:04:09.801 "iscsi_get_initiator_groups", 00:04:09.801 "nvmf_set_crdt", 00:04:09.801 "nvmf_set_config", 00:04:09.801 "nvmf_set_max_subsystems", 00:04:09.801 "nvmf_stop_mdns_prr", 00:04:09.801 "nvmf_publish_mdns_prr", 
00:04:09.801 "nvmf_subsystem_get_listeners", 00:04:09.801 "nvmf_subsystem_get_qpairs", 00:04:09.801 "nvmf_subsystem_get_controllers", 00:04:09.801 "nvmf_get_stats", 00:04:09.801 "nvmf_get_transports", 00:04:09.801 "nvmf_create_transport", 00:04:09.801 "nvmf_get_targets", 00:04:09.801 "nvmf_delete_target", 00:04:09.801 "nvmf_create_target", 00:04:09.801 "nvmf_subsystem_allow_any_host", 00:04:09.801 "nvmf_subsystem_set_keys", 00:04:09.801 "nvmf_subsystem_remove_host", 00:04:09.801 "nvmf_subsystem_add_host", 00:04:09.801 "nvmf_ns_remove_host", 00:04:09.801 "nvmf_ns_add_host", 00:04:09.801 "nvmf_subsystem_remove_ns", 00:04:09.801 "nvmf_subsystem_set_ns_ana_group", 00:04:09.801 "nvmf_subsystem_add_ns", 00:04:09.801 "nvmf_subsystem_listener_set_ana_state", 00:04:09.801 "nvmf_discovery_get_referrals", 00:04:09.801 "nvmf_discovery_remove_referral", 00:04:09.801 "nvmf_discovery_add_referral", 00:04:09.801 "nvmf_subsystem_remove_listener", 00:04:09.801 "nvmf_subsystem_add_listener", 00:04:09.801 "nvmf_delete_subsystem", 00:04:09.801 "nvmf_create_subsystem", 00:04:09.801 "nvmf_get_subsystems", 00:04:09.801 "env_dpdk_get_mem_stats", 00:04:09.801 "nbd_get_disks", 00:04:09.801 "nbd_stop_disk", 00:04:09.801 "nbd_start_disk", 00:04:09.801 "ublk_recover_disk", 00:04:09.801 "ublk_get_disks", 00:04:09.801 "ublk_stop_disk", 00:04:09.801 "ublk_start_disk", 00:04:09.801 "ublk_destroy_target", 00:04:09.801 "ublk_create_target", 00:04:09.801 "virtio_blk_create_transport", 00:04:09.801 "virtio_blk_get_transports", 00:04:09.801 "vhost_controller_set_coalescing", 00:04:09.801 "vhost_get_controllers", 00:04:09.801 "vhost_delete_controller", 00:04:09.801 "vhost_create_blk_controller", 00:04:09.801 "vhost_scsi_controller_remove_target", 00:04:09.801 "vhost_scsi_controller_add_target", 00:04:09.801 "vhost_start_scsi_controller", 00:04:09.801 "vhost_create_scsi_controller", 00:04:09.801 "thread_set_cpumask", 00:04:09.801 "scheduler_set_options", 00:04:09.801 "framework_get_governor", 00:04:09.801 
"framework_get_scheduler", 00:04:09.801 "framework_set_scheduler", 00:04:09.801 "framework_get_reactors", 00:04:09.801 "thread_get_io_channels", 00:04:09.801 "thread_get_pollers", 00:04:09.801 "thread_get_stats", 00:04:09.801 "framework_monitor_context_switch", 00:04:09.801 "spdk_kill_instance", 00:04:09.801 "log_enable_timestamps", 00:04:09.801 "log_get_flags", 00:04:09.801 "log_clear_flag", 00:04:09.801 "log_set_flag", 00:04:09.801 "log_get_level", 00:04:09.801 "log_set_level", 00:04:09.801 "log_get_print_level", 00:04:09.801 "log_set_print_level", 00:04:09.801 "framework_enable_cpumask_locks", 00:04:09.801 "framework_disable_cpumask_locks", 00:04:09.801 "framework_wait_init", 00:04:09.801 "framework_start_init", 00:04:09.801 "scsi_get_devices", 00:04:09.801 "bdev_get_histogram", 00:04:09.801 "bdev_enable_histogram", 00:04:09.801 "bdev_set_qos_limit", 00:04:09.801 "bdev_set_qd_sampling_period", 00:04:09.801 "bdev_get_bdevs", 00:04:09.801 "bdev_reset_iostat", 00:04:09.801 "bdev_get_iostat", 00:04:09.801 "bdev_examine", 00:04:09.801 "bdev_wait_for_examine", 00:04:09.801 "bdev_set_options", 00:04:09.801 "accel_get_stats", 00:04:09.801 "accel_set_options", 00:04:09.801 "accel_set_driver", 00:04:09.801 "accel_crypto_key_destroy", 00:04:09.801 "accel_crypto_keys_get", 00:04:09.801 "accel_crypto_key_create", 00:04:09.801 "accel_assign_opc", 00:04:09.801 "accel_get_module_info", 00:04:09.801 "accel_get_opc_assignments", 00:04:09.801 "vmd_rescan", 00:04:09.801 "vmd_remove_device", 00:04:09.801 "vmd_enable", 00:04:09.801 "sock_get_default_impl", 00:04:09.801 "sock_set_default_impl", 00:04:09.801 "sock_impl_set_options", 00:04:09.801 "sock_impl_get_options", 00:04:09.801 "iobuf_get_stats", 00:04:09.801 "iobuf_set_options", 00:04:09.801 "keyring_get_keys", 00:04:09.801 "vfu_tgt_set_base_path", 00:04:09.801 "framework_get_pci_devices", 00:04:09.801 "framework_get_config", 00:04:09.801 "framework_get_subsystems", 00:04:09.801 "fsdev_set_opts", 00:04:09.801 "fsdev_get_opts", 
00:04:09.801 "trace_get_info", 00:04:09.801 "trace_get_tpoint_group_mask", 00:04:09.801 "trace_disable_tpoint_group", 00:04:09.801 "trace_enable_tpoint_group", 00:04:09.801 "trace_clear_tpoint_mask", 00:04:09.801 "trace_set_tpoint_mask", 00:04:09.801 "notify_get_notifications", 00:04:09.801 "notify_get_types", 00:04:09.801 "spdk_get_version", 00:04:09.801 "rpc_get_methods" 00:04:09.801 ] 00:04:09.801 14:17:50 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:09.801 14:17:50 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:09.801 14:17:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:09.801 14:17:50 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:09.801 14:17:50 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3152884 00:04:09.801 14:17:50 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 3152884 ']' 00:04:09.801 14:17:50 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 3152884 00:04:09.801 14:17:50 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:04:09.801 14:17:50 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:09.801 14:17:50 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3152884 00:04:09.801 14:17:50 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:09.801 14:17:50 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:09.801 14:17:50 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3152884' 00:04:09.801 killing process with pid 3152884 00:04:09.801 14:17:50 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 3152884 00:04:09.801 14:17:50 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 3152884 00:04:10.063 00:04:10.063 real 0m1.544s 00:04:10.063 user 0m2.819s 00:04:10.063 sys 0m0.458s 00:04:10.063 14:17:50 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:10.063 14:17:50 
spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:10.063 ************************************ 00:04:10.063 END TEST spdkcli_tcp 00:04:10.063 ************************************ 00:04:10.063 14:17:50 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:10.063 14:17:50 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:10.063 14:17:50 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:10.063 14:17:50 -- common/autotest_common.sh@10 -- # set +x 00:04:10.063 ************************************ 00:04:10.063 START TEST dpdk_mem_utility 00:04:10.063 ************************************ 00:04:10.063 14:17:50 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:10.323 * Looking for test storage... 00:04:10.323 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:10.323 14:17:50 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:10.323 14:17:50 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:04:10.323 14:17:50 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:10.323 14:17:50 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:10.323 14:17:50 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:10.323 14:17:50 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:10.323 14:17:50 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:10.323 14:17:50 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:10.323 14:17:50 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:10.323 14:17:50 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:10.323 14:17:50 dpdk_mem_utility -- scripts/common.sh@337 -- # read 
-ra ver2 00:04:10.323 14:17:50 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:10.323 14:17:50 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:10.323 14:17:50 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:10.323 14:17:50 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:10.323 14:17:50 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:10.323 14:17:50 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:10.323 14:17:50 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:10.323 14:17:50 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:10.323 14:17:50 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:10.323 14:17:50 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:10.323 14:17:50 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:10.323 14:17:50 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:10.323 14:17:50 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:10.323 14:17:50 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:10.323 14:17:50 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:10.323 14:17:50 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:10.323 14:17:50 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:10.323 14:17:50 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:10.323 14:17:50 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:10.323 14:17:50 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:10.323 14:17:50 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:10.323 14:17:50 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:10.323 14:17:50 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 
'LCOV_OPTS= 00:04:10.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.323 --rc genhtml_branch_coverage=1 00:04:10.323 --rc genhtml_function_coverage=1 00:04:10.323 --rc genhtml_legend=1 00:04:10.323 --rc geninfo_all_blocks=1 00:04:10.323 --rc geninfo_unexecuted_blocks=1 00:04:10.323 00:04:10.323 ' 00:04:10.323 14:17:50 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:10.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.324 --rc genhtml_branch_coverage=1 00:04:10.324 --rc genhtml_function_coverage=1 00:04:10.324 --rc genhtml_legend=1 00:04:10.324 --rc geninfo_all_blocks=1 00:04:10.324 --rc geninfo_unexecuted_blocks=1 00:04:10.324 00:04:10.324 ' 00:04:10.324 14:17:50 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:10.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.324 --rc genhtml_branch_coverage=1 00:04:10.324 --rc genhtml_function_coverage=1 00:04:10.324 --rc genhtml_legend=1 00:04:10.324 --rc geninfo_all_blocks=1 00:04:10.324 --rc geninfo_unexecuted_blocks=1 00:04:10.324 00:04:10.324 ' 00:04:10.324 14:17:50 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:10.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.324 --rc genhtml_branch_coverage=1 00:04:10.324 --rc genhtml_function_coverage=1 00:04:10.324 --rc genhtml_legend=1 00:04:10.324 --rc geninfo_all_blocks=1 00:04:10.324 --rc geninfo_unexecuted_blocks=1 00:04:10.324 00:04:10.324 ' 00:04:10.324 14:17:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:10.324 14:17:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3153286 00:04:10.324 14:17:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3153286 00:04:10.324 14:17:50 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:10.324 14:17:50 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 3153286 ']' 00:04:10.324 14:17:50 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:10.324 14:17:50 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:10.324 14:17:50 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:10.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:10.324 14:17:50 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:10.324 14:17:50 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:10.324 [2024-10-14 14:17:50.979294] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:04:10.324 [2024-10-14 14:17:50.979348] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3153286 ] 00:04:10.324 [2024-10-14 14:17:51.041036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:10.584 [2024-10-14 14:17:51.077436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:11.155 14:17:51 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:11.155 14:17:51 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:04:11.155 14:17:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:11.155 14:17:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:11.155 14:17:51 dpdk_mem_utility -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:04:11.155 14:17:51 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:11.155 { 00:04:11.155 "filename": "/tmp/spdk_mem_dump.txt" 00:04:11.155 } 00:04:11.155 14:17:51 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:11.155 14:17:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:11.155 DPDK memory size 810.000000 MiB in 1 heap(s) 00:04:11.155 1 heaps totaling size 810.000000 MiB 00:04:11.155 size: 810.000000 MiB heap id: 0 00:04:11.155 end heaps---------- 00:04:11.155 9 mempools totaling size 595.772034 MiB 00:04:11.155 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:11.155 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:11.155 size: 92.545471 MiB name: bdev_io_3153286 00:04:11.155 size: 50.003479 MiB name: msgpool_3153286 00:04:11.155 size: 36.509338 MiB name: fsdev_io_3153286 00:04:11.155 size: 21.763794 MiB name: PDU_Pool 00:04:11.155 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:11.155 size: 4.133484 MiB name: evtpool_3153286 00:04:11.155 size: 0.026123 MiB name: Session_Pool 00:04:11.155 end mempools------- 00:04:11.155 6 memzones totaling size 4.142822 MiB 00:04:11.155 size: 1.000366 MiB name: RG_ring_0_3153286 00:04:11.155 size: 1.000366 MiB name: RG_ring_1_3153286 00:04:11.155 size: 1.000366 MiB name: RG_ring_4_3153286 00:04:11.155 size: 1.000366 MiB name: RG_ring_5_3153286 00:04:11.156 size: 0.125366 MiB name: RG_ring_2_3153286 00:04:11.156 size: 0.015991 MiB name: RG_ring_3_3153286 00:04:11.156 end memzones------- 00:04:11.156 14:17:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:11.156 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:04:11.156 list of free elements. 
size: 10.862488 MiB 00:04:11.156 element at address: 0x200018a00000 with size: 0.999878 MiB 00:04:11.156 element at address: 0x200018c00000 with size: 0.999878 MiB 00:04:11.156 element at address: 0x200000400000 with size: 0.998535 MiB 00:04:11.156 element at address: 0x200031800000 with size: 0.994446 MiB 00:04:11.156 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:11.156 element at address: 0x200012c00000 with size: 0.954285 MiB 00:04:11.156 element at address: 0x200018e00000 with size: 0.936584 MiB 00:04:11.156 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:11.156 element at address: 0x20001a600000 with size: 0.582886 MiB 00:04:11.156 element at address: 0x200000c00000 with size: 0.495422 MiB 00:04:11.156 element at address: 0x20000a600000 with size: 0.490723 MiB 00:04:11.156 element at address: 0x200019000000 with size: 0.485657 MiB 00:04:11.156 element at address: 0x200003e00000 with size: 0.481934 MiB 00:04:11.156 element at address: 0x200027a00000 with size: 0.410034 MiB 00:04:11.156 element at address: 0x200000800000 with size: 0.355042 MiB 00:04:11.156 list of standard malloc elements. 
size: 199.218628 MiB 00:04:11.156 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:11.156 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:11.156 element at address: 0x200018afff80 with size: 1.000122 MiB 00:04:11.156 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:04:11.156 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:11.156 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:11.156 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:04:11.156 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:11.156 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:04:11.156 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:11.156 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:11.156 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:11.156 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:11.156 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:04:11.156 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:11.156 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:11.156 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:04:11.156 element at address: 0x20000085b040 with size: 0.000183 MiB 00:04:11.156 element at address: 0x20000085f300 with size: 0.000183 MiB 00:04:11.156 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:11.156 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:11.156 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:11.156 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:11.156 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:11.156 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:11.156 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:11.156 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:11.156 element at 
address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:11.156 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:11.156 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:04:11.156 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:11.156 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:11.156 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:04:11.156 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:04:11.156 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:04:11.156 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:04:11.156 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:04:11.156 element at address: 0x20001a695380 with size: 0.000183 MiB 00:04:11.156 element at address: 0x20001a695440 with size: 0.000183 MiB 00:04:11.156 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:04:11.156 element at address: 0x200027a69040 with size: 0.000183 MiB 00:04:11.156 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:04:11.156 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:04:11.156 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:04:11.156 list of memzone associated elements. 
size: 599.918884 MiB 00:04:11.156 element at address: 0x20001a695500 with size: 211.416748 MiB 00:04:11.156 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:11.156 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:04:11.156 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:11.156 element at address: 0x200012df4780 with size: 92.045044 MiB 00:04:11.156 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_3153286_0 00:04:11.156 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:11.156 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3153286_0 00:04:11.156 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:11.156 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_3153286_0 00:04:11.156 element at address: 0x2000191be940 with size: 20.255554 MiB 00:04:11.156 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:11.156 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:04:11.156 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:11.156 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:11.156 associated memzone info: size: 3.000122 MiB name: MP_evtpool_3153286_0 00:04:11.156 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:11.156 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3153286 00:04:11.156 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:11.156 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3153286 00:04:11.156 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:11.156 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:11.156 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:04:11.156 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:11.156 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:11.156 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:11.156 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:11.156 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:11.156 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:11.156 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3153286 00:04:11.156 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:11.156 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3153286 00:04:11.156 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:04:11.156 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3153286 00:04:11.156 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:04:11.156 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3153286 00:04:11.156 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:11.156 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_3153286 00:04:11.156 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:11.156 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3153286 00:04:11.156 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:11.156 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:11.156 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:11.156 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:11.156 element at address: 0x20001907c540 with size: 0.250488 MiB 00:04:11.156 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:11.156 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:11.156 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_3153286 00:04:11.156 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:04:11.156 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3153286 00:04:11.156 element at address: 0x2000064f5b80 with size: 0.031738 
MiB 00:04:11.156 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:11.156 element at address: 0x200027a69100 with size: 0.023743 MiB 00:04:11.156 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:11.156 element at address: 0x20000085b100 with size: 0.016113 MiB 00:04:11.156 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3153286 00:04:11.156 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:04:11.156 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:11.156 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:04:11.156 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3153286 00:04:11.156 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:11.156 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_3153286 00:04:11.156 element at address: 0x20000085af00 with size: 0.000305 MiB 00:04:11.156 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3153286 00:04:11.156 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:04:11.156 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:11.156 14:17:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:11.156 14:17:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3153286 00:04:11.156 14:17:51 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 3153286 ']' 00:04:11.156 14:17:51 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 3153286 00:04:11.156 14:17:51 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:04:11.156 14:17:51 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:11.156 14:17:51 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3153286 00:04:11.418 14:17:51 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:11.418 14:17:51 
dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:11.418 14:17:51 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3153286' 00:04:11.418 killing process with pid 3153286 00:04:11.418 14:17:51 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 3153286 00:04:11.418 14:17:51 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 3153286 00:04:11.418 00:04:11.418 real 0m1.407s 00:04:11.418 user 0m1.489s 00:04:11.418 sys 0m0.398s 00:04:11.418 14:17:52 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:11.418 14:17:52 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:11.418 ************************************ 00:04:11.418 END TEST dpdk_mem_utility 00:04:11.418 ************************************ 00:04:11.679 14:17:52 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:11.679 14:17:52 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:11.679 14:17:52 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:11.679 14:17:52 -- common/autotest_common.sh@10 -- # set +x 00:04:11.679 ************************************ 00:04:11.679 START TEST event 00:04:11.679 ************************************ 00:04:11.679 14:17:52 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:11.679 * Looking for test storage... 
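The dpdk_mem_utility test above drives `dpdk_mem_info.py`, which prints one `size: <MiB> ... name: <id>` line per mempool, memzone, or malloc element. As a rough illustration of how those per-element sizes roll up into the totals shown in the dump (e.g. "6 memzones totaling size 4.142822 MiB"), here is a minimal, self-contained sketch that sums a few sample lines with awk. The sample input is hard-coded from the dump above; it does not read the real /tmp/spdk_mem_dump.txt.

```shell
# Hypothetical sketch: sum the "size: X MiB name: Y" lines that
# dpdk_mem_info.py emits. Field $2 of each line is the size in MiB.
total=$(printf '%s\n' \
  'size: 1.000366 MiB name: RG_ring_0_3153286' \
  'size: 1.000366 MiB name: RG_ring_1_3153286' \
  'size: 0.125366 MiB name: RG_ring_2_3153286' \
  | awk '{sum += $2} END {printf "%.6f", sum}')
echo "total MiB: $total"
```

The real script derives these totals from the JSON-backed dump that the `env_dpdk_get_mem_stats` RPC writes; this sketch only mirrors the arithmetic over the printed summary lines.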
00:04:11.679 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:11.679 14:17:52 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:11.679 14:17:52 event -- common/autotest_common.sh@1691 -- # lcov --version 00:04:11.679 14:17:52 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:11.679 14:17:52 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:11.679 14:17:52 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:11.679 14:17:52 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:11.679 14:17:52 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:11.679 14:17:52 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:11.679 14:17:52 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:11.679 14:17:52 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:11.679 14:17:52 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:11.679 14:17:52 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:11.679 14:17:52 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:11.679 14:17:52 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:11.679 14:17:52 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:11.679 14:17:52 event -- scripts/common.sh@344 -- # case "$op" in 00:04:11.679 14:17:52 event -- scripts/common.sh@345 -- # : 1 00:04:11.679 14:17:52 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:11.679 14:17:52 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:11.679 14:17:52 event -- scripts/common.sh@365 -- # decimal 1 00:04:11.679 14:17:52 event -- scripts/common.sh@353 -- # local d=1 00:04:11.679 14:17:52 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:11.679 14:17:52 event -- scripts/common.sh@355 -- # echo 1 00:04:11.679 14:17:52 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:11.679 14:17:52 event -- scripts/common.sh@366 -- # decimal 2 00:04:11.679 14:17:52 event -- scripts/common.sh@353 -- # local d=2 00:04:11.679 14:17:52 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:11.679 14:17:52 event -- scripts/common.sh@355 -- # echo 2 00:04:11.679 14:17:52 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:11.679 14:17:52 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:11.679 14:17:52 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:11.679 14:17:52 event -- scripts/common.sh@368 -- # return 0 00:04:11.679 14:17:52 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:11.679 14:17:52 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:11.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.679 --rc genhtml_branch_coverage=1 00:04:11.679 --rc genhtml_function_coverage=1 00:04:11.679 --rc genhtml_legend=1 00:04:11.679 --rc geninfo_all_blocks=1 00:04:11.679 --rc geninfo_unexecuted_blocks=1 00:04:11.679 00:04:11.679 ' 00:04:11.679 14:17:52 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:11.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.679 --rc genhtml_branch_coverage=1 00:04:11.679 --rc genhtml_function_coverage=1 00:04:11.679 --rc genhtml_legend=1 00:04:11.679 --rc geninfo_all_blocks=1 00:04:11.679 --rc geninfo_unexecuted_blocks=1 00:04:11.679 00:04:11.679 ' 00:04:11.679 14:17:52 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:11.679 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:04:11.679 --rc genhtml_branch_coverage=1 00:04:11.679 --rc genhtml_function_coverage=1 00:04:11.679 --rc genhtml_legend=1 00:04:11.679 --rc geninfo_all_blocks=1 00:04:11.679 --rc geninfo_unexecuted_blocks=1 00:04:11.679 00:04:11.679 ' 00:04:11.679 14:17:52 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:11.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.679 --rc genhtml_branch_coverage=1 00:04:11.679 --rc genhtml_function_coverage=1 00:04:11.679 --rc genhtml_legend=1 00:04:11.679 --rc geninfo_all_blocks=1 00:04:11.679 --rc geninfo_unexecuted_blocks=1 00:04:11.679 00:04:11.679 ' 00:04:11.679 14:17:52 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:11.679 14:17:52 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:11.679 14:17:52 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:11.679 14:17:52 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:04:11.679 14:17:52 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:11.679 14:17:52 event -- common/autotest_common.sh@10 -- # set +x 00:04:11.939 ************************************ 00:04:11.939 START TEST event_perf 00:04:11.939 ************************************ 00:04:11.939 14:17:52 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:11.939 Running I/O for 1 seconds...[2024-10-14 14:17:52.457953] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 
00:04:11.939 [2024-10-14 14:17:52.458055] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3153687 ] 00:04:11.939 [2024-10-14 14:17:52.528453] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:11.939 [2024-10-14 14:17:52.574824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:11.939 [2024-10-14 14:17:52.574942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:11.939 [2024-10-14 14:17:52.575120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:11.939 [2024-10-14 14:17:52.575120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:12.878 Running I/O for 1 seconds... 00:04:12.878 lcore 0: 180331 00:04:12.878 lcore 1: 180332 00:04:12.878 lcore 2: 180328 00:04:12.878 lcore 3: 180331 00:04:12.878 done. 
00:04:12.878 00:04:12.878 real 0m1.173s 00:04:12.878 user 0m4.090s 00:04:12.878 sys 0m0.079s 00:04:12.878 14:17:53 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:12.878 14:17:53 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:12.878 ************************************ 00:04:12.878 END TEST event_perf 00:04:12.878 ************************************ 00:04:13.138 14:17:53 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:13.138 14:17:53 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:04:13.138 14:17:53 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:13.138 14:17:53 event -- common/autotest_common.sh@10 -- # set +x 00:04:13.138 ************************************ 00:04:13.138 START TEST event_reactor 00:04:13.138 ************************************ 00:04:13.138 14:17:53 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:13.138 [2024-10-14 14:17:53.697812] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 
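Throughout these tests the `-m`/`-c` arguments (0xF for event_perf, 0x1 for reactor and reactor_perf) are DPDK-style core masks: bit N selects lcore N, which is why the 0xF run above started reactors on cores 0 through 3. A small self-contained sketch of decoding such a mask (the loop and variable names are illustrative, not part of any SPDK script):

```shell
# Hypothetical sketch: expand a core mask like the "-m 0xF" used by
# event_perf into the list of selected lcores (bit N set => lcore N).
mask=0xF
cores=""
for bit in $(seq 0 31); do
  if [ $(( (mask >> bit) & 1 )) -eq 1 ]; then
    cores="$cores $bit"
  fi
done
cores=${cores# }   # strip the leading separator
echo "selected cores: $cores"
```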
00:04:13.138 [2024-10-14 14:17:53.697922] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3154040 ] 00:04:13.138 [2024-10-14 14:17:53.762105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:13.138 [2024-10-14 14:17:53.797394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:14.522 test_start 00:04:14.522 oneshot 00:04:14.522 tick 100 00:04:14.522 tick 100 00:04:14.522 tick 250 00:04:14.522 tick 100 00:04:14.522 tick 100 00:04:14.522 tick 250 00:04:14.522 tick 100 00:04:14.522 tick 500 00:04:14.522 tick 100 00:04:14.522 tick 100 00:04:14.522 tick 250 00:04:14.522 tick 100 00:04:14.522 tick 100 00:04:14.522 test_end 00:04:14.522 00:04:14.522 real 0m1.153s 00:04:14.522 user 0m1.086s 00:04:14.522 sys 0m0.063s 00:04:14.522 14:17:54 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:14.522 14:17:54 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:14.522 ************************************ 00:04:14.522 END TEST event_reactor 00:04:14.522 ************************************ 00:04:14.522 14:17:54 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:14.522 14:17:54 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:04:14.522 14:17:54 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:14.522 14:17:54 event -- common/autotest_common.sh@10 -- # set +x 00:04:14.522 ************************************ 00:04:14.522 START TEST event_reactor_perf 00:04:14.522 ************************************ 00:04:14.522 14:17:54 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:04:14.522 [2024-10-14 14:17:54.924460] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:04:14.522 [2024-10-14 14:17:54.924565] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3154353 ] 00:04:14.522 [2024-10-14 14:17:54.989488] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:14.522 [2024-10-14 14:17:55.027225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:15.463 test_start 00:04:15.463 test_end 00:04:15.463 Performance: 369701 events per second 00:04:15.463 00:04:15.463 real 0m1.155s 00:04:15.463 user 0m1.085s 00:04:15.463 sys 0m0.066s 00:04:15.463 14:17:56 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:15.463 14:17:56 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:15.463 ************************************ 00:04:15.463 END TEST event_reactor_perf 00:04:15.463 ************************************ 00:04:15.463 14:17:56 event -- event/event.sh@49 -- # uname -s 00:04:15.463 14:17:56 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:15.463 14:17:56 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:15.463 14:17:56 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:15.463 14:17:56 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:15.463 14:17:56 event -- common/autotest_common.sh@10 -- # set +x 00:04:15.463 ************************************ 00:04:15.463 START TEST event_scheduler 00:04:15.463 ************************************ 00:04:15.463 14:17:56 event.event_scheduler -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:15.724 * Looking for test storage... 00:04:15.724 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:15.724 14:17:56 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:15.724 14:17:56 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:04:15.724 14:17:56 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:15.724 14:17:56 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:15.724 14:17:56 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:15.724 14:17:56 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:15.724 14:17:56 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:15.724 14:17:56 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:15.724 14:17:56 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:15.724 14:17:56 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:15.724 14:17:56 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:15.724 14:17:56 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:15.724 14:17:56 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:15.724 14:17:56 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:15.724 14:17:56 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:15.724 14:17:56 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:15.724 14:17:56 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:15.724 14:17:56 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:15.724 14:17:56 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:15.724 14:17:56 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:15.724 14:17:56 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:15.724 14:17:56 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:15.724 14:17:56 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:15.724 14:17:56 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:15.724 14:17:56 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:15.724 14:17:56 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:15.724 14:17:56 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:15.724 14:17:56 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:15.724 14:17:56 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:15.724 14:17:56 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:15.724 14:17:56 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:15.724 14:17:56 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:15.724 14:17:56 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:15.724 14:17:56 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:15.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.724 --rc genhtml_branch_coverage=1 00:04:15.724 --rc genhtml_function_coverage=1 00:04:15.724 --rc genhtml_legend=1 00:04:15.724 --rc geninfo_all_blocks=1 00:04:15.724 --rc geninfo_unexecuted_blocks=1 00:04:15.724 00:04:15.724 ' 00:04:15.724 14:17:56 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:15.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.724 --rc genhtml_branch_coverage=1 00:04:15.724 --rc genhtml_function_coverage=1 00:04:15.724 --rc 
genhtml_legend=1 00:04:15.724 --rc geninfo_all_blocks=1 00:04:15.724 --rc geninfo_unexecuted_blocks=1 00:04:15.724 00:04:15.724 ' 00:04:15.724 14:17:56 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:15.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.724 --rc genhtml_branch_coverage=1 00:04:15.724 --rc genhtml_function_coverage=1 00:04:15.725 --rc genhtml_legend=1 00:04:15.725 --rc geninfo_all_blocks=1 00:04:15.725 --rc geninfo_unexecuted_blocks=1 00:04:15.725 00:04:15.725 ' 00:04:15.725 14:17:56 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:15.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.725 --rc genhtml_branch_coverage=1 00:04:15.725 --rc genhtml_function_coverage=1 00:04:15.725 --rc genhtml_legend=1 00:04:15.725 --rc geninfo_all_blocks=1 00:04:15.725 --rc geninfo_unexecuted_blocks=1 00:04:15.725 00:04:15.725 ' 00:04:15.725 14:17:56 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:15.725 14:17:56 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3154615 00:04:15.725 14:17:56 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:15.725 14:17:56 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:15.725 14:17:56 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3154615 00:04:15.725 14:17:56 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 3154615 ']' 00:04:15.725 14:17:56 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:15.725 14:17:56 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:15.725 14:17:56 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:15.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:15.725 14:17:56 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:15.725 14:17:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:15.725 [2024-10-14 14:17:56.388950] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:04:15.725 [2024-10-14 14:17:56.389025] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3154615 ] 00:04:15.725 [2024-10-14 14:17:56.446403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:15.986 [2024-10-14 14:17:56.487701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:15.986 [2024-10-14 14:17:56.487859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:15.986 [2024-10-14 14:17:56.488014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:15.986 [2024-10-14 14:17:56.488015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:15.986 14:17:56 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:15.986 14:17:56 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:04:15.986 14:17:56 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:15.986 14:17:56 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.986 14:17:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:15.986 [2024-10-14 14:17:56.528480] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:15.986 [2024-10-14 14:17:56.528494] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:15.986 [2024-10-14 14:17:56.528502] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:15.986 [2024-10-14 14:17:56.528506] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:15.986 [2024-10-14 14:17:56.528510] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:15.986 14:17:56 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.986 14:17:56 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:15.986 14:17:56 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.986 14:17:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:15.986 [2024-10-14 14:17:56.585844] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:15.986 14:17:56 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.986 14:17:56 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:15.986 14:17:56 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:15.986 14:17:56 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:15.986 14:17:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:15.986 ************************************ 00:04:15.986 START TEST scheduler_create_thread 00:04:15.986 ************************************ 00:04:15.986 14:17:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:04:15.986 14:17:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:15.986 14:17:56 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.986 14:17:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:15.986 2 00:04:15.986 14:17:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.986 14:17:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:15.986 14:17:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.986 14:17:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:15.986 3 00:04:15.986 14:17:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.986 14:17:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:15.986 14:17:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.986 14:17:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:15.986 4 00:04:15.986 14:17:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.986 14:17:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:15.986 14:17:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.986 14:17:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:15.986 5 00:04:15.986 14:17:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.986 14:17:56 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:15.986 14:17:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.986 14:17:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:15.986 6 00:04:15.986 14:17:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.986 14:17:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:15.986 14:17:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.986 14:17:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:15.986 7 00:04:15.986 14:17:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.986 14:17:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:15.986 14:17:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.986 14:17:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:15.986 8 00:04:15.986 14:17:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.986 14:17:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:15.986 14:17:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.986 14:17:56 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:16.247 9 00:04:16.247 14:17:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.247 14:17:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:16.247 14:17:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.247 14:17:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:16.508 10 00:04:16.508 14:17:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.508 14:17:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:16.508 14:17:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.508 14:17:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:17.890 14:17:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:17.890 14:17:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:17.890 14:17:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:17.890 14:17:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:17.890 14:17:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:18.830 14:17:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:18.830 14:17:59 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:18.830 14:17:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:18.830 14:17:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:19.401 14:18:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:19.401 14:18:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:19.401 14:18:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:19.401 14:18:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:19.401 14:18:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:20.340 14:18:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:20.340 00:04:20.340 real 0m4.224s 00:04:20.340 user 0m0.022s 00:04:20.340 sys 0m0.009s 00:04:20.340 14:18:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:20.340 14:18:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:20.340 ************************************ 00:04:20.340 END TEST scheduler_create_thread 00:04:20.340 ************************************ 00:04:20.340 14:18:00 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:20.340 14:18:00 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3154615 00:04:20.340 14:18:00 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 3154615 ']' 00:04:20.340 14:18:00 event.event_scheduler -- common/autotest_common.sh@954 -- # 
kill -0 3154615 00:04:20.340 14:18:00 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:04:20.340 14:18:00 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:20.340 14:18:00 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3154615 00:04:20.340 14:18:00 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:04:20.340 14:18:00 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:04:20.340 14:18:00 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3154615' 00:04:20.340 killing process with pid 3154615 00:04:20.340 14:18:00 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 3154615 00:04:20.340 14:18:00 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 3154615 00:04:20.600 [2024-10-14 14:18:01.227264] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:04:20.861 00:04:20.861 real 0m5.249s 00:04:20.861 user 0m11.162s 00:04:20.861 sys 0m0.350s 00:04:20.861 14:18:01 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:20.861 14:18:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:20.861 ************************************ 00:04:20.861 END TEST event_scheduler 00:04:20.861 ************************************ 00:04:20.861 14:18:01 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:20.861 14:18:01 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:20.861 14:18:01 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:20.861 14:18:01 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:20.861 14:18:01 event -- common/autotest_common.sh@10 -- # set +x 00:04:20.861 ************************************ 00:04:20.861 START TEST app_repeat 00:04:20.861 ************************************ 00:04:20.861 14:18:01 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:04:20.861 14:18:01 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:20.861 14:18:01 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:20.861 14:18:01 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:20.861 14:18:01 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:20.861 14:18:01 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:20.861 14:18:01 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:20.861 14:18:01 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:20.861 14:18:01 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3155789 00:04:20.861 14:18:01 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:20.861 14:18:01 event.app_repeat -- event/event.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:20.861 14:18:01 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3155789' 00:04:20.861 Process app_repeat pid: 3155789 00:04:20.861 14:18:01 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:20.861 14:18:01 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:20.861 spdk_app_start Round 0 00:04:20.861 14:18:01 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3155789 /var/tmp/spdk-nbd.sock 00:04:20.861 14:18:01 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 3155789 ']' 00:04:20.861 14:18:01 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:20.861 14:18:01 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:20.861 14:18:01 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:20.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:20.861 14:18:01 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:20.861 14:18:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:20.861 [2024-10-14 14:18:01.507912] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 
00:04:20.861 [2024-10-14 14:18:01.508010] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3155789 ] 00:04:20.861 [2024-10-14 14:18:01.575050] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:21.121 [2024-10-14 14:18:01.618409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:21.121 [2024-10-14 14:18:01.618410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:21.121 14:18:01 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:21.121 14:18:01 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:21.121 14:18:01 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:21.121 Malloc0 00:04:21.381 14:18:01 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:21.381 Malloc1 00:04:21.381 14:18:02 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:21.381 14:18:02 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:21.381 14:18:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:21.381 14:18:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:21.381 14:18:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:21.381 14:18:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:21.381 14:18:02 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:21.381 
14:18:02 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:21.381 14:18:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:21.381 14:18:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:21.381 14:18:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:21.381 14:18:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:21.381 14:18:02 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:21.381 14:18:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:21.381 14:18:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:21.381 14:18:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:21.641 /dev/nbd0 00:04:21.641 14:18:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:21.641 14:18:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:21.641 14:18:02 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:04:21.641 14:18:02 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:21.641 14:18:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:21.641 14:18:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:21.641 14:18:02 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:04:21.641 14:18:02 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:21.641 14:18:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:21.641 14:18:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:21.641 14:18:02 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:21.641 1+0 records in 00:04:21.641 1+0 records out 00:04:21.641 4096 bytes (4.1 kB, 4.0 KiB) copied, 9.4259e-05 s, 43.5 MB/s 00:04:21.641 14:18:02 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:21.641 14:18:02 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:21.641 14:18:02 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:21.641 14:18:02 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:21.641 14:18:02 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:21.641 14:18:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:21.641 14:18:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:21.641 14:18:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:21.902 /dev/nbd1 00:04:21.903 14:18:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:21.903 14:18:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:21.903 14:18:02 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:04:21.903 14:18:02 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:21.903 14:18:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:21.903 14:18:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:21.903 14:18:02 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:04:21.903 14:18:02 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:21.903 14:18:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:21.903 14:18:02 event.app_repeat -- 
common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:21.903 14:18:02 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:21.903 1+0 records in 00:04:21.903 1+0 records out 00:04:21.903 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000202008 s, 20.3 MB/s 00:04:21.903 14:18:02 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:21.903 14:18:02 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:21.903 14:18:02 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:21.903 14:18:02 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:21.903 14:18:02 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:21.903 14:18:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:21.903 14:18:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:21.903 14:18:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:21.903 14:18:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:21.903 14:18:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:22.164 14:18:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:22.164 { 00:04:22.164 "nbd_device": "/dev/nbd0", 00:04:22.164 "bdev_name": "Malloc0" 00:04:22.164 }, 00:04:22.164 { 00:04:22.164 "nbd_device": "/dev/nbd1", 00:04:22.164 "bdev_name": "Malloc1" 00:04:22.164 } 00:04:22.164 ]' 00:04:22.164 14:18:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:22.164 { 00:04:22.164 "nbd_device": "/dev/nbd0", 00:04:22.164 "bdev_name": "Malloc0" 00:04:22.164 
}, 00:04:22.164 { 00:04:22.164 "nbd_device": "/dev/nbd1", 00:04:22.164 "bdev_name": "Malloc1" 00:04:22.164 } 00:04:22.164 ]' 00:04:22.164 14:18:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:22.164 14:18:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:22.164 /dev/nbd1' 00:04:22.164 14:18:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:22.164 /dev/nbd1' 00:04:22.164 14:18:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:22.164 14:18:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:22.164 14:18:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:22.164 14:18:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:22.164 14:18:02 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:22.164 14:18:02 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:22.164 14:18:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:22.164 14:18:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:22.164 14:18:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:22.164 14:18:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:22.164 14:18:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:22.164 14:18:02 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:22.164 256+0 records in 00:04:22.164 256+0 records out 00:04:22.164 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127229 s, 82.4 MB/s 00:04:22.164 14:18:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:22.164 14:18:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:22.164 256+0 records in 00:04:22.164 256+0 records out 00:04:22.164 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0203397 s, 51.6 MB/s 00:04:22.164 14:18:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:22.164 14:18:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:22.164 256+0 records in 00:04:22.164 256+0 records out 00:04:22.164 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.023356 s, 44.9 MB/s 00:04:22.164 14:18:02 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:22.164 14:18:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:22.164 14:18:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:22.164 14:18:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:22.164 14:18:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:22.164 14:18:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:22.164 14:18:02 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:22.164 14:18:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:22.164 14:18:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:22.164 14:18:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:22.164 14:18:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:22.164 14:18:02 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:22.164 14:18:02 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:22.164 14:18:02 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:22.164 14:18:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:22.164 14:18:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:22.164 14:18:02 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:22.164 14:18:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:22.164 14:18:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:22.424 14:18:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:22.424 14:18:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:22.424 14:18:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:22.424 14:18:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:22.424 14:18:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:22.424 14:18:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:22.424 14:18:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:22.424 14:18:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:22.424 14:18:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:22.424 14:18:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:22.685 14:18:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:22.685 14:18:03 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:22.685 14:18:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:22.685 14:18:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:22.685 14:18:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:22.685 14:18:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:22.685 14:18:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:22.685 14:18:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:22.685 14:18:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:22.685 14:18:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:22.685 14:18:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:22.685 14:18:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:22.685 14:18:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:22.685 14:18:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:22.685 14:18:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:22.685 14:18:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:22.685 14:18:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:22.685 14:18:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:22.685 14:18:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:22.685 14:18:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:22.685 14:18:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:22.685 14:18:03 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:22.685 14:18:03 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:22.685 14:18:03 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:22.945 14:18:03 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:23.205 [2024-10-14 14:18:03.695926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:23.205 [2024-10-14 14:18:03.733538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:23.205 [2024-10-14 14:18:03.733540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.205 [2024-10-14 14:18:03.765042] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:23.205 [2024-10-14 14:18:03.765100] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:26.507 14:18:06 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:26.507 14:18:06 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:26.507 spdk_app_start Round 1 00:04:26.507 14:18:06 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3155789 /var/tmp/spdk-nbd.sock 00:04:26.507 14:18:06 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 3155789 ']' 00:04:26.507 14:18:06 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:26.507 14:18:06 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:26.507 14:18:06 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:26.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:26.507 14:18:06 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:26.507 14:18:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:26.507 14:18:06 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:26.507 14:18:06 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:26.507 14:18:06 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:26.507 Malloc0 00:04:26.507 14:18:06 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:26.507 Malloc1 00:04:26.507 14:18:07 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:26.507 14:18:07 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:26.507 14:18:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:26.507 14:18:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:26.507 14:18:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:26.507 14:18:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:26.507 14:18:07 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:26.507 14:18:07 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:26.507 14:18:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:26.507 14:18:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:26.507 14:18:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:26.507 14:18:07 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:04:26.507 14:18:07 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:26.507 14:18:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:26.507 14:18:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:26.507 14:18:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:26.768 /dev/nbd0 00:04:26.768 14:18:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:26.768 14:18:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:26.768 14:18:07 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:04:26.768 14:18:07 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:26.768 14:18:07 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:26.768 14:18:07 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:26.768 14:18:07 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:04:26.768 14:18:07 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:26.768 14:18:07 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:26.768 14:18:07 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:26.768 14:18:07 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:26.768 1+0 records in 00:04:26.768 1+0 records out 00:04:26.768 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000240645 s, 17.0 MB/s 00:04:26.768 14:18:07 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:26.768 14:18:07 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:26.768 14:18:07 event.app_repeat -- 
common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:26.768 14:18:07 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:26.768 14:18:07 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:26.768 14:18:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:26.768 14:18:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:26.768 14:18:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:27.029 /dev/nbd1 00:04:27.029 14:18:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:27.029 14:18:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:27.029 14:18:07 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:04:27.029 14:18:07 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:27.029 14:18:07 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:27.029 14:18:07 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:27.029 14:18:07 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:04:27.029 14:18:07 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:27.029 14:18:07 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:27.029 14:18:07 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:27.029 14:18:07 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:27.029 1+0 records in 00:04:27.029 1+0 records out 00:04:27.029 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000263201 s, 15.6 MB/s 00:04:27.029 14:18:07 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:27.029 14:18:07 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:27.029 14:18:07 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:27.029 14:18:07 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:27.029 14:18:07 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:27.029 14:18:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:27.029 14:18:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:27.029 14:18:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:27.029 14:18:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:27.029 14:18:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:27.029 14:18:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:27.029 { 00:04:27.029 "nbd_device": "/dev/nbd0", 00:04:27.029 "bdev_name": "Malloc0" 00:04:27.029 }, 00:04:27.029 { 00:04:27.029 "nbd_device": "/dev/nbd1", 00:04:27.029 "bdev_name": "Malloc1" 00:04:27.029 } 00:04:27.029 ]' 00:04:27.029 14:18:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:27.030 { 00:04:27.030 "nbd_device": "/dev/nbd0", 00:04:27.030 "bdev_name": "Malloc0" 00:04:27.030 }, 00:04:27.030 { 00:04:27.030 "nbd_device": "/dev/nbd1", 00:04:27.030 "bdev_name": "Malloc1" 00:04:27.030 } 00:04:27.030 ]' 00:04:27.030 14:18:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:27.290 14:18:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:27.290 /dev/nbd1' 00:04:27.290 14:18:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:27.290 /dev/nbd1' 00:04:27.290 
14:18:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:27.290 14:18:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:27.290 14:18:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:27.290 14:18:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:27.290 14:18:07 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:27.290 14:18:07 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:27.290 14:18:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:27.290 14:18:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:27.290 14:18:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:27.290 14:18:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:27.290 14:18:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:27.290 14:18:07 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:27.290 256+0 records in 00:04:27.290 256+0 records out 00:04:27.290 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124058 s, 84.5 MB/s 00:04:27.290 14:18:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:27.290 14:18:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:27.290 256+0 records in 00:04:27.290 256+0 records out 00:04:27.290 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0196935 s, 53.2 MB/s 00:04:27.290 14:18:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:27.290 14:18:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:27.290 256+0 records in 00:04:27.290 256+0 records out 00:04:27.290 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0184693 s, 56.8 MB/s 00:04:27.290 14:18:07 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:27.290 14:18:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:27.290 14:18:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:27.290 14:18:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:27.290 14:18:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:27.290 14:18:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:27.290 14:18:07 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:27.291 14:18:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:27.291 14:18:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:27.291 14:18:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:27.291 14:18:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:27.291 14:18:07 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:27.291 14:18:07 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:27.291 14:18:07 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:27.291 14:18:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:04:27.291 14:18:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:27.291 14:18:07 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:27.291 14:18:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:27.291 14:18:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:27.551 14:18:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:27.551 14:18:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:27.551 14:18:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:27.551 14:18:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:27.551 14:18:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:27.551 14:18:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:27.551 14:18:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:27.551 14:18:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:27.551 14:18:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:27.551 14:18:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:27.551 14:18:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:27.551 14:18:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:27.551 14:18:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:27.551 14:18:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:27.551 14:18:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:27.551 14:18:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:27.552 14:18:08 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:04:27.552 14:18:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:27.552 14:18:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:27.552 14:18:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:27.552 14:18:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:27.812 14:18:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:27.812 14:18:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:27.812 14:18:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:27.812 14:18:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:27.812 14:18:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:27.812 14:18:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:27.812 14:18:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:27.812 14:18:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:27.812 14:18:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:27.812 14:18:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:27.812 14:18:08 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:27.812 14:18:08 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:27.812 14:18:08 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:28.073 14:18:08 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:28.073 [2024-10-14 14:18:08.755624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:28.073 [2024-10-14 14:18:08.792895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:28.073 [2024-10-14 14:18:08.792897] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.334 [2024-10-14 14:18:08.825113] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:28.334 [2024-10-14 14:18:08.825144] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:31.634 14:18:11 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:31.634 14:18:11 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:31.634 spdk_app_start Round 2 00:04:31.634 14:18:11 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3155789 /var/tmp/spdk-nbd.sock 00:04:31.634 14:18:11 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 3155789 ']' 00:04:31.634 14:18:11 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:31.634 14:18:11 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:31.634 14:18:11 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:31.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:31.634 14:18:11 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:31.634 14:18:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:31.634 14:18:11 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:31.634 14:18:11 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:31.634 14:18:11 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:31.634 Malloc0 00:04:31.634 14:18:11 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:31.634 Malloc1 00:04:31.634 14:18:12 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:31.634 14:18:12 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:31.634 14:18:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:31.634 14:18:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:31.634 14:18:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:31.634 14:18:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:31.634 14:18:12 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:31.634 14:18:12 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:31.634 14:18:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:31.634 14:18:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:31.634 14:18:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:31.634 14:18:12 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:04:31.634 14:18:12 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:31.634 14:18:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:31.634 14:18:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:31.634 14:18:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:31.894 /dev/nbd0 00:04:31.894 14:18:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:31.894 14:18:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:31.894 14:18:12 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:04:31.894 14:18:12 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:31.894 14:18:12 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:31.894 14:18:12 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:31.894 14:18:12 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:04:31.894 14:18:12 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:31.894 14:18:12 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:31.894 14:18:12 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:31.894 14:18:12 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:31.894 1+0 records in 00:04:31.894 1+0 records out 00:04:31.894 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000235461 s, 17.4 MB/s 00:04:31.894 14:18:12 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:31.894 14:18:12 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:31.894 14:18:12 event.app_repeat -- 
common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:31.894 14:18:12 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:31.894 14:18:12 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:31.894 14:18:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:31.894 14:18:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:31.894 14:18:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:31.894 /dev/nbd1 00:04:31.894 14:18:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:31.894 14:18:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:31.894 14:18:12 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:04:31.894 14:18:12 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:31.894 14:18:12 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:31.894 14:18:12 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:31.894 14:18:12 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:04:31.894 14:18:12 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:31.894 14:18:12 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:31.894 14:18:12 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:31.894 14:18:12 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:31.894 1+0 records in 00:04:31.894 1+0 records out 00:04:31.894 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000238617 s, 17.2 MB/s 00:04:32.154 14:18:12 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:32.154 14:18:12 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:32.154 14:18:12 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:32.154 14:18:12 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:32.154 14:18:12 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:32.154 14:18:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:32.154 14:18:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:32.154 14:18:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:32.154 14:18:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:32.154 14:18:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:32.154 14:18:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:32.154 { 00:04:32.154 "nbd_device": "/dev/nbd0", 00:04:32.154 "bdev_name": "Malloc0" 00:04:32.154 }, 00:04:32.154 { 00:04:32.154 "nbd_device": "/dev/nbd1", 00:04:32.154 "bdev_name": "Malloc1" 00:04:32.154 } 00:04:32.154 ]' 00:04:32.154 14:18:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:32.154 { 00:04:32.154 "nbd_device": "/dev/nbd0", 00:04:32.154 "bdev_name": "Malloc0" 00:04:32.154 }, 00:04:32.154 { 00:04:32.154 "nbd_device": "/dev/nbd1", 00:04:32.154 "bdev_name": "Malloc1" 00:04:32.154 } 00:04:32.154 ]' 00:04:32.154 14:18:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:32.154 14:18:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:32.154 /dev/nbd1' 00:04:32.154 14:18:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:32.154 /dev/nbd1' 00:04:32.154 
14:18:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:32.154 14:18:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:32.154 14:18:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:32.154 14:18:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:32.154 14:18:12 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:32.155 14:18:12 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:32.155 14:18:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:32.155 14:18:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:32.155 14:18:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:32.155 14:18:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:32.155 14:18:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:32.155 14:18:12 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:32.155 256+0 records in 00:04:32.155 256+0 records out 00:04:32.155 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.012739 s, 82.3 MB/s 00:04:32.155 14:18:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:32.155 14:18:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:32.415 256+0 records in 00:04:32.415 256+0 records out 00:04:32.415 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0166506 s, 63.0 MB/s 00:04:32.415 14:18:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:32.415 14:18:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:32.415 256+0 records in 00:04:32.415 256+0 records out 00:04:32.415 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.028967 s, 36.2 MB/s 00:04:32.415 14:18:12 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:32.415 14:18:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:32.415 14:18:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:32.415 14:18:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:32.415 14:18:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:32.415 14:18:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:32.415 14:18:12 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:32.415 14:18:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:32.415 14:18:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:32.415 14:18:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:32.415 14:18:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:32.415 14:18:12 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:32.415 14:18:12 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:32.415 14:18:12 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:32.415 14:18:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:04:32.415 14:18:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:32.415 14:18:12 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:32.415 14:18:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:32.415 14:18:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:32.415 14:18:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:32.415 14:18:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:32.415 14:18:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:32.415 14:18:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:32.415 14:18:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:32.415 14:18:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:32.415 14:18:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:32.415 14:18:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:32.415 14:18:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:32.415 14:18:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:32.675 14:18:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:32.675 14:18:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:32.675 14:18:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:32.675 14:18:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:32.675 14:18:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:32.675 14:18:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:32.675 14:18:13 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:04:32.675 14:18:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:32.675 14:18:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:32.675 14:18:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:32.675 14:18:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:32.934 14:18:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:32.934 14:18:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:32.934 14:18:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:32.934 14:18:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:32.934 14:18:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:32.934 14:18:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:32.934 14:18:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:32.934 14:18:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:32.934 14:18:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:32.934 14:18:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:32.934 14:18:13 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:32.934 14:18:13 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:32.934 14:18:13 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:33.193 14:18:13 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:33.193 [2024-10-14 14:18:13.846439] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:33.193 [2024-10-14 14:18:13.884129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:33.193 [2024-10-14 14:18:13.884290] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.193 [2024-10-14 14:18:13.915784] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:33.193 [2024-10-14 14:18:13.915820] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:36.491 14:18:16 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3155789 /var/tmp/spdk-nbd.sock 00:04:36.492 14:18:16 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 3155789 ']' 00:04:36.492 14:18:16 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:36.492 14:18:16 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:36.492 14:18:16 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:36.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:36.492 14:18:16 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:36.492 14:18:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:36.492 14:18:16 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:36.492 14:18:16 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:36.492 14:18:16 event.app_repeat -- event/event.sh@39 -- # killprocess 3155789 00:04:36.492 14:18:16 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 3155789 ']' 00:04:36.492 14:18:16 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 3155789 00:04:36.492 14:18:16 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:04:36.492 14:18:16 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:36.492 14:18:16 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3155789 00:04:36.492 14:18:16 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:36.492 14:18:16 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:36.492 14:18:16 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3155789' 00:04:36.492 killing process with pid 3155789 00:04:36.492 14:18:16 event.app_repeat -- common/autotest_common.sh@969 -- # kill 3155789 00:04:36.492 14:18:16 event.app_repeat -- common/autotest_common.sh@974 -- # wait 3155789 00:04:36.492 spdk_app_start is called in Round 0. 00:04:36.492 Shutdown signal received, stop current app iteration 00:04:36.492 Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 reinitialization... 00:04:36.492 spdk_app_start is called in Round 1. 00:04:36.492 Shutdown signal received, stop current app iteration 00:04:36.492 Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 reinitialization... 00:04:36.492 spdk_app_start is called in Round 2. 
00:04:36.492 Shutdown signal received, stop current app iteration 00:04:36.492 Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 reinitialization... 00:04:36.492 spdk_app_start is called in Round 3. 00:04:36.492 Shutdown signal received, stop current app iteration 00:04:36.492 14:18:17 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:36.492 14:18:17 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:36.492 00:04:36.492 real 0m15.609s 00:04:36.492 user 0m34.007s 00:04:36.492 sys 0m2.270s 00:04:36.492 14:18:17 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:36.492 14:18:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:36.492 ************************************ 00:04:36.492 END TEST app_repeat 00:04:36.492 ************************************ 00:04:36.492 14:18:17 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:36.492 14:18:17 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:36.492 14:18:17 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:36.492 14:18:17 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:36.492 14:18:17 event -- common/autotest_common.sh@10 -- # set +x 00:04:36.492 ************************************ 00:04:36.492 START TEST cpu_locks 00:04:36.492 ************************************ 00:04:36.492 14:18:17 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:36.752 * Looking for test storage... 
00:04:36.752 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:36.752 14:18:17 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:36.752 14:18:17 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:04:36.752 14:18:17 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:36.752 14:18:17 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:36.752 14:18:17 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:36.752 14:18:17 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:36.752 14:18:17 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:36.752 14:18:17 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:04:36.752 14:18:17 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:04:36.752 14:18:17 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:04:36.752 14:18:17 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:04:36.752 14:18:17 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:04:36.752 14:18:17 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:04:36.752 14:18:17 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:04:36.752 14:18:17 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:36.752 14:18:17 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:04:36.752 14:18:17 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:04:36.752 14:18:17 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:36.752 14:18:17 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:36.752 14:18:17 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:04:36.752 14:18:17 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:04:36.752 14:18:17 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:36.752 14:18:17 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:04:36.752 14:18:17 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:04:36.752 14:18:17 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:04:36.752 14:18:17 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:04:36.752 14:18:17 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:36.752 14:18:17 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:04:36.752 14:18:17 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:04:36.752 14:18:17 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:36.752 14:18:17 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:36.753 14:18:17 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:04:36.753 14:18:17 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:36.753 14:18:17 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:36.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.753 --rc genhtml_branch_coverage=1 00:04:36.753 --rc genhtml_function_coverage=1 00:04:36.753 --rc genhtml_legend=1 00:04:36.753 --rc geninfo_all_blocks=1 00:04:36.753 --rc geninfo_unexecuted_blocks=1 00:04:36.753 00:04:36.753 ' 00:04:36.753 14:18:17 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:36.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.753 --rc genhtml_branch_coverage=1 00:04:36.753 --rc genhtml_function_coverage=1 00:04:36.753 --rc genhtml_legend=1 00:04:36.753 --rc geninfo_all_blocks=1 00:04:36.753 --rc geninfo_unexecuted_blocks=1 
00:04:36.753 00:04:36.753 ' 00:04:36.753 14:18:17 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:36.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.753 --rc genhtml_branch_coverage=1 00:04:36.753 --rc genhtml_function_coverage=1 00:04:36.753 --rc genhtml_legend=1 00:04:36.753 --rc geninfo_all_blocks=1 00:04:36.753 --rc geninfo_unexecuted_blocks=1 00:04:36.753 00:04:36.753 ' 00:04:36.753 14:18:17 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:36.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.753 --rc genhtml_branch_coverage=1 00:04:36.753 --rc genhtml_function_coverage=1 00:04:36.753 --rc genhtml_legend=1 00:04:36.753 --rc geninfo_all_blocks=1 00:04:36.753 --rc geninfo_unexecuted_blocks=1 00:04:36.753 00:04:36.753 ' 00:04:36.753 14:18:17 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:36.753 14:18:17 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:36.753 14:18:17 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:36.753 14:18:17 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:36.753 14:18:17 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:36.753 14:18:17 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:36.753 14:18:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:36.753 ************************************ 00:04:36.753 START TEST default_locks 00:04:36.753 ************************************ 00:04:36.753 14:18:17 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:04:36.753 14:18:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3159659 00:04:36.753 14:18:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3159659 00:04:36.753 14:18:17 
event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:36.753 14:18:17 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 3159659 ']' 00:04:36.753 14:18:17 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:36.753 14:18:17 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:36.753 14:18:17 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:36.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:36.753 14:18:17 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:36.753 14:18:17 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:36.753 [2024-10-14 14:18:17.460099] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 
00:04:36.753 [2024-10-14 14:18:17.460149] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3159659 ] 00:04:37.013 [2024-10-14 14:18:17.521382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.013 [2024-10-14 14:18:17.557589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.583 14:18:18 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:37.583 14:18:18 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:04:37.583 14:18:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3159659 00:04:37.583 14:18:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3159659 00:04:37.583 14:18:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:38.153 lslocks: write error 00:04:38.153 14:18:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3159659 00:04:38.153 14:18:18 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 3159659 ']' 00:04:38.153 14:18:18 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 3159659 00:04:38.153 14:18:18 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:04:38.153 14:18:18 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:38.153 14:18:18 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3159659 00:04:38.153 14:18:18 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:38.153 14:18:18 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:38.153 14:18:18 event.cpu_locks.default_locks -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 3159659' 00:04:38.153 killing process with pid 3159659 00:04:38.153 14:18:18 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 3159659 00:04:38.153 14:18:18 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 3159659 00:04:38.413 14:18:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3159659 00:04:38.413 14:18:18 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:04:38.413 14:18:18 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3159659 00:04:38.413 14:18:18 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:04:38.413 14:18:18 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:38.413 14:18:18 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:04:38.413 14:18:18 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:38.413 14:18:18 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 3159659 00:04:38.413 14:18:18 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 3159659 ']' 00:04:38.413 14:18:18 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:38.413 14:18:18 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:38.413 14:18:18 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:38.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:38.413 14:18:18 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:38.413 14:18:18 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:38.413 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (3159659) - No such process 00:04:38.413 ERROR: process (pid: 3159659) is no longer running 00:04:38.413 14:18:18 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:38.413 14:18:18 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:04:38.413 14:18:18 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:04:38.413 14:18:18 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:38.413 14:18:18 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:38.413 14:18:18 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:38.413 14:18:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:38.413 14:18:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:38.413 14:18:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:38.413 14:18:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:38.413 00:04:38.413 real 0m1.583s 00:04:38.413 user 0m1.712s 00:04:38.413 sys 0m0.529s 00:04:38.413 14:18:18 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:38.413 14:18:18 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:38.413 ************************************ 00:04:38.413 END TEST default_locks 00:04:38.413 ************************************ 00:04:38.413 14:18:19 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:38.413 14:18:19 event.cpu_locks -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:38.413 14:18:19 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:38.413 14:18:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:38.413 ************************************ 00:04:38.413 START TEST default_locks_via_rpc 00:04:38.413 ************************************ 00:04:38.413 14:18:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:04:38.413 14:18:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3160031 00:04:38.413 14:18:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3160031 00:04:38.413 14:18:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:38.413 14:18:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 3160031 ']' 00:04:38.413 14:18:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:38.413 14:18:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:38.413 14:18:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:38.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:38.413 14:18:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:38.413 14:18:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.413 [2024-10-14 14:18:19.105244] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 
00:04:38.413 [2024-10-14 14:18:19.105292] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3160031 ] 00:04:38.673 [2024-10-14 14:18:19.167326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.673 [2024-10-14 14:18:19.204639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.243 14:18:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:39.243 14:18:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:04:39.243 14:18:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:39.243 14:18:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:39.243 14:18:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.243 14:18:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:39.243 14:18:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:39.243 14:18:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:39.243 14:18:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:39.243 14:18:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:39.243 14:18:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:39.243 14:18:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:39.243 14:18:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.243 14:18:19 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:39.243 14:18:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3160031 00:04:39.243 14:18:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3160031 00:04:39.243 14:18:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:39.814 14:18:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3160031 00:04:39.814 14:18:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 3160031 ']' 00:04:39.814 14:18:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 3160031 00:04:39.814 14:18:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:04:39.814 14:18:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:39.814 14:18:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3160031 00:04:39.814 14:18:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:39.814 14:18:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:39.814 14:18:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3160031' 00:04:39.814 killing process with pid 3160031 00:04:39.814 14:18:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 3160031 00:04:39.814 14:18:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 3160031 00:04:40.075 00:04:40.075 real 0m1.537s 00:04:40.075 user 0m1.681s 00:04:40.075 sys 0m0.478s 00:04:40.075 14:18:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:40.075 14:18:20 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.075 ************************************ 00:04:40.075 END TEST default_locks_via_rpc 00:04:40.075 ************************************ 00:04:40.075 14:18:20 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:40.075 14:18:20 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:40.075 14:18:20 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:40.075 14:18:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:40.075 ************************************ 00:04:40.075 START TEST non_locking_app_on_locked_coremask 00:04:40.075 ************************************ 00:04:40.075 14:18:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:04:40.075 14:18:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3160389 00:04:40.075 14:18:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3160389 /var/tmp/spdk.sock 00:04:40.075 14:18:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:40.075 14:18:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3160389 ']' 00:04:40.075 14:18:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:40.075 14:18:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:40.075 14:18:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:04:40.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:40.075 14:18:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:40.075 14:18:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:40.075 [2024-10-14 14:18:20.719432] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:04:40.075 [2024-10-14 14:18:20.719487] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3160389 ] 00:04:40.075 [2024-10-14 14:18:20.783823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.336 [2024-10-14 14:18:20.826846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.908 14:18:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:40.908 14:18:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:04:40.908 14:18:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:40.908 14:18:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3160469 00:04:40.908 14:18:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3160469 /var/tmp/spdk2.sock 00:04:40.908 14:18:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3160469 ']' 00:04:40.908 14:18:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:04:40.908 14:18:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:40.908 14:18:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:40.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:40.908 14:18:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:40.908 14:18:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:40.908 [2024-10-14 14:18:21.541429] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:04:40.908 [2024-10-14 14:18:21.541478] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3160469 ] 00:04:40.908 [2024-10-14 14:18:21.631479] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:40.908 [2024-10-14 14:18:21.631508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.168 [2024-10-14 14:18:21.708086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.739 14:18:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:41.739 14:18:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:04:41.739 14:18:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3160389 00:04:41.739 14:18:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3160389 00:04:41.739 14:18:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:42.311 lslocks: write error 00:04:42.311 14:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3160389 00:04:42.311 14:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3160389 ']' 00:04:42.311 14:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 3160389 00:04:42.311 14:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:04:42.311 14:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:42.311 14:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3160389 00:04:42.572 14:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:42.572 14:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:42.572 14:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 3160389' 00:04:42.572 killing process with pid 3160389 00:04:42.572 14:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 3160389 00:04:42.572 14:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 3160389 00:04:42.833 14:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3160469 00:04:42.833 14:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3160469 ']' 00:04:42.833 14:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 3160469 00:04:42.833 14:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:04:42.833 14:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:42.833 14:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3160469 00:04:42.833 14:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:42.833 14:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:43.094 14:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3160469' 00:04:43.094 killing process with pid 3160469 00:04:43.094 14:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 3160469 00:04:43.094 14:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 3160469 00:04:43.094 00:04:43.094 real 0m3.108s 00:04:43.094 user 0m3.409s 00:04:43.094 sys 0m0.951s 00:04:43.094 14:18:23 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:43.094 14:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:43.094 ************************************
00:04:43.094 END TEST non_locking_app_on_locked_coremask
00:04:43.094 ************************************
00:04:43.094 14:18:23 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:04:43.094 14:18:23 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:43.094 14:18:23 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:43.094 14:18:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:04:43.355 ************************************
00:04:43.355 START TEST locking_app_on_unlocked_coremask
00:04:43.355 ************************************
00:04:43.355 14:18:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask
00:04:43.355 14:18:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3161102
00:04:43.355 14:18:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3161102 /var/tmp/spdk.sock
00:04:43.355 14:18:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:04:43.355 14:18:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3161102 ']'
00:04:43.355 14:18:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:43.355 14:18:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:04:43.355 14:18:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:43.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:43.355 14:18:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:04:43.355 14:18:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:43.355 [2024-10-14 14:18:23.900781] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization...
00:04:43.355 [2024-10-14 14:18:23.900835] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3161102 ]
00:04:43.355 [2024-10-14 14:18:23.965988] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:04:43.355 [2024-10-14 14:18:23.966021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:43.355 [2024-10-14 14:18:24.008839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:44.298 14:18:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:04:44.298 14:18:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0
00:04:44.298 14:18:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:04:44.298 14:18:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3161116
00:04:44.298 14:18:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3161116 /var/tmp/spdk2.sock
00:04:44.298 14:18:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3161116 ']'
00:04:44.298 14:18:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:04:44.298 14:18:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:04:44.298 14:18:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:04:44.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:04:44.298 14:18:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:04:44.298 14:18:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:44.298 [2024-10-14 14:18:24.726176] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization...
00:04:44.298 [2024-10-14 14:18:24.726224] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3161116 ]
00:04:44.298 [2024-10-14 14:18:24.816501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:44.298 [2024-10-14 14:18:24.888889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:44.868 14:18:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:04:44.868 14:18:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0
00:04:44.868 14:18:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3161116
00:04:44.868 14:18:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3161116
00:04:44.868 14:18:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:04:45.440 lslocks: write error
00:04:45.440 14:18:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3161102
00:04:45.440 14:18:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3161102 ']'
00:04:45.440 14:18:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 3161102
00:04:45.440 14:18:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname
00:04:45.440 14:18:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:04:45.440 14:18:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3161102
00:04:45.440 14:18:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:04:45.440 14:18:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:04:45.440 14:18:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3161102'
00:04:45.440 killing process with pid 3161102
00:04:45.440 14:18:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 3161102
00:04:45.440 14:18:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 3161102
00:04:46.011 14:18:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3161116
00:04:46.011 14:18:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3161116 ']'
00:04:46.011 14:18:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 3161116
00:04:46.011 14:18:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname
00:04:46.011 14:18:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:04:46.011 14:18:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3161116
00:04:46.011 14:18:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:04:46.011 14:18:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:04:46.011 14:18:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3161116'
00:04:46.011 killing process with pid 3161116
00:04:46.011 14:18:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 3161116
00:04:46.011 14:18:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 3161116
00:04:46.272
00:04:46.272 real 0m2.940s
00:04:46.272 user 0m3.231s
00:04:46.272 sys 0m0.887s
00:04:46.272 14:18:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:46.272 14:18:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:46.272 ************************************
00:04:46.272 END TEST locking_app_on_unlocked_coremask
00:04:46.272 ************************************
00:04:46.272 14:18:26 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:04:46.272 14:18:26 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:46.272 14:18:26 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:46.272 14:18:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:04:46.272 ************************************
00:04:46.272 START TEST locking_app_on_locked_coremask
00:04:46.272 ************************************
00:04:46.272 14:18:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask
00:04:46.272 14:18:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3161684
00:04:46.272 14:18:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3161684 /var/tmp/spdk.sock
00:04:46.272 14:18:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:04:46.272 14:18:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3161684 ']'
00:04:46.272 14:18:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:46.272 14:18:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:04:46.272 14:18:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:46.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:46.272 14:18:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:04:46.272 14:18:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:46.273 [2024-10-14 14:18:26.912997] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization...
00:04:46.273 [2024-10-14 14:18:26.913052] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3161684 ]
00:04:46.273 [2024-10-14 14:18:26.976977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:46.533 [2024-10-14 14:18:27.019069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:47.104 14:18:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:04:47.104 14:18:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0
00:04:47.104 14:18:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3161826
00:04:47.105 14:18:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3161826 /var/tmp/spdk2.sock
00:04:47.105 14:18:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0
00:04:47.105 14:18:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:04:47.105 14:18:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3161826 /var/tmp/spdk2.sock
00:04:47.105 14:18:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:04:47.105 14:18:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:04:47.105 14:18:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten
00:04:47.105 14:18:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:04:47.105 14:18:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 3161826 /var/tmp/spdk2.sock
00:04:47.105 14:18:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3161826 ']'
00:04:47.105 14:18:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:04:47.105 14:18:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:04:47.105 14:18:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:04:47.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:04:47.105 14:18:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:04:47.105 14:18:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:47.105 [2024-10-14 14:18:27.763479] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization...
00:04:47.105 [2024-10-14 14:18:27.763532] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3161826 ]
00:04:47.365 [2024-10-14 14:18:27.855204] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3161684 has claimed it.
00:04:47.365 [2024-10-14 14:18:27.855243] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:04:47.937 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (3161826) - No such process
00:04:47.937 ERROR: process (pid: 3161826) is no longer running
00:04:47.937 14:18:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:04:47.937 14:18:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1
00:04:47.937 14:18:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1
00:04:47.937 14:18:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:04:47.937 14:18:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:04:47.937 14:18:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:04:47.937 14:18:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3161684
00:04:47.937 14:18:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3161684
00:04:47.937 14:18:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:04:48.198 lslocks: write error
00:04:48.198 14:18:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3161684
00:04:48.198 14:18:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3161684 ']'
00:04:48.198 14:18:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 3161684
00:04:48.198 14:18:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname
00:04:48.198 14:18:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:04:48.198 14:18:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3161684
00:04:48.198 14:18:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:04:48.198 14:18:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:04:48.198 14:18:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3161684'
00:04:48.198 killing process with pid 3161684
00:04:48.198 14:18:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 3161684
00:04:48.198 14:18:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 3161684
00:04:48.458
00:04:48.458 real 0m2.223s
00:04:48.458 user 0m2.522s
00:04:48.458 sys 0m0.586s
00:04:48.458 14:18:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:48.458 14:18:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:48.458 ************************************
00:04:48.458 END TEST locking_app_on_locked_coremask
00:04:48.458 ************************************
00:04:48.458 14:18:29 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:04:48.458 14:18:29 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:48.458 14:18:29 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:48.458 14:18:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:04:48.458 ************************************
00:04:48.458 START TEST locking_overlapped_coremask
00:04:48.458 ************************************
00:04:48.458 14:18:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask
00:04:48.458 14:18:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3162185
00:04:48.458 14:18:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3162185 /var/tmp/spdk.sock
00:04:48.458 14:18:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7
00:04:48.458 14:18:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 3162185 ']'
00:04:48.458 14:18:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:48.459 14:18:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:04:48.459 14:18:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:48.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:48.459 14:18:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:04:48.459 14:18:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:48.719 [2024-10-14 14:18:29.208363] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization...
00:04:48.719 [2024-10-14 14:18:29.208415] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3162185 ]
00:04:48.719 [2024-10-14 14:18:29.271740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:04:48.719 [2024-10-14 14:18:29.313530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:04:48.719 [2024-10-14 14:18:29.313650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:04:48.719 [2024-10-14 14:18:29.313653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:49.290 14:18:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:04:49.290 14:18:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0
00:04:49.290 14:18:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3162267
00:04:49.290 14:18:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3162267 /var/tmp/spdk2.sock
00:04:49.290 14:18:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:04:49.290 14:18:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0
00:04:49.290 14:18:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3162267 /var/tmp/spdk2.sock
00:04:49.290 14:18:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:04:49.290 14:18:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:04:49.290 14:18:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten
00:04:49.290 14:18:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:04:49.290 14:18:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 3162267 /var/tmp/spdk2.sock
00:04:49.290 14:18:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 3162267 ']'
00:04:49.290 14:18:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:04:49.290 14:18:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:04:49.290 14:18:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:04:49.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:04:49.290 14:18:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:04:49.290 14:18:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:49.552 [2024-10-14 14:18:30.056423] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization...
00:04:49.552 [2024-10-14 14:18:30.056516] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3162267 ]
00:04:49.552 [2024-10-14 14:18:30.140070] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3162185 has claimed it.
00:04:49.552 [2024-10-14 14:18:30.140104] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:04:50.124 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (3162267) - No such process
00:04:50.124 ERROR: process (pid: 3162267) is no longer running
00:04:50.124 14:18:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:04:50.124 14:18:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1
00:04:50.124 14:18:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1
00:04:50.124 14:18:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:04:50.124 14:18:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:04:50.124 14:18:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:04:50.124 14:18:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:04:50.124 14:18:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:04:50.124 14:18:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:04:50.124 14:18:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:04:50.124 14:18:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3162185
00:04:50.124 14:18:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 3162185 ']'
00:04:50.124 14:18:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 3162185
00:04:50.124 14:18:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname
00:04:50.124 14:18:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:04:50.124 14:18:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3162185
00:04:50.124 14:18:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:04:50.124 14:18:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:04:50.124 14:18:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3162185'
00:04:50.124 killing process with pid 3162185
00:04:50.124 14:18:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 3162185
00:04:50.124 14:18:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 3162185
00:04:50.385
00:04:50.385 real 0m1.777s
00:04:50.385 user 0m5.163s
00:04:50.385 sys 0m0.373s
00:04:50.385 14:18:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:50.385 14:18:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:04:50.385 ************************************
00:04:50.385 END TEST locking_overlapped_coremask
00:04:50.385 ************************************
00:04:50.385 14:18:30 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:04:50.385 14:18:30 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:50.385 14:18:30 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:50.385 14:18:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:04:50.385 ************************************
00:04:50.385 START TEST locking_overlapped_coremask_via_rpc
00:04:50.385 ************************************
00:04:50.385 14:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc
00:04:50.385 14:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:04:50.385 14:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3162563
00:04:50.385 14:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3162563 /var/tmp/spdk.sock
00:04:50.385 14:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 3162563 ']'
00:04:50.385 14:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:50.385 14:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:04:50.385 14:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:50.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:50.385 14:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:04:50.385 14:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:50.385 [2024-10-14 14:18:31.043592] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization...
00:04:50.385 [2024-10-14 14:18:31.043640] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3162563 ]
00:04:50.385 [2024-10-14 14:18:31.103630] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:04:50.385 [2024-10-14 14:18:31.103656] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:04:50.646 [2024-10-14 14:18:31.141585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:04:50.646 [2024-10-14 14:18:31.141722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:04:50.646 [2024-10-14 14:18:31.141725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:50.646 14:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:04:50.646 14:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0
00:04:50.646 14:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3162577
00:04:50.646 14:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3162577 /var/tmp/spdk2.sock
00:04:50.646 14:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks
00:04:50.646 14:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 3162577 ']'
00:04:50.646 14:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:04:50.646 14:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:04:50.646 14:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:04:50.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:04:50.646 14:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:04:50.646 14:18:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:50.907 [2024-10-14 14:18:31.387438] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization...
00:04:50.907 [2024-10-14 14:18:31.387494] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3162577 ]
00:04:50.907 [2024-10-14 14:18:31.459443] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:04:50.907 [2024-10-14 14:18:31.459468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:04:50.907 [2024-10-14 14:18:31.522597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:04:50.907 [2024-10-14 14:18:31.526183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:04:50.907 [2024-10-14 14:18:31.526185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:04:51.478 14:18:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:04:51.478 14:18:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0
00:04:51.478 14:18:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks
00:04:51.478 14:18:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:51.478 14:18:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:51.478 14:18:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:04:51.478 14:18:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:04:51.478 14:18:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0
00:04:51.479 14:18:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:04:51.479 14:18:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:04:51.479 14:18:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:04:51.479 14:18:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:04:51.479 14:18:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:04:51.479 14:18:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:04:51.479 14:18:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:04:51.479 14:18:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:51.479 [2024-10-14 14:18:32.191122] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3162563 has claimed it.
00:04:51.479 request:
00:04:51.479 {
00:04:51.479 "method": "framework_enable_cpumask_locks",
00:04:51.479 "req_id": 1
00:04:51.479 }
00:04:51.479 Got JSON-RPC error response
00:04:51.479 response:
00:04:51.479 {
00:04:51.479 "code": -32603,
00:04:51.479 "message": "Failed to claim CPU core: 2"
00:04:51.479 }
00:04:51.479 14:18:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:04:51.479 14:18:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1
00:04:51.479 14:18:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:04:51.479 14:18:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:04:51.479 14:18:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:04:51.479 14:18:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3162563 /var/tmp/spdk.sock
00:04:51.479 14:18:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831
-- # '[' -z 3162563 ']' 00:04:51.479 14:18:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:51.479 14:18:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:51.479 14:18:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:51.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:51.479 14:18:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:51.479 14:18:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.739 14:18:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:51.739 14:18:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:04:51.739 14:18:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3162577 /var/tmp/spdk2.sock 00:04:51.739 14:18:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 3162577 ']' 00:04:51.739 14:18:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:51.739 14:18:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:51.739 14:18:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:51.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:04:51.739 14:18:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:51.739 14:18:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.000 14:18:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:52.000 14:18:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:04:52.000 14:18:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:04:52.000 14:18:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:52.000 14:18:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:52.000 14:18:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:52.000 00:04:52.000 real 0m1.563s 00:04:52.000 user 0m0.721s 00:04:52.000 sys 0m0.133s 00:04:52.000 14:18:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:52.000 14:18:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.000 ************************************ 00:04:52.000 END TEST locking_overlapped_coremask_via_rpc 00:04:52.000 ************************************ 00:04:52.000 14:18:32 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:04:52.000 14:18:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3162563 ]] 00:04:52.000 14:18:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 3162563 00:04:52.000 14:18:32 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 3162563 ']' 00:04:52.000 14:18:32 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 3162563 00:04:52.000 14:18:32 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:04:52.000 14:18:32 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:52.000 14:18:32 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3162563 00:04:52.000 14:18:32 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:52.000 14:18:32 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:52.000 14:18:32 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3162563' 00:04:52.000 killing process with pid 3162563 00:04:52.000 14:18:32 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 3162563 00:04:52.000 14:18:32 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 3162563 00:04:52.260 14:18:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3162577 ]] 00:04:52.260 14:18:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3162577 00:04:52.260 14:18:32 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 3162577 ']' 00:04:52.260 14:18:32 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 3162577 00:04:52.260 14:18:32 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:04:52.260 14:18:32 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:52.260 14:18:32 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3162577 00:04:52.260 14:18:32 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:04:52.260 14:18:32 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:04:52.260 14:18:32 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
3162577' 00:04:52.260 killing process with pid 3162577 00:04:52.260 14:18:32 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 3162577 00:04:52.260 14:18:32 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 3162577 00:04:52.520 14:18:33 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:52.520 14:18:33 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:04:52.520 14:18:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3162563 ]] 00:04:52.520 14:18:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3162563 00:04:52.520 14:18:33 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 3162563 ']' 00:04:52.520 14:18:33 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 3162563 00:04:52.520 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3162563) - No such process 00:04:52.520 14:18:33 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 3162563 is not found' 00:04:52.520 Process with pid 3162563 is not found 00:04:52.520 14:18:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3162577 ]] 00:04:52.520 14:18:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3162577 00:04:52.520 14:18:33 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 3162577 ']' 00:04:52.520 14:18:33 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 3162577 00:04:52.520 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3162577) - No such process 00:04:52.520 14:18:33 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 3162577 is not found' 00:04:52.520 Process with pid 3162577 is not found 00:04:52.520 14:18:33 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:52.520 00:04:52.520 real 0m15.991s 00:04:52.520 user 0m27.195s 00:04:52.520 sys 0m4.847s 00:04:52.520 14:18:33 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:52.520 
14:18:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:52.520 ************************************ 00:04:52.520 END TEST cpu_locks 00:04:52.520 ************************************ 00:04:52.520 00:04:52.520 real 0m40.968s 00:04:52.520 user 1m18.895s 00:04:52.520 sys 0m8.078s 00:04:52.520 14:18:33 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:52.520 14:18:33 event -- common/autotest_common.sh@10 -- # set +x 00:04:52.520 ************************************ 00:04:52.520 END TEST event 00:04:52.521 ************************************ 00:04:52.521 14:18:33 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:52.521 14:18:33 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:52.521 14:18:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:52.521 14:18:33 -- common/autotest_common.sh@10 -- # set +x 00:04:52.782 ************************************ 00:04:52.782 START TEST thread 00:04:52.782 ************************************ 00:04:52.782 14:18:33 thread -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:52.782 * Looking for test storage... 
00:04:52.782 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:04:52.782 14:18:33 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:52.782 14:18:33 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:04:52.782 14:18:33 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:52.782 14:18:33 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:52.782 14:18:33 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:52.782 14:18:33 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:52.782 14:18:33 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:52.782 14:18:33 thread -- scripts/common.sh@336 -- # IFS=.-: 00:04:52.782 14:18:33 thread -- scripts/common.sh@336 -- # read -ra ver1 00:04:52.782 14:18:33 thread -- scripts/common.sh@337 -- # IFS=.-: 00:04:52.782 14:18:33 thread -- scripts/common.sh@337 -- # read -ra ver2 00:04:52.782 14:18:33 thread -- scripts/common.sh@338 -- # local 'op=<' 00:04:52.782 14:18:33 thread -- scripts/common.sh@340 -- # ver1_l=2 00:04:52.782 14:18:33 thread -- scripts/common.sh@341 -- # ver2_l=1 00:04:52.782 14:18:33 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:52.782 14:18:33 thread -- scripts/common.sh@344 -- # case "$op" in 00:04:52.782 14:18:33 thread -- scripts/common.sh@345 -- # : 1 00:04:52.782 14:18:33 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:52.782 14:18:33 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:52.782 14:18:33 thread -- scripts/common.sh@365 -- # decimal 1 00:04:52.782 14:18:33 thread -- scripts/common.sh@353 -- # local d=1 00:04:52.782 14:18:33 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:52.782 14:18:33 thread -- scripts/common.sh@355 -- # echo 1 00:04:52.782 14:18:33 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:04:52.782 14:18:33 thread -- scripts/common.sh@366 -- # decimal 2 00:04:52.782 14:18:33 thread -- scripts/common.sh@353 -- # local d=2 00:04:52.782 14:18:33 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:52.782 14:18:33 thread -- scripts/common.sh@355 -- # echo 2 00:04:52.782 14:18:33 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:04:52.782 14:18:33 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:52.782 14:18:33 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:52.782 14:18:33 thread -- scripts/common.sh@368 -- # return 0 00:04:52.782 14:18:33 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:52.782 14:18:33 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:52.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.782 --rc genhtml_branch_coverage=1 00:04:52.782 --rc genhtml_function_coverage=1 00:04:52.782 --rc genhtml_legend=1 00:04:52.782 --rc geninfo_all_blocks=1 00:04:52.782 --rc geninfo_unexecuted_blocks=1 00:04:52.782 00:04:52.782 ' 00:04:52.783 14:18:33 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:52.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.783 --rc genhtml_branch_coverage=1 00:04:52.783 --rc genhtml_function_coverage=1 00:04:52.783 --rc genhtml_legend=1 00:04:52.783 --rc geninfo_all_blocks=1 00:04:52.783 --rc geninfo_unexecuted_blocks=1 00:04:52.783 00:04:52.783 ' 00:04:52.783 14:18:33 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:52.783 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.783 --rc genhtml_branch_coverage=1 00:04:52.783 --rc genhtml_function_coverage=1 00:04:52.783 --rc genhtml_legend=1 00:04:52.783 --rc geninfo_all_blocks=1 00:04:52.783 --rc geninfo_unexecuted_blocks=1 00:04:52.783 00:04:52.783 ' 00:04:52.783 14:18:33 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:52.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.783 --rc genhtml_branch_coverage=1 00:04:52.783 --rc genhtml_function_coverage=1 00:04:52.783 --rc genhtml_legend=1 00:04:52.783 --rc geninfo_all_blocks=1 00:04:52.783 --rc geninfo_unexecuted_blocks=1 00:04:52.783 00:04:52.783 ' 00:04:52.783 14:18:33 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:52.783 14:18:33 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:04:52.783 14:18:33 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:52.783 14:18:33 thread -- common/autotest_common.sh@10 -- # set +x 00:04:52.783 ************************************ 00:04:52.783 START TEST thread_poller_perf 00:04:52.783 ************************************ 00:04:52.783 14:18:33 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:53.043 [2024-10-14 14:18:33.518589] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 
00:04:53.043 [2024-10-14 14:18:33.518698] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3163157 ] 00:04:53.043 [2024-10-14 14:18:33.588585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.043 [2024-10-14 14:18:33.632829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.043 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:04:53.983 [2024-10-14T12:18:34.710Z] ====================================== 00:04:53.983 [2024-10-14T12:18:34.710Z] busy:2406657664 (cyc) 00:04:53.983 [2024-10-14T12:18:34.710Z] total_run_count: 286000 00:04:53.983 [2024-10-14T12:18:34.710Z] tsc_hz: 2400000000 (cyc) 00:04:53.983 [2024-10-14T12:18:34.710Z] ====================================== 00:04:53.983 [2024-10-14T12:18:34.710Z] poller_cost: 8414 (cyc), 3505 (nsec) 00:04:53.983 00:04:53.983 real 0m1.177s 00:04:53.983 user 0m1.107s 00:04:53.983 sys 0m0.066s 00:04:53.983 14:18:34 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:53.983 14:18:34 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:53.983 ************************************ 00:04:53.983 END TEST thread_poller_perf 00:04:53.983 ************************************ 00:04:53.983 14:18:34 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:53.983 14:18:34 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:04:53.983 14:18:34 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:53.983 14:18:34 thread -- common/autotest_common.sh@10 -- # set +x 00:04:54.243 ************************************ 00:04:54.243 START TEST thread_poller_perf 00:04:54.243 
************************************ 00:04:54.243 14:18:34 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:54.243 [2024-10-14 14:18:34.772672] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:04:54.244 [2024-10-14 14:18:34.772777] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3163370 ] 00:04:54.244 [2024-10-14 14:18:34.838456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.244 [2024-10-14 14:18:34.875571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.244 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:04:55.186 [2024-10-14T12:18:35.913Z] ====================================== 00:04:55.186 [2024-10-14T12:18:35.913Z] busy:2402102354 (cyc) 00:04:55.186 [2024-10-14T12:18:35.913Z] total_run_count: 3812000 00:04:55.186 [2024-10-14T12:18:35.913Z] tsc_hz: 2400000000 (cyc) 00:04:55.186 [2024-10-14T12:18:35.913Z] ====================================== 00:04:55.186 [2024-10-14T12:18:35.913Z] poller_cost: 630 (cyc), 262 (nsec) 00:04:55.186 00:04:55.186 real 0m1.157s 00:04:55.186 user 0m1.093s 00:04:55.186 sys 0m0.060s 00:04:55.186 14:18:35 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:55.186 14:18:35 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:55.186 ************************************ 00:04:55.186 END TEST thread_poller_perf 00:04:55.186 ************************************ 00:04:55.446 14:18:35 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:04:55.447 00:04:55.447 real 0m2.692s 00:04:55.447 user 0m2.399s 00:04:55.447 sys 0m0.306s 00:04:55.447 14:18:35 thread -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:04:55.447 14:18:35 thread -- common/autotest_common.sh@10 -- # set +x 00:04:55.447 ************************************ 00:04:55.447 END TEST thread 00:04:55.447 ************************************ 00:04:55.447 14:18:35 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:04:55.447 14:18:35 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:04:55.447 14:18:35 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:55.447 14:18:35 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:55.447 14:18:35 -- common/autotest_common.sh@10 -- # set +x 00:04:55.447 ************************************ 00:04:55.447 START TEST app_cmdline 00:04:55.447 ************************************ 00:04:55.447 14:18:36 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:04:55.447 * Looking for test storage... 00:04:55.447 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:04:55.447 14:18:36 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:55.447 14:18:36 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:55.447 14:18:36 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:04:55.707 14:18:36 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:55.707 14:18:36 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:55.707 14:18:36 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:55.707 14:18:36 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:55.707 14:18:36 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:04:55.707 14:18:36 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:04:55.707 14:18:36 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:04:55.707 14:18:36 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:04:55.707 14:18:36 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:04:55.707 14:18:36 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:04:55.707 14:18:36 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:04:55.707 14:18:36 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:55.707 14:18:36 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:04:55.707 14:18:36 app_cmdline -- scripts/common.sh@345 -- # : 1 00:04:55.707 14:18:36 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:55.707 14:18:36 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:55.707 14:18:36 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:04:55.707 14:18:36 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:04:55.707 14:18:36 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:55.707 14:18:36 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:04:55.707 14:18:36 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:04:55.707 14:18:36 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:04:55.707 14:18:36 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:04:55.707 14:18:36 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:55.707 14:18:36 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:04:55.707 14:18:36 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:04:55.707 14:18:36 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:55.707 14:18:36 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:55.707 14:18:36 app_cmdline -- scripts/common.sh@368 -- # return 0 00:04:55.707 14:18:36 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:55.707 14:18:36 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:55.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.707 --rc genhtml_branch_coverage=1 
00:04:55.707 --rc genhtml_function_coverage=1 00:04:55.707 --rc genhtml_legend=1 00:04:55.707 --rc geninfo_all_blocks=1 00:04:55.707 --rc geninfo_unexecuted_blocks=1 00:04:55.707 00:04:55.707 ' 00:04:55.707 14:18:36 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:55.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.707 --rc genhtml_branch_coverage=1 00:04:55.707 --rc genhtml_function_coverage=1 00:04:55.707 --rc genhtml_legend=1 00:04:55.707 --rc geninfo_all_blocks=1 00:04:55.707 --rc geninfo_unexecuted_blocks=1 00:04:55.707 00:04:55.707 ' 00:04:55.707 14:18:36 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:55.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.707 --rc genhtml_branch_coverage=1 00:04:55.707 --rc genhtml_function_coverage=1 00:04:55.707 --rc genhtml_legend=1 00:04:55.707 --rc geninfo_all_blocks=1 00:04:55.707 --rc geninfo_unexecuted_blocks=1 00:04:55.707 00:04:55.707 ' 00:04:55.707 14:18:36 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:55.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.707 --rc genhtml_branch_coverage=1 00:04:55.707 --rc genhtml_function_coverage=1 00:04:55.707 --rc genhtml_legend=1 00:04:55.707 --rc geninfo_all_blocks=1 00:04:55.707 --rc geninfo_unexecuted_blocks=1 00:04:55.707 00:04:55.707 ' 00:04:55.707 14:18:36 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:04:55.707 14:18:36 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3163771 00:04:55.707 14:18:36 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3163771 00:04:55.707 14:18:36 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:04:55.707 14:18:36 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 3163771 ']' 00:04:55.707 14:18:36 app_cmdline -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:04:55.707 14:18:36 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:55.707 14:18:36 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:55.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:55.707 14:18:36 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:55.707 14:18:36 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:55.707 [2024-10-14 14:18:36.292726] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:04:55.708 [2024-10-14 14:18:36.292802] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3163771 ] 00:04:55.708 [2024-10-14 14:18:36.358584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.708 [2024-10-14 14:18:36.402537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.649 14:18:37 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:56.649 14:18:37 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:04:56.649 14:18:37 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:04:56.649 { 00:04:56.649 "version": "SPDK v25.01-pre git sha1 118c273ab", 00:04:56.649 "fields": { 00:04:56.649 "major": 25, 00:04:56.649 "minor": 1, 00:04:56.649 "patch": 0, 00:04:56.649 "suffix": "-pre", 00:04:56.649 "commit": "118c273ab" 00:04:56.649 } 00:04:56.649 } 00:04:56.649 14:18:37 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:04:56.649 14:18:37 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:04:56.649 14:18:37 app_cmdline -- app/cmdline.sh@24 -- 
# expected_methods+=("spdk_get_version") 00:04:56.649 14:18:37 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:04:56.649 14:18:37 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:04:56.649 14:18:37 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:56.649 14:18:37 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:56.649 14:18:37 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:04:56.649 14:18:37 app_cmdline -- app/cmdline.sh@26 -- # sort 00:04:56.649 14:18:37 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:56.649 14:18:37 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:04:56.649 14:18:37 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:04:56.649 14:18:37 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:56.649 14:18:37 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:04:56.649 14:18:37 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:56.649 14:18:37 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:56.649 14:18:37 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:56.649 14:18:37 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:56.649 14:18:37 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:56.649 14:18:37 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:56.649 14:18:37 app_cmdline -- common/autotest_common.sh@642 -- # case 
"$(type -t "$arg")" in 00:04:56.649 14:18:37 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:56.649 14:18:37 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:04:56.649 14:18:37 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:56.909 request: 00:04:56.909 { 00:04:56.909 "method": "env_dpdk_get_mem_stats", 00:04:56.909 "req_id": 1 00:04:56.909 } 00:04:56.909 Got JSON-RPC error response 00:04:56.909 response: 00:04:56.909 { 00:04:56.909 "code": -32601, 00:04:56.909 "message": "Method not found" 00:04:56.909 } 00:04:56.909 14:18:37 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:04:56.909 14:18:37 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:56.909 14:18:37 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:56.909 14:18:37 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:56.909 14:18:37 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3163771 00:04:56.909 14:18:37 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 3163771 ']' 00:04:56.909 14:18:37 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 3163771 00:04:56.909 14:18:37 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:04:56.909 14:18:37 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:56.909 14:18:37 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3163771 00:04:56.909 14:18:37 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:56.909 14:18:37 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:56.909 14:18:37 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3163771' 00:04:56.909 killing process with pid 3163771 00:04:56.909 
14:18:37 app_cmdline -- common/autotest_common.sh@969 -- # kill 3163771 00:04:56.909 14:18:37 app_cmdline -- common/autotest_common.sh@974 -- # wait 3163771 00:04:57.169 00:04:57.169 real 0m1.692s 00:04:57.169 user 0m2.011s 00:04:57.169 sys 0m0.442s 00:04:57.169 14:18:37 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:57.169 14:18:37 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:57.169 ************************************ 00:04:57.169 END TEST app_cmdline 00:04:57.169 ************************************ 00:04:57.169 14:18:37 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:04:57.169 14:18:37 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:57.170 14:18:37 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:57.170 14:18:37 -- common/autotest_common.sh@10 -- # set +x 00:04:57.170 ************************************ 00:04:57.170 START TEST version 00:04:57.170 ************************************ 00:04:57.170 14:18:37 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:04:57.170 * Looking for test storage... 
00:04:57.170 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:04:57.170 14:18:37 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:57.170 14:18:37 version -- common/autotest_common.sh@1691 -- # lcov --version 00:04:57.170 14:18:37 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:57.429 14:18:37 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:57.429 14:18:37 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:57.429 14:18:37 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:57.429 14:18:37 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:57.429 14:18:37 version -- scripts/common.sh@336 -- # IFS=.-: 00:04:57.429 14:18:37 version -- scripts/common.sh@336 -- # read -ra ver1 00:04:57.429 14:18:37 version -- scripts/common.sh@337 -- # IFS=.-: 00:04:57.429 14:18:37 version -- scripts/common.sh@337 -- # read -ra ver2 00:04:57.430 14:18:37 version -- scripts/common.sh@338 -- # local 'op=<' 00:04:57.430 14:18:37 version -- scripts/common.sh@340 -- # ver1_l=2 00:04:57.430 14:18:37 version -- scripts/common.sh@341 -- # ver2_l=1 00:04:57.430 14:18:37 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:57.430 14:18:37 version -- scripts/common.sh@344 -- # case "$op" in 00:04:57.430 14:18:37 version -- scripts/common.sh@345 -- # : 1 00:04:57.430 14:18:37 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:57.430 14:18:37 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:57.430 14:18:37 version -- scripts/common.sh@365 -- # decimal 1 00:04:57.430 14:18:37 version -- scripts/common.sh@353 -- # local d=1 00:04:57.430 14:18:37 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:57.430 14:18:37 version -- scripts/common.sh@355 -- # echo 1 00:04:57.430 14:18:37 version -- scripts/common.sh@365 -- # ver1[v]=1 00:04:57.430 14:18:37 version -- scripts/common.sh@366 -- # decimal 2 00:04:57.430 14:18:37 version -- scripts/common.sh@353 -- # local d=2 00:04:57.430 14:18:37 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:57.430 14:18:37 version -- scripts/common.sh@355 -- # echo 2 00:04:57.430 14:18:37 version -- scripts/common.sh@366 -- # ver2[v]=2 00:04:57.430 14:18:37 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:57.430 14:18:37 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:57.430 14:18:37 version -- scripts/common.sh@368 -- # return 0 00:04:57.430 14:18:37 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:57.430 14:18:37 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:57.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.430 --rc genhtml_branch_coverage=1 00:04:57.430 --rc genhtml_function_coverage=1 00:04:57.430 --rc genhtml_legend=1 00:04:57.430 --rc geninfo_all_blocks=1 00:04:57.430 --rc geninfo_unexecuted_blocks=1 00:04:57.430 00:04:57.430 ' 00:04:57.430 14:18:37 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:57.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.430 --rc genhtml_branch_coverage=1 00:04:57.430 --rc genhtml_function_coverage=1 00:04:57.430 --rc genhtml_legend=1 00:04:57.430 --rc geninfo_all_blocks=1 00:04:57.430 --rc geninfo_unexecuted_blocks=1 00:04:57.430 00:04:57.430 ' 00:04:57.430 14:18:37 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:57.430 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.430 --rc genhtml_branch_coverage=1 00:04:57.430 --rc genhtml_function_coverage=1 00:04:57.430 --rc genhtml_legend=1 00:04:57.430 --rc geninfo_all_blocks=1 00:04:57.430 --rc geninfo_unexecuted_blocks=1 00:04:57.430 00:04:57.430 ' 00:04:57.430 14:18:37 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:57.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.430 --rc genhtml_branch_coverage=1 00:04:57.430 --rc genhtml_function_coverage=1 00:04:57.430 --rc genhtml_legend=1 00:04:57.430 --rc geninfo_all_blocks=1 00:04:57.430 --rc geninfo_unexecuted_blocks=1 00:04:57.430 00:04:57.430 ' 00:04:57.430 14:18:37 version -- app/version.sh@17 -- # get_header_version major 00:04:57.430 14:18:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:57.430 14:18:37 version -- app/version.sh@14 -- # cut -f2 00:04:57.430 14:18:37 version -- app/version.sh@14 -- # tr -d '"' 00:04:57.430 14:18:37 version -- app/version.sh@17 -- # major=25 00:04:57.430 14:18:37 version -- app/version.sh@18 -- # get_header_version minor 00:04:57.430 14:18:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:57.430 14:18:37 version -- app/version.sh@14 -- # cut -f2 00:04:57.430 14:18:37 version -- app/version.sh@14 -- # tr -d '"' 00:04:57.430 14:18:37 version -- app/version.sh@18 -- # minor=1 00:04:57.430 14:18:38 version -- app/version.sh@19 -- # get_header_version patch 00:04:57.430 14:18:38 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:57.430 14:18:38 version -- app/version.sh@14 -- # cut -f2 00:04:57.430 14:18:38 version -- app/version.sh@14 -- # tr -d '"' 00:04:57.430 
14:18:38 version -- app/version.sh@19 -- # patch=0 00:04:57.430 14:18:38 version -- app/version.sh@20 -- # get_header_version suffix 00:04:57.430 14:18:38 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:57.430 14:18:38 version -- app/version.sh@14 -- # cut -f2 00:04:57.430 14:18:38 version -- app/version.sh@14 -- # tr -d '"' 00:04:57.430 14:18:38 version -- app/version.sh@20 -- # suffix=-pre 00:04:57.430 14:18:38 version -- app/version.sh@22 -- # version=25.1 00:04:57.430 14:18:38 version -- app/version.sh@25 -- # (( patch != 0 )) 00:04:57.430 14:18:38 version -- app/version.sh@28 -- # version=25.1rc0 00:04:57.430 14:18:38 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:04:57.430 14:18:38 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:04:57.430 14:18:38 version -- app/version.sh@30 -- # py_version=25.1rc0 00:04:57.430 14:18:38 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:04:57.430 00:04:57.430 real 0m0.264s 00:04:57.430 user 0m0.152s 00:04:57.430 sys 0m0.156s 00:04:57.430 14:18:38 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:57.430 14:18:38 version -- common/autotest_common.sh@10 -- # set +x 00:04:57.430 ************************************ 00:04:57.430 END TEST version 00:04:57.430 ************************************ 00:04:57.430 14:18:38 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:04:57.430 14:18:38 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:04:57.430 14:18:38 -- spdk/autotest.sh@194 -- # uname -s 00:04:57.430 14:18:38 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:04:57.430 14:18:38 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:04:57.430 14:18:38 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:04:57.430 14:18:38 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:04:57.430 14:18:38 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:04:57.430 14:18:38 -- spdk/autotest.sh@256 -- # timing_exit lib 00:04:57.430 14:18:38 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:57.430 14:18:38 -- common/autotest_common.sh@10 -- # set +x 00:04:57.430 14:18:38 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:04:57.430 14:18:38 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:04:57.430 14:18:38 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:04:57.430 14:18:38 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:04:57.430 14:18:38 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:04:57.430 14:18:38 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:04:57.430 14:18:38 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:04:57.430 14:18:38 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:04:57.430 14:18:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:57.430 14:18:38 -- common/autotest_common.sh@10 -- # set +x 00:04:57.691 ************************************ 00:04:57.691 START TEST nvmf_tcp 00:04:57.691 ************************************ 00:04:57.691 14:18:38 nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:04:57.691 * Looking for test storage... 
00:04:57.691 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:04:57.691 14:18:38 nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:57.691 14:18:38 nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:04:57.691 14:18:38 nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:57.691 14:18:38 nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:57.691 14:18:38 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:57.691 14:18:38 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:57.691 14:18:38 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:57.691 14:18:38 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:57.691 14:18:38 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:57.691 14:18:38 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:57.691 14:18:38 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:57.691 14:18:38 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:57.691 14:18:38 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:57.691 14:18:38 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:57.691 14:18:38 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:57.691 14:18:38 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:57.691 14:18:38 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:04:57.691 14:18:38 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:57.691 14:18:38 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:57.691 14:18:38 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:57.691 14:18:38 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:04:57.691 14:18:38 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:57.691 14:18:38 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:04:57.691 14:18:38 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:57.691 14:18:38 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:57.691 14:18:38 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:04:57.691 14:18:38 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:57.691 14:18:38 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:04:57.691 14:18:38 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:57.691 14:18:38 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:57.691 14:18:38 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:57.691 14:18:38 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:04:57.691 14:18:38 nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:57.691 14:18:38 nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:57.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.691 --rc genhtml_branch_coverage=1 00:04:57.691 --rc genhtml_function_coverage=1 00:04:57.691 --rc genhtml_legend=1 00:04:57.691 --rc geninfo_all_blocks=1 00:04:57.691 --rc geninfo_unexecuted_blocks=1 00:04:57.691 00:04:57.691 ' 00:04:57.691 14:18:38 nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:57.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.691 --rc genhtml_branch_coverage=1 00:04:57.691 --rc genhtml_function_coverage=1 00:04:57.691 --rc genhtml_legend=1 00:04:57.691 --rc geninfo_all_blocks=1 00:04:57.691 --rc geninfo_unexecuted_blocks=1 00:04:57.691 00:04:57.691 ' 00:04:57.691 14:18:38 nvmf_tcp -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:04:57.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.691 --rc genhtml_branch_coverage=1 00:04:57.691 --rc genhtml_function_coverage=1 00:04:57.691 --rc genhtml_legend=1 00:04:57.691 --rc geninfo_all_blocks=1 00:04:57.691 --rc geninfo_unexecuted_blocks=1 00:04:57.691 00:04:57.691 ' 00:04:57.691 14:18:38 nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:57.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.691 --rc genhtml_branch_coverage=1 00:04:57.691 --rc genhtml_function_coverage=1 00:04:57.691 --rc genhtml_legend=1 00:04:57.691 --rc geninfo_all_blocks=1 00:04:57.691 --rc geninfo_unexecuted_blocks=1 00:04:57.691 00:04:57.691 ' 00:04:57.691 14:18:38 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:04:57.691 14:18:38 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:04:57.691 14:18:38 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:04:57.691 14:18:38 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:04:57.691 14:18:38 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:57.691 14:18:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:57.691 ************************************ 00:04:57.691 START TEST nvmf_target_core 00:04:57.691 ************************************ 00:04:57.954 14:18:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:04:57.954 * Looking for test storage... 
00:04:57.954 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:04:57.954 14:18:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:57.954 14:18:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lcov --version 00:04:57.954 14:18:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:57.954 14:18:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:57.954 14:18:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:57.954 14:18:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:57.954 14:18:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:57.954 14:18:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:04:57.954 14:18:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:04:57.954 14:18:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:04:57.954 14:18:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:04:57.954 14:18:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:04:57.954 14:18:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:04:57.954 14:18:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:04:57.954 14:18:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:57.954 14:18:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:04:57.954 14:18:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:04:57.954 14:18:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:57.954 14:18:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:57.954 14:18:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:04:57.954 14:18:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:04:57.954 14:18:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:57.954 14:18:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:04:57.954 14:18:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:04:57.954 14:18:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:04:57.954 14:18:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:04:57.954 14:18:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:57.954 14:18:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:04:57.954 14:18:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:04:57.954 14:18:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:57.954 14:18:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:57.954 14:18:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:04:57.954 14:18:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:57.954 14:18:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:57.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.954 --rc genhtml_branch_coverage=1 00:04:57.954 --rc genhtml_function_coverage=1 00:04:57.954 --rc genhtml_legend=1 00:04:57.954 --rc geninfo_all_blocks=1 00:04:57.954 --rc geninfo_unexecuted_blocks=1 00:04:57.954 00:04:57.954 ' 00:04:57.954 14:18:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:57.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.954 --rc genhtml_branch_coverage=1 
00:04:57.954 --rc genhtml_function_coverage=1 00:04:57.954 --rc genhtml_legend=1 00:04:57.954 --rc geninfo_all_blocks=1 00:04:57.954 --rc geninfo_unexecuted_blocks=1 00:04:57.954 00:04:57.954 ' 00:04:57.955 14:18:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:57.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.955 --rc genhtml_branch_coverage=1 00:04:57.955 --rc genhtml_function_coverage=1 00:04:57.955 --rc genhtml_legend=1 00:04:57.955 --rc geninfo_all_blocks=1 00:04:57.955 --rc geninfo_unexecuted_blocks=1 00:04:57.955 00:04:57.955 ' 00:04:57.955 14:18:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:57.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.955 --rc genhtml_branch_coverage=1 00:04:57.955 --rc genhtml_function_coverage=1 00:04:57.955 --rc genhtml_legend=1 00:04:57.955 --rc geninfo_all_blocks=1 00:04:57.955 --rc geninfo_unexecuted_blocks=1 00:04:57.955 00:04:57.955 ' 00:04:57.955 14:18:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:04:57.955 14:18:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:04:57.955 14:18:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:57.955 14:18:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:04:57.955 14:18:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:57.955 14:18:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:57.955 14:18:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:57.955 14:18:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:57.955 14:18:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:57.955 14:18:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:57.955 14:18:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:57.955 14:18:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:57.955 14:18:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:57.955 14:18:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:57.955 14:18:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:04:57.955 14:18:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:04:57.955 14:18:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:57.955 14:18:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:57.955 14:18:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:57.955 14:18:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:57.955 14:18:38 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:57.955 14:18:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:04:57.955 14:18:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:57.955 14:18:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:57.955 14:18:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:57.955 14:18:38 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:57.955 14:18:38 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:57.955 14:18:38 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:57.955 14:18:38 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:04:57.955 14:18:38 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:57.955 14:18:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:04:57.955 14:18:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:57.955 14:18:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:57.955 14:18:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:57.955 14:18:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:57.955 14:18:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:57.955 14:18:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:57.955 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:57.955 14:18:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
00:04:57.955 14:18:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:57.955 14:18:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:57.955 14:18:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:04:57.955 14:18:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:04:57.955 14:18:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:04:57.955 14:18:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:04:57.955 14:18:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:04:57.955 14:18:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:57.955 14:18:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:04:58.217 ************************************ 00:04:58.217 START TEST nvmf_abort 00:04:58.217 ************************************ 00:04:58.217 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:04:58.217 * Looking for test storage... 
00:04:58.217 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:04:58.217 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:58.217 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:04:58.217 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:58.217 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:58.217 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:58.217 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:58.217 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:58.217 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:04:58.217 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:04:58.217 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:04:58.217 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:04:58.217 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:04:58.217 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:04:58.217 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:04:58.217 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:58.217 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:04:58.217 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:04:58.217 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:58.217 
14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:58.217 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:04:58.217 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:04:58.217 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:58.217 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:04:58.217 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:04:58.217 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:04:58.217 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:04:58.217 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:58.217 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:04:58.217 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:04:58.217 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:58.217 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:58.217 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:04:58.217 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:58.217 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:58.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.217 --rc genhtml_branch_coverage=1 00:04:58.217 --rc genhtml_function_coverage=1 00:04:58.217 --rc genhtml_legend=1 00:04:58.217 --rc geninfo_all_blocks=1 00:04:58.217 --rc 
geninfo_unexecuted_blocks=1 00:04:58.217 00:04:58.217 ' 00:04:58.217 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:58.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.217 --rc genhtml_branch_coverage=1 00:04:58.217 --rc genhtml_function_coverage=1 00:04:58.217 --rc genhtml_legend=1 00:04:58.217 --rc geninfo_all_blocks=1 00:04:58.217 --rc geninfo_unexecuted_blocks=1 00:04:58.217 00:04:58.217 ' 00:04:58.217 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:58.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.217 --rc genhtml_branch_coverage=1 00:04:58.217 --rc genhtml_function_coverage=1 00:04:58.217 --rc genhtml_legend=1 00:04:58.217 --rc geninfo_all_blocks=1 00:04:58.217 --rc geninfo_unexecuted_blocks=1 00:04:58.217 00:04:58.217 ' 00:04:58.217 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:58.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.217 --rc genhtml_branch_coverage=1 00:04:58.217 --rc genhtml_function_coverage=1 00:04:58.217 --rc genhtml_legend=1 00:04:58.217 --rc geninfo_all_blocks=1 00:04:58.217 --rc geninfo_unexecuted_blocks=1 00:04:58.217 00:04:58.217 ' 00:04:58.217 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:58.217 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:04:58.217 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:58.217 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:58.217 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:58.217 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:04:58.217 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:58.217 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:58.217 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:58.217 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:58.217 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:58.217 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:58.217 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:04:58.217 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:04:58.217 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:58.218 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:58.218 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:58.218 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:58.218 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:58.218 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:04:58.218 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:58.218 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:58.218 14:18:38 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:58.218 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.218 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.218 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.218 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:04:58.218 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.218 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:04:58.218 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:58.218 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:58.218 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:58.218 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:58.218 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:58.218 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:58.218 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:58.218 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:58.218 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:58.218 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:58.218 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:04:58.218 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:04:58.218 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:04:58.218 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:04:58.218 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:04:58.218 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # prepare_net_devs 00:04:58.218 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@436 -- # local -g is_hw=no 00:04:58.218 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # remove_spdk_ns 00:04:58.218 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:58.218 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:04:58.218 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:04:58.218 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:04:58.218 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # 
gather_supported_nvmf_pci_devs 00:04:58.218 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:04:58.218 14:18:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:06.361 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:06.362 14:18:46 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:05:06.362 Found 0000:31:00.0 (0x8086 - 0x159b) 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:05:06.362 Found 0000:31:00.1 (0x8086 - 0x159b) 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:06.362 14:18:46 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:05:06.362 Found net devices under 0000:31:00.0: cvl_0_0 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net 
devices under 0000:31:00.1: cvl_0_1' 00:05:06.362 Found net devices under 0000:31:00.1: cvl_0_1 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # is_hw=yes 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:06.362 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:05:06.362 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.650 ms 00:05:06.362 00:05:06.362 --- 10.0.0.2 ping statistics --- 00:05:06.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:06.362 rtt min/avg/max/mdev = 0.650/0.650/0.650/0.000 ms 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:06.362 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:06.362 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.257 ms 00:05:06.362 00:05:06.362 --- 10.0.0.1 ping statistics --- 00:05:06.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:06.362 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # return 0 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:05:06.362 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@724 -- # xtrace_disable 00:05:06.363 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:06.363 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # nvmfpid=3168324 00:05:06.363 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # waitforlisten 3168324 00:05:06.363 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:06.363 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 3168324 ']' 00:05:06.363 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:06.363 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:06.363 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:06.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:06.363 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:06.363 14:18:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:06.363 [2024-10-14 14:18:46.494339] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 
00:05:06.363 [2024-10-14 14:18:46.494440] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:06.363 [2024-10-14 14:18:46.586574] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:06.363 [2024-10-14 14:18:46.640361] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:06.363 [2024-10-14 14:18:46.640430] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:06.363 [2024-10-14 14:18:46.640439] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:06.363 [2024-10-14 14:18:46.640446] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:06.363 [2024-10-14 14:18:46.640452] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
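For reference, the nvmf_tcp_init sequence traced a little earlier (before the target app started) amounts to moving one port of the NIC into a private network namespace so target and initiator can exercise real hardware on a single host. A dry-run sketch of those steps — the interface names cvl_0_0/cvl_0_1 and the 10.0.0.x addresses are simply what this particular run detected and assigned:

```shell
# Dry-run sketch of the namespace setup traced in this log; replace run()
# with direct execution (as root) to actually apply it.
run() { echo "+ $*"; }                          # print instead of execute
run ip netns add cvl_0_0_ns_spdk                # target-side namespace
run ip link set cvl_0_0 netns cvl_0_0_ns_spdk   # move the target port into it
run ip addr add 10.0.0.1/24 dev cvl_0_1         # initiator side, root namespace
run ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
run ip netns exec cvl_0_0_ns_spdk ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                          # verify the TCP path end to end
```

The two pings in the log (root namespace to 10.0.0.2, and `ip netns exec` back to 10.0.0.1) confirm the path in both directions before the target is launched inside the namespace.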
00:05:06.363 [2024-10-14 14:18:46.642304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:06.363 [2024-10-14 14:18:46.642471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:06.363 [2024-10-14 14:18:46.642472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:06.624 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:06.624 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:05:06.624 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:05:06.624 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:06.624 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:06.624 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:06.624 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:05:06.624 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.624 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:06.624 [2024-10-14 14:18:47.345526] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:06.624 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.624 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:06.624 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.624 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:06.885 Malloc0 00:05:06.885 14:18:47 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.885 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:06.885 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.885 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:06.885 Delay0 00:05:06.885 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.885 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:06.885 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.885 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:06.885 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.885 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:06.885 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.885 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:06.885 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.885 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:06.885 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.885 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:06.885 [2024-10-14 14:18:47.436695] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:06.885 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.885 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:06.885 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.885 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:06.885 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.885 14:18:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:06.885 [2024-10-14 14:18:47.556492] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:09.428 Initializing NVMe Controllers 00:05:09.428 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:09.428 controller IO queue size 128 less than required 00:05:09.428 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:09.428 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:09.428 Initialization complete. Launching workers. 
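Condensed, the target configuration that abort.sh performs above (issued through the `rpc_cmd` wrapper, which forwards to scripts/rpc.py) is the following sequence; all names and values are taken from this run, and `run()` keeps the sketch as a dry run:

```shell
# Dry-run sketch of the RPC sequence abort.sh issues against the target
# (values from this log); replace run() with direct execution to apply it.
run() { echo "+ $*"; }
RPC="scripts/rpc.py"                                  # inside the SPDK checkout
run $RPC nvmf_create_transport -t tcp -o -u 8192 -a 256
run $RPC bdev_malloc_create 64 4096 -b Malloc0        # 64 MiB bdev, 4 KiB blocks
run $RPC bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000   # ~1 s read/write delays (µs)
run $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
run $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
run $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420
```

The large delays on Delay0 are what keep enough I/O in flight for the abort example to have outstanding commands to cancel, which is why the run below reports tens of thousands of aborts submitted.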
00:05:09.428 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 28532 00:05:09.428 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28597, failed to submit 62 00:05:09.428 success 28536, unsuccessful 61, failed 0 00:05:09.428 14:18:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:09.428 14:18:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:09.428 14:18:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:09.428 14:18:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:09.428 14:18:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:09.428 14:18:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:09.428 14:18:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@514 -- # nvmfcleanup 00:05:09.428 14:18:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:05:09.428 14:18:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:09.428 14:18:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:05:09.428 14:18:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:09.428 14:18:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:09.428 rmmod nvme_tcp 00:05:09.428 rmmod nvme_fabrics 00:05:09.428 rmmod nvme_keyring 00:05:09.428 14:18:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:09.428 14:18:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:05:09.428 14:18:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:05:09.428 14:18:49 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@515 -- # '[' -n 3168324 ']' 00:05:09.428 14:18:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # killprocess 3168324 00:05:09.428 14:18:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 3168324 ']' 00:05:09.428 14:18:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 3168324 00:05:09.428 14:18:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:05:09.428 14:18:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:09.428 14:18:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3168324 00:05:09.428 14:18:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:05:09.428 14:18:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:05:09.428 14:18:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3168324' 00:05:09.428 killing process with pid 3168324 00:05:09.428 14:18:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 3168324 00:05:09.428 14:18:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 3168324 00:05:09.428 14:18:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:05:09.428 14:18:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:05:09.428 14:18:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:05:09.428 14:18:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:05:09.428 14:18:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # iptables-save 00:05:09.429 14:18:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- 
# grep -v SPDK_NVMF 00:05:09.429 14:18:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # iptables-restore 00:05:09.429 14:18:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:09.429 14:18:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:09.429 14:18:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:09.429 14:18:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:09.429 14:18:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:11.343 14:18:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:11.343 00:05:11.343 real 0m13.251s 00:05:11.343 user 0m13.651s 00:05:11.343 sys 0m6.474s 00:05:11.343 14:18:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:11.343 14:18:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:11.343 ************************************ 00:05:11.343 END TEST nvmf_abort 00:05:11.343 ************************************ 00:05:11.343 14:18:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:11.343 14:18:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:11.343 14:18:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:11.343 14:18:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:11.343 ************************************ 00:05:11.343 START TEST nvmf_ns_hotplug_stress 00:05:11.343 ************************************ 00:05:11.343 14:18:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:11.606 * Looking for test storage... 00:05:11.606 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:11.606 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:11.606 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:05:11.606 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:11.606 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:11.606 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:11.606 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:11.606 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:11.606 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:05:11.606 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:05:11.606 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:05:11.606 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:05:11.606 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:05:11.606 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:05:11.606 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:05:11.606 
14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:11.606 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:05:11.606 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:05:11.606 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:11.606 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:11.606 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:05:11.606 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:05:11.606 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:11.606 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:05:11.606 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:05:11.606 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:05:11.606 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:05:11.606 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:11.606 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:05:11.606 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:05:11.606 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:11.606 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:11.606 14:18:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:05:11.606 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:11.606 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:11.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.606 --rc genhtml_branch_coverage=1 00:05:11.606 --rc genhtml_function_coverage=1 00:05:11.606 --rc genhtml_legend=1 00:05:11.606 --rc geninfo_all_blocks=1 00:05:11.606 --rc geninfo_unexecuted_blocks=1 00:05:11.606 00:05:11.606 ' 00:05:11.606 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:11.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.606 --rc genhtml_branch_coverage=1 00:05:11.606 --rc genhtml_function_coverage=1 00:05:11.606 --rc genhtml_legend=1 00:05:11.606 --rc geninfo_all_blocks=1 00:05:11.606 --rc geninfo_unexecuted_blocks=1 00:05:11.606 00:05:11.606 ' 00:05:11.606 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:11.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.606 --rc genhtml_branch_coverage=1 00:05:11.606 --rc genhtml_function_coverage=1 00:05:11.606 --rc genhtml_legend=1 00:05:11.606 --rc geninfo_all_blocks=1 00:05:11.606 --rc geninfo_unexecuted_blocks=1 00:05:11.606 00:05:11.606 ' 00:05:11.606 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:11.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.606 --rc genhtml_branch_coverage=1 00:05:11.606 --rc genhtml_function_coverage=1 00:05:11.606 --rc genhtml_legend=1 00:05:11.606 --rc geninfo_all_blocks=1 00:05:11.606 --rc geninfo_unexecuted_blocks=1 00:05:11.606 
00:05:11.606 ' 00:05:11.606 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:11.606 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:11.606 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:11.606 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:11.606 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:11.606 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:11.606 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:11.606 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:11.606 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:11.606 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:11.606 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:11.606 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:11.606 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:11.606 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:11.606 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:05:11.606 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:11.606 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:11.606 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:11.606 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:11.606 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:05:11.606 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:11.606 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:11.606 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:11.606 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:11.606 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:11.606 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:11.606 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:11.606 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:11.607 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:05:11.607 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:11.607 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:11.607 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:11.607 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:11.607 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:11.607 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:11.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:11.607 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:11.607 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:11.607 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:11.607 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:11.607 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:11.607 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:05:11.607 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:11.607 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:05:11.607 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:05:11.607 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:05:11.607 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:11.607 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:11.607 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:11.607 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:05:11.607 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:05:11.607 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:05:11.607 14:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:19.748 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:19.748 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:05:19.748 14:18:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:19.748 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:19.748 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:19.748 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:19.748 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:19.748 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:05:19.748 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:19.748 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:05:19.748 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:05:19.748 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:05:19.748 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:05:19.748 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:05:19.748 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:05:19.748 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:19.748 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:19.748 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:19.748 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:19.748 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:19.748 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:19.748 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:19.748 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:19.748 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:19.748 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:19.748 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:19.748 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:19.748 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:19.748 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:19.748 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:19.748 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:19.748 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:19.748 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:19.748 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:05:19.748 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:05:19.748 Found 0000:31:00.0 (0x8086 - 0x159b) 00:05:19.748 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:19.748 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:19.748 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:19.748 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:19.748 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:19.748 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:19.748 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:05:19.748 Found 0000:31:00.1 (0x8086 - 0x159b) 00:05:19.748 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:19.748 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:19.748 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:19.748 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:19.748 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:19.748 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:19.748 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:19.748 14:18:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:19.748 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:05:19.748 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:19.748 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:05:19.748 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:19.748 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:05:19.748 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:05:19.748 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:19.748 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:05:19.748 Found net devices under 0000:31:00.0: cvl_0_0 00:05:19.748 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:05:19.748 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:05:19.748 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:19.748 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:05:19.748 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:19.748 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:05:19.748 14:18:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:05:19.748 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:19.748 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:05:19.748 Found net devices under 0000:31:00.1: cvl_0_1 00:05:19.748 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:05:19.748 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:05:19.748 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:05:19.748 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:05:19.748 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:05:19.748 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:05:19.748 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:19.748 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:19.748 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:19.748 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:19.748 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:19.748 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:19.748 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:19.748 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:19.748 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:19.749 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:19.749 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:19.749 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:19.749 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:19.749 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:19.749 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:19.749 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:19.749 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:19.749 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:19.749 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:19.749 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:19.749 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:19.749 14:18:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:19.749 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:19.749 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:19.749 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.643 ms 00:05:19.749 00:05:19.749 --- 10.0.0.2 ping statistics --- 00:05:19.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:19.749 rtt min/avg/max/mdev = 0.643/0.643/0.643/0.000 ms 00:05:19.749 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:19.749 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:19.749 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:05:19.749 00:05:19.749 --- 10.0.0.1 ping statistics --- 00:05:19.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:19.749 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:05:19.749 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:19.749 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # return 0 00:05:19.749 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:05:19.749 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:19.749 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:05:19.749 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:05:19.749 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:05:19.749 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:05:19.749 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:05:19.749 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:05:19.749 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:05:19.749 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:19.749 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:19.749 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # nvmfpid=3173425 00:05:19.749 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # waitforlisten 3173425 00:05:19.749 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:19.749 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 3173425 ']' 00:05:19.749 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.749 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:19.749 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:19.749 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:19.749 14:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:19.749 [2024-10-14 14:18:59.807004] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:05:19.749 [2024-10-14 14:18:59.807087] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:19.749 [2024-10-14 14:18:59.897870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:19.749 [2024-10-14 14:18:59.948703] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:19.749 [2024-10-14 14:18:59.948752] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:19.749 [2024-10-14 14:18:59.948761] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:19.749 [2024-10-14 14:18:59.948768] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:19.749 [2024-10-14 14:18:59.948774] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
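Once the target is up, the log below echoes a fixed bring-up sequence of `rpc.py` calls from `ns_hotplug_stress.sh`. As a reading aid, here is a minimal dry-run sketch of that sequence — every argument is taken verbatim from this log, but the `rpc()` wrapper only echoes the calls instead of invoking the real `spdk/scripts/rpc.py` against a live `nvmf_tgt`:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the target-setup RPCs echoed in this log.
# rpc() echoes instead of executing, so the sequence is self-contained.
rpc() { echo "rpc.py $*"; }

NQN=nqn.2016-06.io.spdk:cnode1

rpc nvmf_create_transport -t tcp -o -u 8192                  # TCP transport, 8192-byte in-capsule data
rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10   # allow any host, max 10 namespaces
rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc bdev_malloc_create 32 512 -b Malloc0                     # 32 MiB malloc bdev, 512-byte blocks
rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc nvmf_subsystem_add_ns "$NQN" Delay0                      # ns 1: delay-wrapped malloc bdev
rpc bdev_null_create NULL1 1000 512                          # 1000-block null bdev, resized later
rpc nvmf_subsystem_add_ns "$NQN" NULL1                       # ns 2: the null bdev under stress
```

After this setup, `spdk_nvme_perf` is launched against `trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420` for 30 seconds of queued random reads, and its PID (`PERF_PID`) is what the `kill -0` liveness checks throughout the rest of the log are probing.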
00:05:19.749 [2024-10-14 14:18:59.950669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:19.749 [2024-10-14 14:18:59.950838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:19.749 [2024-10-14 14:18:59.950839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:20.010 14:19:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:20.010 14:19:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:05:20.010 14:19:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:05:20.010 14:19:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:20.010 14:19:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:20.010 14:19:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:20.010 14:19:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:05:20.010 14:19:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:05:20.270 [2024-10-14 14:19:00.796018] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:20.270 14:19:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:20.625 14:19:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:20.625 [2024-10-14 14:19:01.161493] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:20.625 14:19:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:20.921 14:19:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:05:20.921 Malloc0 00:05:20.921 14:19:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:21.220 Delay0 00:05:21.220 14:19:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:21.220 14:19:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:05:21.481 NULL1 00:05:21.481 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:05:21.742 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:05:21.742 14:19:02 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3173816 00:05:21.742 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3173816 00:05:21.742 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:21.742 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:22.003 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:05:22.003 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:05:22.263 true 00:05:22.263 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3173816 00:05:22.263 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:22.523 14:19:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:22.523 14:19:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:05:22.523 14:19:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:05:22.785 true 00:05:22.785 14:19:03 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3173816 00:05:22.785 14:19:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:22.785 Read completed with error (sct=0, sc=11) 00:05:23.045 14:19:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:23.045 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:23.045 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:23.045 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:23.045 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:23.045 14:19:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:05:23.045 14:19:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:05:23.305 true 00:05:23.305 14:19:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3173816 00:05:23.305 14:19:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:24.245 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:24.245 14:19:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:24.245 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:24.245 14:19:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:05:24.245 14:19:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:05:24.505 true 00:05:24.505 14:19:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3173816 00:05:24.505 14:19:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:24.765 14:19:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:24.765 14:19:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:05:24.765 14:19:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:05:25.026 true 00:05:25.026 14:19:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3173816 00:05:25.026 14:19:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:25.287 14:19:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:25.287 14:19:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:05:25.287 14:19:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:05:25.547 true 00:05:25.547 14:19:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3173816 00:05:25.547 14:19:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:25.806 14:19:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:25.806 14:19:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:05:25.807 14:19:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:05:26.066 true 00:05:26.066 14:19:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3173816 00:05:26.066 14:19:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:27.448 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:27.448 14:19:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:27.448 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:27.448 Message suppressed 999 
times: Read completed with error (sct=0, sc=11) 00:05:27.448 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:27.448 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:27.448 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:27.448 14:19:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:05:27.448 14:19:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:05:27.448 true 00:05:27.707 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3173816 00:05:27.707 14:19:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:28.277 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:28.538 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:05:28.538 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:05:28.799 true 00:05:28.799 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3173816 00:05:28.799 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:29.060 14:19:09 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:29.060 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:05:29.060 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:05:29.321 true 00:05:29.321 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3173816 00:05:29.321 14:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:30.262 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:30.523 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:30.523 14:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:30.523 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:30.523 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:30.523 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:30.523 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:30.523 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:30.523 14:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:05:30.523 14:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:05:30.783 true 00:05:30.783 14:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3173816 00:05:30.783 14:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:31.721 14:19:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:31.721 14:19:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:05:31.721 14:19:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:05:31.981 true 00:05:31.981 14:19:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3173816 00:05:31.981 14:19:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:32.240 14:19:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:32.240 14:19:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:05:32.240 14:19:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:05:32.500 true 00:05:32.500 14:19:13 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3173816 00:05:32.500 14:19:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:33.881 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:33.881 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:33.881 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:33.881 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:33.882 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:33.882 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:05:33.882 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:05:34.141 true 00:05:34.141 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3173816 00:05:34.141 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:34.402 14:19:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:34.402 14:19:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:05:34.402 14:19:15 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:05:34.662 true 00:05:34.662 14:19:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3173816 00:05:34.662 14:19:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:34.922 14:19:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:34.922 14:19:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:05:34.922 14:19:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:05:35.183 true 00:05:35.183 14:19:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3173816 00:05:35.183 14:19:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:35.442 14:19:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:35.442 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:05:35.442 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:05:35.701 true 00:05:35.701 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3173816 00:05:35.701 14:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:37.085 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:37.085 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:37.085 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:37.085 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:37.085 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:37.085 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:37.085 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:37.085 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:37.085 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:05:37.085 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:05:37.345 true 00:05:37.345 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3173816 00:05:37.345 14:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:38.284 
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:38.284 14:19:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:38.284 14:19:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:05:38.284 14:19:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:05:38.284 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:38.545 true 00:05:38.545 14:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3173816 00:05:38.545 14:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:38.545 14:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:38.806 14:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:05:38.806 14:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:05:39.066 true 00:05:39.066 14:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3173816 00:05:39.066 14:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:05:40.008 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:40.008 14:19:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:40.269 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:40.269 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:40.269 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:40.269 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:40.269 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:40.269 14:19:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:05:40.269 14:19:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:05:40.530 true 00:05:40.530 14:19:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3173816 00:05:40.530 14:19:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:41.472 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:41.473 14:19:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:41.473 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:41.473 14:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 
00:05:41.473 14:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:05:41.733 true 00:05:41.733 14:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3173816 00:05:41.733 14:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:41.994 14:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:41.994 14:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:05:41.994 14:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:05:42.254 true 00:05:42.254 14:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3173816 00:05:42.254 14:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:43.636 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:43.636 14:19:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:43.636 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:43.636 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
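Each numbered `null_size` step above (1001, 1002, …) is one pass of the same hotplug cycle: confirm the perf process is still alive with `kill -0 $PERF_PID`, remove namespace 1, re-add `Delay0`, and grow `NULL1` by one block. A dry-run sketch of that loop, with the `rpc.py` calls echoed rather than executed (so no live target or real `PERF_PID` is needed):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the ns_hotplug_stress iteration seen in this log.
rpc() { echo "rpc.py $*"; }

NQN=nqn.2016-06.io.spdk:cnode1
null_size=1000

while [ "$null_size" -lt 1005 ]; do                 # the real test runs until perf exits
    kill -0 "${PERF_PID:-$$}" 2>/dev/null || true   # liveness probe; $$ stands in for perf here
    rpc nvmf_subsystem_remove_ns "$NQN" 1           # hot-remove namespace 1 mid-I/O
    rpc nvmf_subsystem_add_ns "$NQN" Delay0         # hot-add it back
    null_size=$((null_size + 1))
    rpc bdev_null_resize NULL1 "$null_size"         # grow the null bdev by one block
done
```

The "Message suppressed 999 times: Read completed with error (sct=0, sc=11)" records interleaved in the log are the expected side effect: reads racing the namespace removal complete with an invalid-namespace status, and perf rate-limits the error prints.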
00:05:43.636 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:43.636 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:43.636 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:43.636 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:43.636 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:05:43.636 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:05:43.636 true 00:05:43.895 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3173816 00:05:43.895 14:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:44.463 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:44.463 14:19:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:44.722 14:19:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:05:44.722 14:19:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:05:44.981 true 00:05:44.981 14:19:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3173816 00:05:44.981 14:19:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:45.240 14:19:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:45.241 14:19:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:05:45.241 14:19:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:05:45.500 true 00:05:45.500 14:19:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3173816 00:05:45.500 14:19:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:46.884 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:46.884 14:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:46.884 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:46.884 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:46.884 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:46.884 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:46.884 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:46.884 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:46.884 14:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:05:46.884 14:19:27 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:05:47.148 true 00:05:47.148 14:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3173816 00:05:47.148 14:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.089 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:48.089 14:19:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:48.089 14:19:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:05:48.089 14:19:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:05:48.089 true 00:05:48.349 14:19:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3173816 00:05:48.349 14:19:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.349 14:19:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:48.609 14:19:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:05:48.609 14:19:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:05:48.870 true 00:05:48.870 14:19:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3173816 00:05:48.870 14:19:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.870 14:19:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:49.130 14:19:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:05:49.130 14:19:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:05:49.390 true 00:05:49.390 14:19:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3173816 00:05:49.390 14:19:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:49.390 14:19:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:49.650 14:19:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:05:49.650 14:19:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 
00:05:49.911 true 00:05:49.911 14:19:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3173816 00:05:49.911 14:19:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:49.911 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:50.171 14:19:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:50.171 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:50.171 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:50.171 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:50.171 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:50.171 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:50.171 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:50.171 14:19:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:05:50.171 14:19:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:05:50.431 true 00:05:50.431 14:19:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3173816 00:05:50.431 14:19:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:51.371 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:51.371 14:19:31 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:51.371 14:19:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:05:51.371 14:19:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:05:51.631 true 00:05:51.631 14:19:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3173816 00:05:51.631 14:19:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:51.890 14:19:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:51.891 Initializing NVMe Controllers 00:05:51.891 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:05:51.891 Controller IO queue size 128, less than required. 00:05:51.891 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:05:51.891 Controller IO queue size 128, less than required. 00:05:51.891 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:05:51.891 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:05:51.891 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:05:51.891 Initialization complete. Launching workers. 
00:05:51.891 ======================================================== 00:05:51.891 Latency(us) 00:05:51.891 Device Information : IOPS MiB/s Average min max 00:05:51.891 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2227.48 1.09 33686.51 1580.27 1100580.34 00:05:51.891 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 17134.21 8.37 7445.83 1439.74 410168.89 00:05:51.891 ======================================================== 00:05:51.891 Total : 19361.69 9.45 10464.71 1439.74 1100580.34 00:05:51.891 00:05:51.891 14:19:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:05:51.891 14:19:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:05:52.150 true 00:05:52.150 14:19:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3173816 00:05:52.150 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3173816) - No such process 00:05:52.150 14:19:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3173816 00:05:52.150 14:19:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:52.411 14:19:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:52.411 14:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:05:52.411 14:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 
00:05:52.411 14:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:05:52.411 14:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:52.411 14:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:05:52.671 null0 00:05:52.671 14:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:52.671 14:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:52.671 14:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:05:52.931 null1 00:05:52.931 14:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:52.931 14:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:52.931 14:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:05:52.931 null2 00:05:52.931 14:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:52.931 14:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:52.931 14:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:05:53.191 null3 00:05:53.191 14:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:53.191 14:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:53.191 14:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:05:53.451 null4 00:05:53.451 14:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:53.451 14:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:53.451 14:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:05:53.451 null5 00:05:53.451 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:53.451 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:53.451 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:05:53.711 null6 00:05:53.711 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:53.711 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:53.711 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:05:53.972 null7 00:05:53.972 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:53.972 14:19:34 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:53.972 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:05:53.972 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:53.972 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:53.972 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:53.972 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:53.972 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:05:53.972 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:05:53.972 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:53.972 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.972 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:53.972 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:53.972 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:53.972 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:53.972 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:05:53.972 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:05:53.972 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:53.972 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.972 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:53.972 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:53.972 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:53.972 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:53.972 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:05:53.972 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:05:53.972 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:53.972 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.972 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:53.972 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:53.972 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:53.972 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:53.972 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:05:53.972 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:05:53.972 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:53.972 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.972 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:53.972 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:53.972 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:53.972 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:53.972 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:05:53.972 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:05:53.972 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:53.972 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.972 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:53.972 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:53.972 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:53.972 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:53.972 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:05:53.972 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:05:53.972 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:53.972 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.972 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:53.972 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:53.972 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:53.972 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:53.972 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:05:53.972 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:05:53.972 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:53.972 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.972 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:53.972 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:53.972 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:53.972 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:53.972 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3180444 3180446 3180449 3180452 3180455 3180458 3180461 3180464 00:05:53.972 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:05:53.972 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:05:53.972 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:53.972 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.972 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:54.233 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:54.233 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:54.233 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:54.233 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:54.233 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:54.233 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:54.233 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:54.233 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:54.233 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.233 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.233 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:54.233 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.233 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.233 
14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:54.233 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.233 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.233 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:54.233 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.233 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.233 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:54.233 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.233 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.233 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:54.233 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.233 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.233 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:54.233 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.233 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.233 14:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:54.493 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.493 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.493 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:54.493 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:54.493 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:54.493 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:54.493 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:54.493 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:54.493 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:54.493 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:54.493 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:54.493 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.493 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.493 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:54.493 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.493 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.493 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:54.752 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.752 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.752 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:54.752 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.752 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.752 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:54.752 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.752 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.752 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:54.752 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.752 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.752 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:54.752 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.752 14:19:35 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.752 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:54.752 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.752 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.752 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:54.752 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:54.752 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:54.752 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:54.752 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:55.012 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:55.012 14:19:35 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:55.012 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:55.012 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:55.012 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:55.012 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:55.012 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:55.012 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:55.012 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:55.012 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:55.012 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:55.012 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:55.012 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:55.012 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:55.012 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:55.012 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:55.012 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:55.013 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:55.013 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:55.013 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:55.013 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:55.013 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:55.013 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:55.273 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:55.273 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:55.273 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:55.273 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:55.273 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:55.273 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:55.273 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:55.273 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:55.273 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:55.273 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:55.273 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:55.273 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:55.273 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:55.273 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:55.273 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:55.273 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:55.273 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:55.273 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:55.273 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:55.273 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:55.273 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:55.273 14:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:55.534 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:55.534 14:19:36 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:55.534 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:55.534 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:55.534 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:55.534 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:55.534 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:55.534 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:55.534 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:55.534 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:55.534 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:55.534 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:55.534 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:05:55.534 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:55.534 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:55.534 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:55.534 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:55.534 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:55.534 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:55.534 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:55.795 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:55.795 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:55.795 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:55.795 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:55.795 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:55.795 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:55.795 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:55.795 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:55.795 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:55.795 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:55.795 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:55.795 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:55.795 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:55.795 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:55.795 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 7 00:05:55.795 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:55.795 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:55.795 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:55.795 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:55.795 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:55.795 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:55.795 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:55.795 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:55.795 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:55.795 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:55.795 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:56.055 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:56.056 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:56.056 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:56.056 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:56.056 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:56.056 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:56.056 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:56.056 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:56.056 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:56.056 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:56.056 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 5 00:05:56.056 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:56.056 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:56.056 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:56.056 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:56.056 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:56.056 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:56.056 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:56.056 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:56.056 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:56.056 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:56.056 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:56.056 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:56.316 14:19:36 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:56.316 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:56.316 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:56.316 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:56.316 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:56.316 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:56.316 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:56.316 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:56.316 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:56.316 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:56.316 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:56.316 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:56.316 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:56.316 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:56.316 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:56.316 14:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:56.316 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:56.578 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:56.578 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:56.578 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:56.578 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:56.578 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 
10 )) 00:05:56.578 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:56.578 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:56.578 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:56.578 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:56.578 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:56.578 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:56.578 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:56.578 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:56.578 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:56.578 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:56.578 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:56.578 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:56.578 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:05:56.578 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:05:56.578 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:56.578 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:56.578 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:05:56.578 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:05:56.578 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:05:56.838 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:05:56.838 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:56.838 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:05:56.838 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:56.838 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:56.838 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:05:56.838 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:05:56.838 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:05:56.838 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:56.838 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:56.838 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:05:56.838 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:56.838 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:56.838 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:05:56.838 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:56.838 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:56.838 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:05:56.838 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:56.838 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:56.838 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:05:56.838 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:05:56.838 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:56.838 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:56.838 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:05:57.098 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:57.098 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:57.098 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:05:57.098 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:05:57.098 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:05:57.098 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:57.098 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:57.098 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:05:57.098 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:05:57.098 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:05:57.098 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:57.098 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:57.098 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:57.098 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:05:57.098 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:57.098 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:57.098 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:05:57.098 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:05:57.359 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:05:57.359 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:57.359 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:57.359 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:05:57.359 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:57.359 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:57.359 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:05:57.359 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:57.359 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:57.359 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:05:57.359 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:57.359 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:57.359 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:05:57.359 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:05:57.359 14:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:05:57.359 14:19:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:57.359 14:19:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:57.359 14:19:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:05:57.359 14:19:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:57.359 14:19:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:57.359 14:19:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:05:57.359 14:19:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:05:57.359 14:19:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:05:57.359 14:19:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:57.359 14:19:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:05:57.619 14:19:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:57.619 14:19:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:57.619 14:19:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:57.619 14:19:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:57.619 14:19:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:05:57.619 14:19:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:05:57.619 14:19:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:57.619 14:19:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:57.619 14:19:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:57.619 14:19:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:57.619 14:19:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:57.619 14:19:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:57.619 14:19:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:57.619 14:19:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:57.619 14:19:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:05:57.878 14:19:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:57.878 14:19:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:57.878 14:19:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:57.878 14:19:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:57.878 14:19:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:05:58.138 14:19:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:58.138 14:19:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:58.138 14:19:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:05:58.138 14:19:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:05:58.138 14:19:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # nvmfcleanup
00:05:58.138 14:19:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync
00:05:58.138 14:19:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:05:58.138 14:19:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e
00:05:58.138 14:19:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:05:58.138 14:19:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:05:58.138 rmmod nvme_tcp
00:05:58.138 rmmod nvme_fabrics
00:05:58.138 rmmod nvme_keyring
00:05:58.138 14:19:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:05:58.138 14:19:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e
00:05:58.138 14:19:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0
00:05:58.138 14:19:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@515 -- # '[' -n 3173425 ']'
00:05:58.138 14:19:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # killprocess 3173425
00:05:58.138 14:19:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 3173425 ']'
00:05:58.138 14:19:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 3173425
00:05:58.138 14:19:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname
00:05:58.138 14:19:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:58.138 14:19:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3173425
00:05:58.138 14:19:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:05:58.138 14:19:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:05:58.138 14:19:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3173425'
killing process with pid 3173425
00:05:58.138 14:19:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 3173425
00:05:58.138 14:19:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 3173425
00:05:58.398 14:19:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:05:58.398 14:19:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:05:58.398 14:19:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:05:58.398 14:19:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr
00:05:58.398 14:19:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-save
00:05:58.398 14:19:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:05:58.398 14:19:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-restore
00:05:58.398 14:19:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:05:58.398 14:19:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns
00:05:58.398 14:19:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:05:58.398 14:19:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:05:58.398 14:19:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:06:00.305 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:06:00.305
00:06:00.305 real 0m48.989s
00:06:00.305 user 3m11.765s
00:06:00.305 sys 0m15.640s
00:06:00.305 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:00.305 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:06:00.305 ************************************
00:06:00.305 END TEST nvmf_ns_hotplug_stress
00:06:00.305 ************************************
00:06:00.565 14:19:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:06:00.565 14:19:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:06:00.565 14:19:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:00.565 14:19:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:06:00.566 ************************************
00:06:00.566 START TEST nvmf_delete_subsystem
00:06:00.566 ************************************
00:06:00.566 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:06:00.566 * Looking for test storage...
00:06:00.566 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:06:00.566 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:06:00.566 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version
00:06:00.566 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:06:00.566 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:06:00.566 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:00.566 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:00.566 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:00.566 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-:
00:06:00.566 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1
00:06:00.566 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-:
00:06:00.566 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2
00:06:00.566 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<'
00:06:00.566 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2
00:06:00.566 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1
00:06:00.566 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:00.566 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in
00:06:00.566 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1
00:06:00.566 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:00.566 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:00.566 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1
00:06:00.566 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1
00:06:00.566 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:00.566 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1
00:06:00.566 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1
00:06:00.566 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2
00:06:00.566 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2
00:06:00.566 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:00.566 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2
00:06:00.566 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2
00:06:00.566 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:00.566 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:00.566 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0
00:06:00.566 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:00.566 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:06:00.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:00.566 --rc genhtml_branch_coverage=1
00:06:00.566 --rc genhtml_function_coverage=1
00:06:00.566 --rc genhtml_legend=1
00:06:00.566 --rc geninfo_all_blocks=1
00:06:00.566 --rc geninfo_unexecuted_blocks=1
00:06:00.566
00:06:00.566 '
00:06:00.566 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:06:00.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:00.566 --rc genhtml_branch_coverage=1
00:06:00.566 --rc genhtml_function_coverage=1
00:06:00.566 --rc genhtml_legend=1
00:06:00.566 --rc geninfo_all_blocks=1
00:06:00.566 --rc geninfo_unexecuted_blocks=1
00:06:00.566
00:06:00.566 '
00:06:00.566 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:06:00.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:00.566 --rc genhtml_branch_coverage=1
00:06:00.566 --rc genhtml_function_coverage=1
00:06:00.566 --rc genhtml_legend=1
00:06:00.566 --rc geninfo_all_blocks=1
00:06:00.566 --rc geninfo_unexecuted_blocks=1
00:06:00.566
00:06:00.566 '
00:06:00.566 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:06:00.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:00.566 --rc genhtml_branch_coverage=1
00:06:00.566 --rc genhtml_function_coverage=1
00:06:00.566 --rc genhtml_legend=1
00:06:00.566 --rc geninfo_all_blocks=1
00:06:00.566 --rc geninfo_unexecuted_blocks=1
00:06:00.566
00:06:00.566 '
00:06:00.566 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:06:00.566 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s
00:06:00.827 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:06:00.827 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:06:00.827 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:06:00.827 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:06:00.827 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:06:00.827 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:06:00.827 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:06:00.827 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:06:00.827 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:06:00.827 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:06:00.827 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:06:00.827 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396
00:06:00.827 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:06:00.827 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:06:00.827 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:06:00.827 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:06:00.827 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:06:00.827 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob
00:06:00.827 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:06:00.827 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:06:00.827 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:06:00.827 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:00.827 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:00.827 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:00.827 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH
00:06:00.827 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:00.827 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0
00:06:00.827 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:06:00.827 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:06:00.827 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:06:00.827 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:06:00.827 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:06:00.827 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:06:00.827 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:06:00.827 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:06:00.827 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:06:00.827 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0
00:06:00.827 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit
00:06:00.827 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@467 -- # '[' -z tcp ']'
00:06:00.827 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:06:00.827 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # prepare_net_devs
00:06:00.827 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # local -g is_hw=no
00:06:00.827 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # remove_spdk_ns
00:06:00.827 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:06:00.827 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:06:00.827 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:06:00.827 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ phy != virt ]]
00:06:00.827 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs
00:06:00.827 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable
00:06:00.827 14:19:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:06:08.968 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:06:08.968 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=()
00:06:08.968 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs
00:06:08.968 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=()
00:06:08.968 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:06:08.968 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=()
00:06:08.968 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers
00:06:08.968 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=()
00:06:08.968 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs
00:06:08.968 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=()
00:06:08.968 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810
00:06:08.968 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=()
00:06:08.968 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722
00:06:08.968 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=()
00:06:08.968 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx
00:06:08.968 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:06:08.968 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:06:08.968 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:06:08.968 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:06:08.968 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:06:08.968 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:06:08.968 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:06:08.968 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:06:08.968 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:06:08.968 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:06:08.968 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:06:08.968 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:06:08.968 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:06:08.968 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:06:08.968 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:06:08.968 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:06:08.968 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:06:08.968 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:06:08.968 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:06:08.968 14:19:48
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:06:08.968 Found 0000:31:00.0 (0x8086 - 0x159b) 00:06:08.968 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:08.968 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:08.968 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:08.968 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:08.968 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:08.969 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:08.969 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:06:08.969 Found 0000:31:00.1 (0x8086 - 0x159b) 00:06:08.969 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:08.969 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:08.969 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:08.969 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:08.969 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:08.969 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:08.969 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:08.969 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:08.969 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:08.969 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:08.969 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:08.969 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:08.969 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:08.969 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:08.969 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:08.969 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:06:08.969 Found net devices under 0000:31:00.0: cvl_0_0 00:06:08.969 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:08.969 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:08.969 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:08.969 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:08.969 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:08.969 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:08.969 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 
00:06:08.969 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:08.969 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:06:08.969 Found net devices under 0000:31:00.1: cvl_0_1 00:06:08.969 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:08.969 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:06:08.969 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # is_hw=yes 00:06:08.969 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:06:08.969 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:06:08.969 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:06:08.969 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:08.969 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:08.969 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:08.969 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:08.969 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:08.969 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:08.969 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:08.969 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:08.969 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:08.969 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:08.969 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:08.969 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:08.969 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:08.969 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:08.969 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:08.969 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:08.969 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:08.969 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:08.969 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:08.969 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:08.969 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:08.969 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m 
comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:08.969 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:08.969 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:08.969 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.559 ms 00:06:08.969 00:06:08.969 --- 10.0.0.2 ping statistics --- 00:06:08.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:08.969 rtt min/avg/max/mdev = 0.559/0.559/0.559/0.000 ms 00:06:08.969 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:08.969 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:08.969 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.307 ms 00:06:08.969 00:06:08.969 --- 10.0.0.1 ping statistics --- 00:06:08.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:08.969 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:06:08.969 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:08.969 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # return 0 00:06:08.969 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:06:08.969 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:08.969 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:06:08.969 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:06:08.969 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:08.969 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:06:08.969 14:19:48 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:06:08.969 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:08.969 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:06:08.969 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:08.969 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:08.969 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:08.969 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # nvmfpid=3185877 00:06:08.969 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # waitforlisten 3185877 00:06:08.969 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 3185877 ']' 00:06:08.969 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.969 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:08.969 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:08.969 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:08.969 14:19:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:08.969 [2024-10-14 14:19:48.922148] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:06:08.969 [2024-10-14 14:19:48.922195] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:08.969 [2024-10-14 14:19:48.988218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:08.969 [2024-10-14 14:19:49.023210] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:08.969 [2024-10-14 14:19:49.023241] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:08.969 [2024-10-14 14:19:49.023250] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:08.969 [2024-10-14 14:19:49.023258] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:08.969 [2024-10-14 14:19:49.023264] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:08.969 [2024-10-14 14:19:49.024542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.969 [2024-10-14 14:19:49.024544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.969 14:19:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:08.969 14:19:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:06:08.969 14:19:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:06:08.969 14:19:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:08.969 14:19:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:09.231 14:19:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:09.231 14:19:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:09.231 14:19:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.231 14:19:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:09.231 [2024-10-14 14:19:49.733098] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:09.231 14:19:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.231 14:19:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:09.231 14:19:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.231 14:19:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:06:09.231 14:19:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.231 14:19:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:09.231 14:19:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.231 14:19:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:09.231 [2024-10-14 14:19:49.749292] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:09.231 14:19:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.231 14:19:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:09.231 14:19:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.231 14:19:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:09.231 NULL1 00:06:09.231 14:19:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.231 14:19:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:09.231 14:19:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.231 14:19:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:09.231 Delay0 00:06:09.231 14:19:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.231 14:19:49 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:09.231 14:19:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.231 14:19:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:09.231 14:19:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.231 14:19:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3186006 00:06:09.231 14:19:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:09.231 14:19:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:09.231 [2024-10-14 14:19:49.834060] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:06:11.145 14:19:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:11.145 14:19:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.145 14:19:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:11.406 Write completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 starting I/O failed: -6 00:06:11.407 Write completed with error (sct=0, sc=8) 00:06:11.407 Write completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 starting I/O failed: -6 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 starting I/O failed: -6 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 starting I/O failed: -6 00:06:11.407 Write completed with error (sct=0, sc=8) 00:06:11.407 Write completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 starting I/O failed: -6 00:06:11.407 Write completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Write completed with error (sct=0, sc=8) 00:06:11.407 starting I/O failed: -6 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Write completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Write completed with error 
(sct=0, sc=8) 00:06:11.407 starting I/O failed: -6 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Write completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 starting I/O failed: -6 00:06:11.407 Write completed with error (sct=0, sc=8) 00:06:11.407 Write completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 starting I/O failed: -6 00:06:11.407 Write completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Write completed with error (sct=0, sc=8) 00:06:11.407 starting I/O failed: -6 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 starting I/O failed: -6 00:06:11.407 Write completed with error (sct=0, sc=8) 00:06:11.407 [2024-10-14 14:19:51.957345] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206dfd0 is same with the state(6) to be set 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Write completed with error (sct=0, sc=8) 00:06:11.407 Write completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Write completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Write completed 
with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Write completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Write completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Write completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Write completed with error (sct=0, sc=8) 00:06:11.407 Write completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Write completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Write completed with error (sct=0, sc=8) 00:06:11.407 Write completed with error (sct=0, sc=8) 00:06:11.407 Write completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Write completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Write completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Write completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Write completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Write completed with error (sct=0, sc=8) 
00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 starting I/O failed: -6 00:06:11.407 Write completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Write completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 starting I/O failed: -6 00:06:11.407 Write completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 starting I/O failed: -6 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Write completed with error (sct=0, sc=8) 00:06:11.407 Write completed with error (sct=0, sc=8) 00:06:11.407 Write completed with error (sct=0, sc=8) 00:06:11.407 starting I/O failed: -6 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Write completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 starting I/O failed: -6 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 starting I/O failed: -6 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Write completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 starting I/O failed: -6 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Write completed with error (sct=0, sc=8) 00:06:11.407 Write completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 starting I/O failed: -6 00:06:11.407 Read completed with error (sct=0, 
sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 starting I/O failed: -6 00:06:11.407 Write completed with error (sct=0, sc=8) 00:06:11.407 Write completed with error (sct=0, sc=8) 00:06:11.407 Write completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 starting I/O failed: -6 00:06:11.407 Write completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Write completed with error (sct=0, sc=8) 00:06:11.407 Write completed with error (sct=0, sc=8) 00:06:11.407 starting I/O failed: -6 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 [2024-10-14 14:19:51.962027] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7af8000c00 is same with the state(6) to be set 00:06:11.407 Write completed with error (sct=0, sc=8) 00:06:11.407 Write completed with error (sct=0, sc=8) 00:06:11.407 Write completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Write completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Write completed with error (sct=0, sc=8) 00:06:11.407 Write completed with error (sct=0, sc=8) 00:06:11.407 Write completed with error (sct=0, sc=8) 00:06:11.407 Write completed with error (sct=0, sc=8) 00:06:11.407 Write completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Write completed with error (sct=0, 
sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Write completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Write completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Write completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Write completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Write completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.407 Read completed with error (sct=0, sc=8) 00:06:11.408 Write completed with error (sct=0, sc=8) 00:06:12.347 [2024-10-14 14:19:52.931240] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206f6b0 is same with the state(6) to be set 00:06:12.347 Write completed with error (sct=0, sc=8) 00:06:12.347 Write completed with error (sct=0, sc=8) 00:06:12.347 
Read completed with error (sct=0, sc=8) 00:06:12.347 Write completed with error (sct=0, sc=8) 00:06:12.347 Read completed with error (sct=0, sc=8) 00:06:12.347 Read completed with error (sct=0, sc=8) 00:06:12.347 Read completed with error (sct=0, sc=8) 00:06:12.347 Read completed with error (sct=0, sc=8) 00:06:12.347 Read completed with error (sct=0, sc=8) 00:06:12.347 Write completed with error (sct=0, sc=8) 00:06:12.347 Read completed with error (sct=0, sc=8) 00:06:12.347 Read completed with error (sct=0, sc=8) 00:06:12.347 Write completed with error (sct=0, sc=8) 00:06:12.347 Read completed with error (sct=0, sc=8) 00:06:12.347 Write completed with error (sct=0, sc=8) 00:06:12.347 Write completed with error (sct=0, sc=8) 00:06:12.347 Read completed with error (sct=0, sc=8) 00:06:12.348 Write completed with error (sct=0, sc=8) 00:06:12.348 Read completed with error (sct=0, sc=8) 00:06:12.348 Read completed with error (sct=0, sc=8) 00:06:12.348 Write completed with error (sct=0, sc=8) 00:06:12.348 Read completed with error (sct=0, sc=8) 00:06:12.348 [2024-10-14 14:19:52.960686] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206e1b0 is same with the state(6) to be set 00:06:12.348 Write completed with error (sct=0, sc=8) 00:06:12.348 Read completed with error (sct=0, sc=8) 00:06:12.348 Write completed with error (sct=0, sc=8) 00:06:12.348 Read completed with error (sct=0, sc=8) 00:06:12.348 Read completed with error (sct=0, sc=8) 00:06:12.348 Read completed with error (sct=0, sc=8) 00:06:12.348 Write completed with error (sct=0, sc=8) 00:06:12.348 Write completed with error (sct=0, sc=8) 00:06:12.348 Read completed with error (sct=0, sc=8) 00:06:12.348 Read completed with error (sct=0, sc=8) 00:06:12.348 Read completed with error (sct=0, sc=8) 00:06:12.348 Read completed with error (sct=0, sc=8) 00:06:12.348 Read completed with error (sct=0, sc=8) 00:06:12.348 Read completed with error (sct=0, sc=8) 00:06:12.348 Read completed 
with error (sct=0, sc=8) 00:06:12.348 Write completed with error (sct=0, sc=8) 00:06:12.348 Read completed with error (sct=0, sc=8) 00:06:12.348 Read completed with error (sct=0, sc=8) 00:06:12.348 Write completed with error (sct=0, sc=8) 00:06:12.348 Write completed with error (sct=0, sc=8) 00:06:12.348 Read completed with error (sct=0, sc=8) 00:06:12.348 [2024-10-14 14:19:52.960990] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206e6c0 is same with the state(6) to be set 00:06:12.348 Read completed with error (sct=0, sc=8) 00:06:12.348 Read completed with error (sct=0, sc=8) 00:06:12.348 Read completed with error (sct=0, sc=8) 00:06:12.348 Write completed with error (sct=0, sc=8) 00:06:12.348 Read completed with error (sct=0, sc=8) 00:06:12.348 Read completed with error (sct=0, sc=8) 00:06:12.348 Read completed with error (sct=0, sc=8) 00:06:12.348 Read completed with error (sct=0, sc=8) 00:06:12.348 Read completed with error (sct=0, sc=8) 00:06:12.348 Read completed with error (sct=0, sc=8) 00:06:12.348 Read completed with error (sct=0, sc=8) 00:06:12.348 Read completed with error (sct=0, sc=8) 00:06:12.348 Read completed with error (sct=0, sc=8) 00:06:12.348 Read completed with error (sct=0, sc=8) 00:06:12.348 Read completed with error (sct=0, sc=8) 00:06:12.348 Read completed with error (sct=0, sc=8) 00:06:12.348 Read completed with error (sct=0, sc=8) 00:06:12.348 Read completed with error (sct=0, sc=8) 00:06:12.348 Read completed with error (sct=0, sc=8) 00:06:12.348 Write completed with error (sct=0, sc=8) 00:06:12.348 Read completed with error (sct=0, sc=8) 00:06:12.348 [2024-10-14 14:19:52.964558] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7af800cfe0 is same with the state(6) to be set 00:06:12.348 Read completed with error (sct=0, sc=8) 00:06:12.348 Read completed with error (sct=0, sc=8) 00:06:12.348 Write completed with error (sct=0, sc=8) 00:06:12.348 Read completed with 
error (sct=0, sc=8) 00:06:12.348 Read completed with error (sct=0, sc=8) 00:06:12.348 Write completed with error (sct=0, sc=8) 00:06:12.348 Read completed with error (sct=0, sc=8) 00:06:12.348 Read completed with error (sct=0, sc=8) 00:06:12.348 Read completed with error (sct=0, sc=8) 00:06:12.348 Read completed with error (sct=0, sc=8) 00:06:12.348 Read completed with error (sct=0, sc=8) 00:06:12.348 Read completed with error (sct=0, sc=8) 00:06:12.348 Read completed with error (sct=0, sc=8) 00:06:12.348 Read completed with error (sct=0, sc=8) 00:06:12.348 Write completed with error (sct=0, sc=8) 00:06:12.348 Read completed with error (sct=0, sc=8) 00:06:12.348 Write completed with error (sct=0, sc=8) 00:06:12.348 Write completed with error (sct=0, sc=8) 00:06:12.348 Read completed with error (sct=0, sc=8) 00:06:12.348 Read completed with error (sct=0, sc=8) 00:06:12.348 Read completed with error (sct=0, sc=8) 00:06:12.348 Read completed with error (sct=0, sc=8) 00:06:12.348 Read completed with error (sct=0, sc=8) 00:06:12.348 Read completed with error (sct=0, sc=8) 00:06:12.348 Write completed with error (sct=0, sc=8) 00:06:12.348 Read completed with error (sct=0, sc=8) 00:06:12.348 Read completed with error (sct=0, sc=8) 00:06:12.348 Read completed with error (sct=0, sc=8) 00:06:12.348 Write completed with error (sct=0, sc=8) 00:06:12.348 Write completed with error (sct=0, sc=8) 00:06:12.348 Write completed with error (sct=0, sc=8) 00:06:12.348 Write completed with error (sct=0, sc=8) 00:06:12.348 Read completed with error (sct=0, sc=8) 00:06:12.348 Read completed with error (sct=0, sc=8) 00:06:12.348 Write completed with error (sct=0, sc=8) 00:06:12.348 [2024-10-14 14:19:52.964739] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7af800d780 is same with the state(6) to be set 00:06:12.348 Initializing NVMe Controllers 00:06:12.348 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 
00:06:12.348 Controller IO queue size 128, less than required. 00:06:12.348 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:12.348 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:12.348 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:12.348 Initialization complete. Launching workers. 00:06:12.348 ======================================================== 00:06:12.348 Latency(us) 00:06:12.348 Device Information : IOPS MiB/s Average min max 00:06:12.348 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 164.90 0.08 906536.75 256.83 1044197.90 00:06:12.348 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 162.91 0.08 958502.34 322.85 2002715.52 00:06:12.348 ======================================================== 00:06:12.348 Total : 327.81 0.16 932361.59 256.83 2002715.52 00:06:12.348 00:06:12.348 [2024-10-14 14:19:52.965398] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x206f6b0 (9): Bad file descriptor 00:06:12.348 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:06:12.348 14:19:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.348 14:19:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:06:12.348 14:19:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3186006 00:06:12.348 14:19:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:06:12.917 14:19:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:06:12.917 14:19:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3186006 
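The trace above shows delete_subsystem.sh polling the perf process with `kill -0 <pid>` and `sleep 0.5`, giving up after a fixed retry budget (`(( delay++ > 30 ))` / `(( delay++ > 20 ))`). A minimal standalone sketch of that bounded-wait pattern — `wait_for_exit` is a hypothetical helper written for illustration, not part of the SPDK test scripts:

```shell
# Poll a PID with `kill -0` (signal 0 = existence check, no signal sent)
# until the process exits or a retry budget is exhausted.
wait_for_exit() {
    local pid=$1 delay=0
    while kill -0 "$pid" 2>/dev/null; do
        # Post-increment: give up after ~10s (20 iterations * 0.5s each).
        (( delay++ > 20 )) && return 1
        sleep 0.5
    done
    return 0
}

# Example: a short-lived background process exits within the budget.
sleep 1 &
wait_for_exit $! && echo "process exited"
```

As in the log, `kill -0` on a PID that has already exited fails with "No such process", which is how the loop (and the script's subsequent `kill -0 3186006` check) detects completion.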
00:06:12.917 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3186006) - No such process 00:06:12.917 14:19:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3186006 00:06:12.917 14:19:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:06:12.917 14:19:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3186006 00:06:12.917 14:19:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:06:12.917 14:19:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:12.917 14:19:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:06:12.917 14:19:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:12.917 14:19:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 3186006 00:06:12.917 14:19:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:06:12.917 14:19:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:12.917 14:19:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:12.917 14:19:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:12.917 14:19:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:12.917 14:19:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.917 14:19:53 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:12.917 14:19:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.917 14:19:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:12.917 14:19:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.917 14:19:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:12.917 [2024-10-14 14:19:53.497292] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:12.917 14:19:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.917 14:19:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:12.917 14:19:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.917 14:19:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:12.917 14:19:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.917 14:19:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3186879 00:06:12.917 14:19:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:12.917 14:19:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:12.917 14:19:53 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3186879 00:06:12.917 14:19:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:12.917 [2024-10-14 14:19:53.574760] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:06:13.486 14:19:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:13.486 14:19:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3186879 00:06:13.486 14:19:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:14.055 14:19:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:14.055 14:19:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3186879 00:06:14.055 14:19:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:14.316 14:19:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:14.316 14:19:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3186879 00:06:14.316 14:19:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:14.887 14:19:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:14.887 14:19:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3186879 00:06:14.887 14:19:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # 
sleep 0.5 00:06:15.458 14:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:15.458 14:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3186879 00:06:15.458 14:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:16.029 14:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:16.029 14:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3186879 00:06:16.029 14:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:16.029 Initializing NVMe Controllers 00:06:16.029 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:16.029 Controller IO queue size 128, less than required. 00:06:16.029 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:16.029 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:16.029 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:16.029 Initialization complete. Launching workers. 
00:06:16.029 ======================================================== 00:06:16.029 Latency(us) 00:06:16.029 Device Information : IOPS MiB/s Average min max 00:06:16.029 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001978.65 1000186.02 1007935.41 00:06:16.029 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002775.90 1000169.45 1008914.38 00:06:16.029 ======================================================== 00:06:16.029 Total : 256.00 0.12 1002377.28 1000169.45 1008914.38 00:06:16.029 00:06:16.597 14:19:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:16.597 14:19:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3186879 00:06:16.598 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3186879) - No such process 00:06:16.598 14:19:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3186879 00:06:16.598 14:19:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:16.598 14:19:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:06:16.598 14:19:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:06:16.598 14:19:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:06:16.598 14:19:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:16.598 14:19:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:06:16.598 14:19:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:16.598 14:19:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r 
nvme-tcp 00:06:16.598 rmmod nvme_tcp 00:06:16.598 rmmod nvme_fabrics 00:06:16.598 rmmod nvme_keyring 00:06:16.598 14:19:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:16.598 14:19:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:06:16.598 14:19:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:06:16.598 14:19:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@515 -- # '[' -n 3185877 ']' 00:06:16.598 14:19:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # killprocess 3185877 00:06:16.598 14:19:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 3185877 ']' 00:06:16.598 14:19:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 3185877 00:06:16.598 14:19:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:06:16.598 14:19:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:16.598 14:19:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3185877 00:06:16.598 14:19:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:16.598 14:19:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:16.598 14:19:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3185877' 00:06:16.598 killing process with pid 3185877 00:06:16.598 14:19:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 3185877 00:06:16.598 14:19:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 
3185877 00:06:16.598 14:19:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:06:16.598 14:19:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:06:16.598 14:19:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:06:16.598 14:19:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:06:16.598 14:19:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-save 00:06:16.598 14:19:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-restore 00:06:16.598 14:19:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:06:16.598 14:19:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:16.598 14:19:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:16.598 14:19:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:16.598 14:19:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:16.598 14:19:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:19.139 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:19.139 00:06:19.139 real 0m18.282s 00:06:19.139 user 0m30.577s 00:06:19.139 sys 0m6.735s 00:06:19.139 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:19.139 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:19.139 ************************************ 00:06:19.139 END TEST 
nvmf_delete_subsystem 00:06:19.139 ************************************ 00:06:19.139 14:19:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:19.140 14:19:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:19.140 14:19:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:19.140 14:19:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:19.140 ************************************ 00:06:19.140 START TEST nvmf_host_management 00:06:19.140 ************************************ 00:06:19.140 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:19.140 * Looking for test storage... 00:06:19.140 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:19.140 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:19.140 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:06:19.140 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:19.140 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:19.140 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:19.140 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:19.140 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:19.140 14:19:59 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:19.140 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:19.140 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:19.140 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:19.140 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:19.140 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:19.140 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:19.140 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:19.140 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:19.140 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:19.140 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:19.140 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:19.140 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:19.140 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:19.140 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:19.140 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:19.140 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:19.140 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:19.140 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:19.140 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:19.140 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:19.140 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:19.140 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:19.140 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:19.140 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:19.140 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:19.140 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:19.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.140 --rc genhtml_branch_coverage=1 00:06:19.140 --rc genhtml_function_coverage=1 00:06:19.140 --rc genhtml_legend=1 00:06:19.140 --rc 
geninfo_all_blocks=1 00:06:19.140 --rc geninfo_unexecuted_blocks=1 00:06:19.140 00:06:19.140 ' 00:06:19.140 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:19.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.140 --rc genhtml_branch_coverage=1 00:06:19.140 --rc genhtml_function_coverage=1 00:06:19.140 --rc genhtml_legend=1 00:06:19.140 --rc geninfo_all_blocks=1 00:06:19.140 --rc geninfo_unexecuted_blocks=1 00:06:19.140 00:06:19.140 ' 00:06:19.140 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:19.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.140 --rc genhtml_branch_coverage=1 00:06:19.140 --rc genhtml_function_coverage=1 00:06:19.140 --rc genhtml_legend=1 00:06:19.140 --rc geninfo_all_blocks=1 00:06:19.140 --rc geninfo_unexecuted_blocks=1 00:06:19.140 00:06:19.140 ' 00:06:19.140 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:19.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.140 --rc genhtml_branch_coverage=1 00:06:19.140 --rc genhtml_function_coverage=1 00:06:19.140 --rc genhtml_legend=1 00:06:19.140 --rc geninfo_all_blocks=1 00:06:19.140 --rc geninfo_unexecuted_blocks=1 00:06:19.140 00:06:19.140 ' 00:06:19.140 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:19.140 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:06:19.140 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:19.140 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:19.140 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:06:19.140 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:19.140 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:19.140 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:19.140 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:19.140 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:19.140 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:19.140 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:19.140 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:19.140 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:19.140 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:19.140 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:19.140 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:19.140 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:19.140 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:19.140 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:19.140 
14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:19.140 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:19.140 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:19.140 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.140 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.140 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.140 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:19.140 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.140 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:19.140 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:19.140 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:19.141 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:19.141 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:06:19.141 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:19.141 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:19.141 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:19.141 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:19.141 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:19.141 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:19.141 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:19.141 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:19.141 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:19.141 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:06:19.141 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:19.141 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # prepare_net_devs 00:06:19.141 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@436 -- # local -g is_hw=no 00:06:19.141 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # remove_spdk_ns 00:06:19.141 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:19.141 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:19.141 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:19.141 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:06:19.141 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:06:19.141 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:06:19.141 14:19:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:27.284 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:27.284 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:06:27.284 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:27.284 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:27.284 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:27.284 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:27.284 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:27.284 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:06:27.284 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:27.284 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:06:27.284 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:06:27.284 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:06:27.284 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 
00:06:27.284 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:06:27.284 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:06:27.284 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:27.284 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:27.284 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:27.284 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:27.284 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:27.284 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:27.284 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:27.284 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:27.284 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:27.284 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:27.284 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:27.284 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:27.284 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:06:27.284 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:27.284 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:27.284 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:27.284 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:27.284 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:27.284 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:27.284 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:06:27.284 Found 0000:31:00.0 (0x8086 - 0x159b) 00:06:27.284 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:27.284 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:27.284 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:27.284 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:27.284 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:27.284 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:27.284 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:06:27.284 Found 0000:31:00.1 (0x8086 - 0x159b) 00:06:27.284 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:27.284 14:20:06 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:27.284 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:27.284 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:27.284 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:27.284 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:27.285 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:27.285 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:27.285 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:27.285 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:27.285 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:27.285 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:27.285 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:27.285 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:27.285 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:27.285 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:06:27.285 Found net devices under 0000:31:00.0: cvl_0_0 00:06:27.285 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # 
net_devs+=("${pci_net_devs[@]}") 00:06:27.285 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:27.285 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:27.285 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:27.285 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:27.285 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:27.285 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:27.285 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:27.285 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:06:27.285 Found net devices under 0000:31:00.1: cvl_0_1 00:06:27.285 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:27.285 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:06:27.285 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # is_hw=yes 00:06:27.285 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:06:27.285 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:06:27.285 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:06:27.285 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:27.285 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:27.285 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:27.285 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:27.285 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:27.285 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:27.285 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:27.285 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:27.285 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:27.285 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:27.285 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:27.285 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:27.285 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:27.285 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:27.285 14:20:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:27.285 14:20:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:27.285 14:20:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:06:27.285 14:20:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:27.285 14:20:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:27.285 14:20:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:27.285 14:20:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:27.285 14:20:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:27.285 14:20:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:27.285 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:27.285 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.605 ms 00:06:27.285 00:06:27.285 --- 10.0.0.2 ping statistics --- 00:06:27.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:27.285 rtt min/avg/max/mdev = 0.605/0.605/0.605/0.000 ms 00:06:27.285 14:20:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:27.285 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:27.285 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:06:27.285 00:06:27.285 --- 10.0.0.1 ping statistics --- 00:06:27.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:27.285 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:06:27.285 14:20:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:27.285 14:20:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # return 0 00:06:27.285 14:20:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:06:27.285 14:20:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:27.285 14:20:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:06:27.285 14:20:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:06:27.285 14:20:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:27.285 14:20:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:06:27.285 14:20:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:06:27.285 14:20:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:27.285 14:20:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:27.285 14:20:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:27.285 14:20:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:06:27.285 14:20:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:27.285 14:20:07 
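The trace above (nvmf/common.sh@271 through @291) shows the standard test-bed plumbing: create a network namespace, move the target-side NIC into it, assign addresses on both sides, open the NVMe/TCP port in the firewall, then verify reachability with ping in each direction. The following is a minimal dry-run sketch of that sequence, not the harness code itself; the interface names `tgt0`/`ini0` and the `run` wrapper are hypothetical placeholders (the log uses `cvl_0_0`/`cvl_0_1`), and executing it for real requires root and physical NICs:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the netns setup performed by nvmf/common.sh.
# By default it only prints the commands; set DRY_RUN=0 (as root,
# with real interfaces) to actually execute them.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "$*"; else "$@"; fi; }

netns_setup() {
  local NS=cvl_0_0_ns_spdk                 # namespace name from the log
  run ip netns add "$NS"
  run ip link set tgt0 netns "$NS"         # move target NIC into the ns
  run ip addr add 10.0.0.1/24 dev ini0     # initiator side (host ns)
  run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev tgt0
  run ip link set ini0 up
  run ip netns exec "$NS" ip link set tgt0 up
  run ip netns exec "$NS" ip link set lo up
  # open the NVMe/TCP listener port on the initiator-facing interface
  run iptables -I INPUT 1 -i ini0 -p tcp --dport 4420 -j ACCEPT
  run ping -c 1 10.0.0.2                   # initiator -> target check
}

CMDS=$(netns_setup)
echo "$CMDS"
```

Running the target inside the namespace (the log's `ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt`) is what lets the initiator on the host side exercise a real TCP path over physical hardware.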
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:27.285 14:20:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # nvmfpid=3191990 00:06:27.285 14:20:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # waitforlisten 3191990 00:06:27.285 14:20:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:27.285 14:20:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 3191990 ']' 00:06:27.285 14:20:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.285 14:20:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:27.285 14:20:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.285 14:20:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:27.285 14:20:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:27.285 [2024-10-14 14:20:07.322702] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 
00:06:27.285 [2024-10-14 14:20:07.322751] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:27.285 [2024-10-14 14:20:07.408310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:27.285 [2024-10-14 14:20:07.450771] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:27.285 [2024-10-14 14:20:07.450814] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:27.285 [2024-10-14 14:20:07.450823] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:27.285 [2024-10-14 14:20:07.450830] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:27.285 [2024-10-14 14:20:07.450836] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:27.285 [2024-10-14 14:20:07.452616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:27.285 [2024-10-14 14:20:07.452775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:27.285 [2024-10-14 14:20:07.452936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:27.285 [2024-10-14 14:20:07.452936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:27.545 14:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:27.545 14:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:06:27.545 14:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:06:27.545 14:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:27.545 14:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:27.545 14:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:27.545 14:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:27.545 14:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.545 14:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:27.545 [2024-10-14 14:20:08.168866] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:27.545 14:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.545 14:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:27.545 14:20:08 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:27.545 14:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:27.545 14:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:27.545 14:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:27.545 14:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:27.545 14:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.545 14:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:27.545 Malloc0 00:06:27.545 [2024-10-14 14:20:08.241304] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:27.545 14:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.545 14:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:27.545 14:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:27.545 14:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:27.806 14:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3192060 00:06:27.806 14:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3192060 /var/tmp/bdevperf.sock 00:06:27.806 14:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 3192060 ']' 00:06:27.806 14:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:27.806 14:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:27.806 14:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:27.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:27.806 14:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:27.806 14:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:27.806 14:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:27.806 14:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:27.806 14:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:06:27.806 14:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:06:27.806 14:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:06:27.806 14:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:06:27.806 { 00:06:27.806 "params": { 00:06:27.806 "name": "Nvme$subsystem", 00:06:27.806 "trtype": "$TEST_TRANSPORT", 00:06:27.806 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:27.806 "adrfam": "ipv4", 00:06:27.806 "trsvcid": "$NVMF_PORT", 00:06:27.806 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:27.806 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:27.806 "hdgst": ${hdgst:-false}, 
00:06:27.806 "ddgst": ${ddgst:-false} 00:06:27.806 }, 00:06:27.806 "method": "bdev_nvme_attach_controller" 00:06:27.806 } 00:06:27.806 EOF 00:06:27.806 )") 00:06:27.806 14:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:06:27.806 14:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:06:27.806 14:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:06:27.806 14:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:06:27.806 "params": { 00:06:27.806 "name": "Nvme0", 00:06:27.806 "trtype": "tcp", 00:06:27.806 "traddr": "10.0.0.2", 00:06:27.806 "adrfam": "ipv4", 00:06:27.806 "trsvcid": "4420", 00:06:27.806 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:27.806 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:27.806 "hdgst": false, 00:06:27.806 "ddgst": false 00:06:27.806 }, 00:06:27.806 "method": "bdev_nvme_attach_controller" 00:06:27.806 }' 00:06:27.806 [2024-10-14 14:20:08.346138] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:06:27.806 [2024-10-14 14:20:08.346190] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3192060 ] 00:06:27.806 [2024-10-14 14:20:08.407507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.806 [2024-10-14 14:20:08.443832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.067 Running I/O for 10 seconds... 
00:06:28.641 14:20:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:28.641 14:20:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:06:28.641 14:20:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:28.641 14:20:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.641 14:20:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:28.641 14:20:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.641 14:20:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:28.641 14:20:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:28.641 14:20:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:28.641 14:20:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:28.641 14:20:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:28.641 14:20:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:28.641 14:20:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:28.641 14:20:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:28.641 14:20:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1
00:06:28.641 14:20:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops'
00:06:28.641 14:20:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:28.641 14:20:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:06:28.641 14:20:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:28.641 14:20:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707
00:06:28.641 14:20:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 707 -ge 100 ']'
00:06:28.641 14:20:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0
00:06:28.641 14:20:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break
00:06:28.641 14:20:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0
00:06:28.641 14:20:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:06:28.642 14:20:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:28.642 14:20:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:06:28.642 [2024-10-14 14:20:09.212389] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c1d5f0 is same with the state(6) to be set
[identical tcp.c:1773 recv-state message for tqpair=0x1c1d5f0 repeated verbatim from 14:20:09.212488 through 14:20:09.212906; repeats elided]
00:06:28.642 [2024-10-14 14:20:09.213297] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:28.642 [2024-10-14 14:20:09.213334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[the READ / ABORTED - SQ DELETION record pair repeats verbatim for cid 1 through cid 61 (lba 98432 through lba 106112, in steps of 128 blocks); repeats elided]
00:06:28.644 [2024-10-14 14:20:09.214388] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.644 [2024-10-14 14:20:09.214396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:28.644 [2024-10-14 14:20:09.214405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:28.644 [2024-10-14 14:20:09.214413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:28.644 [2024-10-14 14:20:09.214422] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26411e0 is same with the state(6) to be set 00:06:28.644 [2024-10-14 14:20:09.214466] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x26411e0 was disconnected and freed. reset controller. 00:06:28.644 [2024-10-14 14:20:09.215695] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:06:28.644 14:20:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.644 task offset: 98304 on job bdev=Nvme0n1 fails 00:06:28.644 00:06:28.644 Latency(us) 00:06:28.644 [2024-10-14T12:20:09.371Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:28.644 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:28.644 Job: Nvme0n1 ended in about 0.55 seconds with error 00:06:28.644 Verification LBA range: start 0x0 length 0x400 00:06:28.644 Nvme0n1 : 0.55 1403.74 87.73 116.98 0.00 41047.57 8956.59 36918.61 00:06:28.644 [2024-10-14T12:20:09.371Z] =================================================================================================================== 00:06:28.644 [2024-10-14T12:20:09.371Z] Total : 1403.74 87.73 116.98 0.00 41047.57 8956.59 
36918.61 00:06:28.644 [2024-10-14 14:20:09.217715] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:28.644 [2024-10-14 14:20:09.217739] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2428100 (9): Bad file descriptor 00:06:28.644 14:20:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:28.644 14:20:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.644 14:20:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:28.644 [2024-10-14 14:20:09.224180] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:06:28.644 [2024-10-14 14:20:09.224266] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:06:28.644 [2024-10-14 14:20:09.224296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:28.644 [2024-10-14 14:20:09.224318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:06:28.644 [2024-10-14 14:20:09.224327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:06:28.644 [2024-10-14 14:20:09.224335] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:06:28.644 [2024-10-14 14:20:09.224342] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2428100 00:06:28.644 [2024-10-14 14:20:09.224363] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x2428100 (9): Bad file descriptor 00:06:28.644 [2024-10-14 14:20:09.224376] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:06:28.644 [2024-10-14 14:20:09.224383] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:06:28.644 [2024-10-14 14:20:09.224392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:06:28.644 [2024-10-14 14:20:09.224406] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:28.644 14:20:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.644 14:20:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:06:29.586 14:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3192060 00:06:29.586 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3192060) - No such process 00:06:29.586 14:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:06:29.586 14:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:06:29.586 14:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:06:29.586 14:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:06:29.586 14:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:06:29.586 14:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@558 -- # local subsystem config 00:06:29.586 14:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:06:29.586 14:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:06:29.586 { 00:06:29.586 "params": { 00:06:29.586 "name": "Nvme$subsystem", 00:06:29.586 "trtype": "$TEST_TRANSPORT", 00:06:29.586 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:29.586 "adrfam": "ipv4", 00:06:29.586 "trsvcid": "$NVMF_PORT", 00:06:29.586 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:29.586 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:29.586 "hdgst": ${hdgst:-false}, 00:06:29.586 "ddgst": ${ddgst:-false} 00:06:29.586 }, 00:06:29.586 "method": "bdev_nvme_attach_controller" 00:06:29.586 } 00:06:29.586 EOF 00:06:29.586 )") 00:06:29.586 14:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:06:29.586 14:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:06:29.586 14:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:06:29.586 14:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:06:29.586 "params": { 00:06:29.586 "name": "Nvme0", 00:06:29.586 "trtype": "tcp", 00:06:29.586 "traddr": "10.0.0.2", 00:06:29.586 "adrfam": "ipv4", 00:06:29.586 "trsvcid": "4420", 00:06:29.586 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:29.586 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:29.586 "hdgst": false, 00:06:29.586 "ddgst": false 00:06:29.587 }, 00:06:29.587 "method": "bdev_nvme_attach_controller" 00:06:29.587 }' 00:06:29.587 [2024-10-14 14:20:10.285609] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 
00:06:29.587 [2024-10-14 14:20:10.285665] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3192541 ] 00:06:29.848 [2024-10-14 14:20:10.347369] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.848 [2024-10-14 14:20:10.382978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.848 Running I/O for 1 seconds... 00:06:31.232 1854.00 IOPS, 115.88 MiB/s 00:06:31.232 Latency(us) 00:06:31.232 [2024-10-14T12:20:11.959Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:31.232 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:31.232 Verification LBA range: start 0x0 length 0x400 00:06:31.232 Nvme0n1 : 1.02 1878.22 117.39 0.00 0.00 33450.95 3904.85 31457.28 00:06:31.232 [2024-10-14T12:20:11.959Z] =================================================================================================================== 00:06:31.232 [2024-10-14T12:20:11.959Z] Total : 1878.22 117.39 0.00 0.00 33450.95 3904.85 31457.28 00:06:31.232 14:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:06:31.232 14:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:06:31.232 14:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:06:31.232 14:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:31.232 14:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:06:31.232 14:20:11 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@514 -- # nvmfcleanup 00:06:31.232 14:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:06:31.232 14:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:31.232 14:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:06:31.232 14:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:31.232 14:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:31.232 rmmod nvme_tcp 00:06:31.232 rmmod nvme_fabrics 00:06:31.232 rmmod nvme_keyring 00:06:31.232 14:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:31.232 14:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:06:31.232 14:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:06:31.232 14:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@515 -- # '[' -n 3191990 ']' 00:06:31.232 14:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # killprocess 3191990 00:06:31.232 14:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 3191990 ']' 00:06:31.232 14:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 3191990 00:06:31.232 14:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:06:31.232 14:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:31.232 14:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3191990 00:06:31.232 14:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@956 -- # process_name=reactor_1 00:06:31.232 14:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:06:31.232 14:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3191990' 00:06:31.232 killing process with pid 3191990 00:06:31.232 14:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 3191990 00:06:31.232 14:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 3191990 00:06:31.232 [2024-10-14 14:20:11.960435] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:31.493 14:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:06:31.493 14:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:06:31.493 14:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:06:31.493 14:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:06:31.493 14:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-restore 00:06:31.493 14:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-save 00:06:31.493 14:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:06:31.493 14:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:31.493 14:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:31.493 14:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:31.493 14:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:31.493 14:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:33.573 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:33.573 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:33.573 00:06:33.573 real 0m14.605s 00:06:33.573 user 0m22.732s 00:06:33.573 sys 0m6.712s 00:06:33.573 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:33.573 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:33.573 ************************************ 00:06:33.573 END TEST nvmf_host_management 00:06:33.573 ************************************ 00:06:33.573 14:20:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:33.573 14:20:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:33.573 14:20:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:33.573 14:20:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:33.573 ************************************ 00:06:33.573 START TEST nvmf_lvol 00:06:33.573 ************************************ 00:06:33.573 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:33.573 * Looking for test storage... 
00:06:33.573 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:33.573 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:33.573 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:06:33.573 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:33.834 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:33.834 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:33.834 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:33.834 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:33.834 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:06:33.834 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:06:33.834 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:06:33.834 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:06:33.834 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:06:33.834 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:06:33.834 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:06:33.834 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:33.834 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:06:33.834 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:06:33.834 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:33.834 14:20:14 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:33.834 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:06:33.834 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:06:33.834 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:33.834 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:06:33.834 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:06:33.835 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:06:33.835 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:06:33.835 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:33.835 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:06:33.835 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:06:33.835 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:33.835 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:33.835 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:06:33.835 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:33.835 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:33.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.835 --rc genhtml_branch_coverage=1 00:06:33.835 --rc genhtml_function_coverage=1 00:06:33.835 --rc genhtml_legend=1 00:06:33.835 --rc geninfo_all_blocks=1 00:06:33.835 --rc geninfo_unexecuted_blocks=1 
00:06:33.835 00:06:33.835 ' 00:06:33.835 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:33.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.835 --rc genhtml_branch_coverage=1 00:06:33.835 --rc genhtml_function_coverage=1 00:06:33.835 --rc genhtml_legend=1 00:06:33.835 --rc geninfo_all_blocks=1 00:06:33.835 --rc geninfo_unexecuted_blocks=1 00:06:33.835 00:06:33.835 ' 00:06:33.835 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:33.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.835 --rc genhtml_branch_coverage=1 00:06:33.835 --rc genhtml_function_coverage=1 00:06:33.835 --rc genhtml_legend=1 00:06:33.835 --rc geninfo_all_blocks=1 00:06:33.835 --rc geninfo_unexecuted_blocks=1 00:06:33.835 00:06:33.835 ' 00:06:33.835 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:33.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.835 --rc genhtml_branch_coverage=1 00:06:33.835 --rc genhtml_function_coverage=1 00:06:33.835 --rc genhtml_legend=1 00:06:33.835 --rc geninfo_all_blocks=1 00:06:33.835 --rc geninfo_unexecuted_blocks=1 00:06:33.835 00:06:33.835 ' 00:06:33.835 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:33.835 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:33.835 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:33.835 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:33.835 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:33.835 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:33.835 14:20:14 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:33.835 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:33.835 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:33.835 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:33.835 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:33.835 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:33.835 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:33.835 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:33.835 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:33.835 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:33.835 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:33.835 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:33.835 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:33.835 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:06:33.835 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:33.835 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:33.835 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:33.835 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.835 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.835 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.835 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:06:33.835 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.835 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:06:33.835 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:33.835 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:33.835 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:33.835 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:33.835 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:33.835 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:33.835 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:33.835 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:33.835 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:33.835 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:33.835 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:33.835 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:33.835 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:06:33.835 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:33.835 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:33.835 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:33.835 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:06:33.835 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:33.835 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # prepare_net_devs 00:06:33.835 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@436 -- # local -g is_hw=no 00:06:33.835 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # remove_spdk_ns 00:06:33.835 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:33.835 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:33.835 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:33.835 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:06:33.835 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:06:33.835 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:06:33.835 14:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:41.977 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:41.977 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:06:41.977 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:41.977 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:41.977 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:41.977 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:41.977 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:41.977 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:06:41.977 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:41.977 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:06:41.977 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:06:41.977 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:06:41.977 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:06:41.977 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:06:41.977 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:06:41.977 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:41.977 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:41.977 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:41.977 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:41.977 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:41.977 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:41.977 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:41.977 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:41.977 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:41.977 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:41.977 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:06:41.978 Found 0000:31:00.0 (0x8086 - 0x159b) 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:06:41.978 Found 0000:31:00.1 (0x8086 - 0x159b) 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:41.978 
14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:06:41.978 Found net devices under 0000:31:00.0: cvl_0_0 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:41.978 14:20:21 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:06:41.978 Found net devices under 0000:31:00.1: cvl_0_1 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # is_hw=yes 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:41.978 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:41.978 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.569 ms 00:06:41.978 00:06:41.978 --- 10.0.0.2 ping statistics --- 00:06:41.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:41.978 rtt min/avg/max/mdev = 0.569/0.569/0.569/0.000 ms 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:41.978 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:41.978 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:06:41.978 00:06:41.978 --- 10.0.0.1 ping statistics --- 00:06:41.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:41.978 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # return 0 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # nvmfpid=3197164 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # waitforlisten 3197164 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 3197164 ']' 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:41.978 14:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:41.978 [2024-10-14 14:20:21.972973] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 
00:06:41.978 [2024-10-14 14:20:21.973045] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:41.978 [2024-10-14 14:20:22.049355] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:41.978 [2024-10-14 14:20:22.092571] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:41.978 [2024-10-14 14:20:22.092612] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:41.978 [2024-10-14 14:20:22.092620] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:41.978 [2024-10-14 14:20:22.092627] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:41.978 [2024-10-14 14:20:22.092632] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:41.978 [2024-10-14 14:20:22.094317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:41.978 [2024-10-14 14:20:22.094490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:41.978 [2024-10-14 14:20:22.094495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.243 14:20:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:42.243 14:20:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:06:42.243 14:20:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:06:42.243 14:20:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:42.243 14:20:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:42.243 14:20:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:42.243 14:20:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:42.502 [2024-10-14 14:20:22.982034] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:42.502 14:20:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:42.502 14:20:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:06:42.502 14:20:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:42.762 14:20:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:06:42.762 14:20:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:06:43.023 14:20:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:06:43.283 14:20:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=ec27ba1a-fd14-4004-9dae-81fc963df504 00:06:43.283 14:20:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ec27ba1a-fd14-4004-9dae-81fc963df504 lvol 20 00:06:43.283 14:20:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=c105eb4c-d1dd-4791-95f9-e18a11ea3844 00:06:43.283 14:20:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:43.543 14:20:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c105eb4c-d1dd-4791-95f9-e18a11ea3844 00:06:43.804 14:20:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:43.804 [2024-10-14 14:20:24.461948] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:43.804 14:20:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:44.064 14:20:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3197852 00:06:44.064 14:20:24 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:06:44.064 14:20:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:06:45.005 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot c105eb4c-d1dd-4791-95f9-e18a11ea3844 MY_SNAPSHOT 00:06:45.265 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=80c1d818-cb6b-4267-b128-0587155d411d 00:06:45.265 14:20:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize c105eb4c-d1dd-4791-95f9-e18a11ea3844 30 00:06:45.526 14:20:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 80c1d818-cb6b-4267-b128-0587155d411d MY_CLONE 00:06:45.786 14:20:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=7c49362c-b62d-4a4b-87f1-6710a2c21ea7 00:06:45.786 14:20:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 7c49362c-b62d-4a4b-87f1-6710a2c21ea7 00:06:46.047 14:20:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3197852 00:06:56.041 Initializing NVMe Controllers 00:06:56.041 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:56.041 Controller IO queue size 128, less than required. 00:06:56.041 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:06:56.041 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:06:56.042 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:06:56.042 Initialization complete. Launching workers. 00:06:56.042 ======================================================== 00:06:56.042 Latency(us) 00:06:56.042 Device Information : IOPS MiB/s Average min max 00:06:56.042 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12305.40 48.07 10407.42 1526.84 52511.26 00:06:56.042 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 17687.70 69.09 7237.08 1280.01 39092.71 00:06:56.042 ======================================================== 00:06:56.042 Total : 29993.10 117.16 8537.79 1280.01 52511.26 00:06:56.042 00:06:56.042 14:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:56.042 14:20:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c105eb4c-d1dd-4791-95f9-e18a11ea3844 00:06:56.042 14:20:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ec27ba1a-fd14-4004-9dae-81fc963df504 00:06:56.042 14:20:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:06:56.042 14:20:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:06:56.042 14:20:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:06:56.042 14:20:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@514 -- # nvmfcleanup 00:06:56.042 14:20:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:06:56.042 14:20:35 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:56.042 14:20:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:06:56.042 14:20:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:56.042 14:20:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:56.042 rmmod nvme_tcp 00:06:56.042 rmmod nvme_fabrics 00:06:56.042 rmmod nvme_keyring 00:06:56.042 14:20:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:56.042 14:20:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:06:56.042 14:20:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:06:56.042 14:20:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@515 -- # '[' -n 3197164 ']' 00:06:56.042 14:20:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # killprocess 3197164 00:06:56.042 14:20:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 3197164 ']' 00:06:56.042 14:20:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 3197164 00:06:56.042 14:20:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:06:56.042 14:20:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:56.042 14:20:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3197164 00:06:56.042 14:20:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:56.042 14:20:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:56.042 14:20:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3197164' 00:06:56.042 killing process with pid 3197164 00:06:56.042 14:20:35 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 3197164 00:06:56.042 14:20:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 3197164 00:06:56.042 14:20:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:06:56.042 14:20:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:06:56.042 14:20:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:06:56.042 14:20:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:06:56.042 14:20:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-save 00:06:56.042 14:20:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:06:56.042 14:20:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-restore 00:06:56.042 14:20:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:56.042 14:20:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:56.042 14:20:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:56.042 14:20:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:56.042 14:20:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:57.426 14:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:57.426 00:06:57.426 real 0m23.725s 00:06:57.426 user 1m4.084s 00:06:57.426 sys 0m8.551s 00:06:57.426 14:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:57.426 14:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:57.426 ************************************ 00:06:57.426 END TEST 
nvmf_lvol 00:06:57.426 ************************************ 00:06:57.426 14:20:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:57.426 14:20:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:57.426 14:20:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:57.426 14:20:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:57.426 ************************************ 00:06:57.426 START TEST nvmf_lvs_grow 00:06:57.426 ************************************ 00:06:57.426 14:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:57.426 * Looking for test storage... 00:06:57.427 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:57.427 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:57.427 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:06:57.427 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:57.427 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:57.427 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:57.427 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:57.427 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:57.427 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:06:57.427 14:20:38 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:06:57.427 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:06:57.427 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:06:57.427 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:06:57.427 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:06:57.427 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:06:57.427 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:57.427 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:06:57.427 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:06:57.427 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:57.427 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:57.427 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:06:57.427 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:06:57.427 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:57.427 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:06:57.427 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:06:57.427 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:06:57.427 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:06:57.427 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:57.427 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:06:57.427 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:06:57.427 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:57.427 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:57.427 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:06:57.427 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:57.427 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:57.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.427 --rc genhtml_branch_coverage=1 00:06:57.427 --rc genhtml_function_coverage=1 00:06:57.427 --rc genhtml_legend=1 00:06:57.427 --rc geninfo_all_blocks=1 00:06:57.427 --rc geninfo_unexecuted_blocks=1 00:06:57.427 00:06:57.427 ' 
00:06:57.427 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:57.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.427 --rc genhtml_branch_coverage=1 00:06:57.427 --rc genhtml_function_coverage=1 00:06:57.427 --rc genhtml_legend=1 00:06:57.427 --rc geninfo_all_blocks=1 00:06:57.427 --rc geninfo_unexecuted_blocks=1 00:06:57.427 00:06:57.427 ' 00:06:57.427 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:57.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.427 --rc genhtml_branch_coverage=1 00:06:57.427 --rc genhtml_function_coverage=1 00:06:57.427 --rc genhtml_legend=1 00:06:57.427 --rc geninfo_all_blocks=1 00:06:57.427 --rc geninfo_unexecuted_blocks=1 00:06:57.427 00:06:57.427 ' 00:06:57.427 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:57.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.427 --rc genhtml_branch_coverage=1 00:06:57.427 --rc genhtml_function_coverage=1 00:06:57.427 --rc genhtml_legend=1 00:06:57.427 --rc geninfo_all_blocks=1 00:06:57.427 --rc geninfo_unexecuted_blocks=1 00:06:57.427 00:06:57.427 ' 00:06:57.427 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:57.427 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:06:57.427 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:57.427 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:57.427 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:57.427 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:57.427 14:20:38 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:57.427 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:57.427 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:57.427 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:57.427 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:57.427 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:57.427 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:57.427 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:57.427 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:57.427 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:57.427 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:57.427 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:57.427 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:57.427 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:06:57.687 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:57.687 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:57.687 
14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:57.687 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.687 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.687 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.687 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:06:57.687 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.687 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:06:57.687 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:57.687 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:57.687 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:57.687 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:57.687 14:20:38 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:57.687 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:57.687 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:57.687 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:57.687 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:57.687 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:57.687 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:57.687 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:06:57.687 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:06:57.687 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:06:57.687 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:57.687 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # prepare_net_devs 00:06:57.687 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@436 -- # local -g is_hw=no 00:06:57.687 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # remove_spdk_ns 00:06:57.688 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:57.688 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:57.688 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:57.688 
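The `common.sh: line 33: [: : integer expression expected` message in the trace above is benign shell noise, not a test failure: `'[' '' -eq 1 ']'` runs `-eq` against an empty operand, which `[` rejects with exactly that diagnostic while still returning false, so the guarded branch is simply skipped. A minimal sketch reproducing the behavior (illustrative, not SPDK code):

```shell
# '[' with an empty operand cannot do an integer comparison; it prints
# "[: : integer expression expected" on stderr and evaluates false.
VAR=""
if [ "$VAR" -eq 1 ] 2>/dev/null; then
  branch=taken
else
  branch=skipped     # this is the path the trace takes at common.sh@33
fi
echo "$branch"
```

Dropping the `2>/dev/null` redirect shows the same diagnostic that appears in the log.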
14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:06:57.688 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:06:57.688 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:06:57.688 14:20:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:05.827 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:05.827 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:07:05.827 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:05.827 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:05.827 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:05.827 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:05.827 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:05.827 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:07:05.827 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:05.827 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:07:05.827 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:07:05.827 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:07:05.827 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:07:05.827 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:07:05.827 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:07:05.827 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:05.827 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:05.827 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:05.827 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:05.827 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:05.827 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:05.827 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:05.827 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:05.827 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:05.827 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:05.827 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:05.827 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:05.827 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:05.827 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:05.827 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:05.827 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:05.827 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:05.827 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:05.827 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:05.827 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:05.827 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:05.827 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:05.827 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:05.827 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:05.827 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:05.827 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:05.827 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:05.827 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:05.827 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:05.827 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:05.827 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:05.827 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:05.827 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:05.827 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:05.827 
14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:05.827 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:05.827 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:05.827 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:05.827 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:05.827 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:05.827 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:05.827 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:05.827 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:05.827 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:05.827 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:05.827 Found net devices under 0000:31:00.0: cvl_0_0 00:07:05.827 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:05.827 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:05.827 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:05.827 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:05.828 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:05.828 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@416 -- # [[ up == up ]] 00:07:05.828 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:05.828 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:05.828 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:05.828 Found net devices under 0000:31:00.1: cvl_0_1 00:07:05.828 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:05.828 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:07:05.828 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # is_hw=yes 00:07:05.828 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:07:05.828 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:07:05.828 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:07:05.828 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:05.828 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:05.828 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:05.828 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:05.828 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:05.828 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:05.828 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:05.828 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:05.828 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:05.828 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:05.828 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:05.828 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:05.828 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:05.828 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:05.828 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:05.828 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:05.828 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:05.828 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:05.828 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:05.828 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:05.828 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:05.828 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:05.828 14:20:45 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:05.828 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:05.828 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.620 ms 00:07:05.828 00:07:05.828 --- 10.0.0.2 ping statistics --- 00:07:05.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:05.828 rtt min/avg/max/mdev = 0.620/0.620/0.620/0.000 ms 00:07:05.828 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:05.828 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:05.828 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.309 ms 00:07:05.828 00:07:05.828 --- 10.0.0.1 ping statistics --- 00:07:05.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:05.828 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:07:05.828 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:05.828 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # return 0 00:07:05.828 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:05.828 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:05.828 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:07:05.828 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:07:05.828 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:05.828 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:07:05.828 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:07:05.828 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:07:05.828 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:05.828 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:05.828 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:05.828 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # nvmfpid=3204287 00:07:05.828 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # waitforlisten 3204287 00:07:05.828 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:05.828 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 3204287 ']' 00:07:05.828 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.828 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:05.828 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.828 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:05.828 14:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:05.828 [2024-10-14 14:20:45.736668] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 
00:07:05.828 [2024-10-14 14:20:45.736732] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:05.828 [2024-10-14 14:20:45.809605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.828 [2024-10-14 14:20:45.851930] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:05.828 [2024-10-14 14:20:45.851968] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:05.828 [2024-10-14 14:20:45.851976] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:05.828 [2024-10-14 14:20:45.851983] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:05.828 [2024-10-14 14:20:45.851988] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:05.828 [2024-10-14 14:20:45.852610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.828 14:20:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:05.828 14:20:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:07:05.828 14:20:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:05.828 14:20:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:05.828 14:20:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:05.828 14:20:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:05.828 14:20:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:06.089 [2024-10-14 14:20:46.706939] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:06.089 14:20:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:06.089 14:20:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:06.089 14:20:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:06.089 14:20:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:06.089 ************************************ 00:07:06.089 START TEST lvs_grow_clean 00:07:06.089 ************************************ 00:07:06.089 14:20:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:07:06.089 14:20:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:07:06.089 14:20:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:06.089 14:20:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:06.089 14:20:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:06.089 14:20:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:06.089 14:20:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:06.089 14:20:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:06.089 14:20:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:06.089 14:20:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:06.349 14:20:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:06.349 14:20:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:06.610 14:20:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=862af001-b320-4360-aa2a-cb637cfe95a2 00:07:06.610 14:20:47 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 862af001-b320-4360-aa2a-cb637cfe95a2 00:07:06.610 14:20:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:06.610 14:20:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:06.610 14:20:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:06.870 14:20:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 862af001-b320-4360-aa2a-cb637cfe95a2 lvol 150 00:07:06.870 14:20:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=1a1588ce-702a-4c70-aeb7-3e4cc3f32d12 00:07:06.870 14:20:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:06.870 14:20:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:07.130 [2024-10-14 14:20:47.653259] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:07.130 [2024-10-14 14:20:47.653315] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:07.130 true 00:07:07.130 14:20:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:07.130 14:20:47 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 862af001-b320-4360-aa2a-cb637cfe95a2 00:07:07.130 14:20:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:07.130 14:20:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:07.403 14:20:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1a1588ce-702a-4c70-aeb7-3e4cc3f32d12 00:07:07.668 14:20:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:07.668 [2024-10-14 14:20:48.311281] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:07.668 14:20:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:07.929 14:20:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3204998 00:07:07.929 14:20:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:07.929 14:20:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf 
-r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:07.929 14:20:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3204998 /var/tmp/bdevperf.sock 00:07:07.929 14:20:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 3204998 ']' 00:07:07.929 14:20:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:07.929 14:20:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:07.929 14:20:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:07.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:07.929 14:20:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:07.929 14:20:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:07.929 [2024-10-14 14:20:48.538402] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 
00:07:07.929 [2024-10-14 14:20:48.538450] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3204998 ] 00:07:07.929 [2024-10-14 14:20:48.618655] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.930 [2024-10-14 14:20:48.654545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:08.872 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:08.872 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:07:08.872 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:09.133 Nvme0n1 00:07:09.133 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:09.393 [ 00:07:09.393 { 00:07:09.393 "name": "Nvme0n1", 00:07:09.393 "aliases": [ 00:07:09.393 "1a1588ce-702a-4c70-aeb7-3e4cc3f32d12" 00:07:09.393 ], 00:07:09.393 "product_name": "NVMe disk", 00:07:09.393 "block_size": 4096, 00:07:09.394 "num_blocks": 38912, 00:07:09.394 "uuid": "1a1588ce-702a-4c70-aeb7-3e4cc3f32d12", 00:07:09.394 "numa_id": 0, 00:07:09.394 "assigned_rate_limits": { 00:07:09.394 "rw_ios_per_sec": 0, 00:07:09.394 "rw_mbytes_per_sec": 0, 00:07:09.394 "r_mbytes_per_sec": 0, 00:07:09.394 "w_mbytes_per_sec": 0 00:07:09.394 }, 00:07:09.394 "claimed": false, 00:07:09.394 "zoned": false, 00:07:09.394 "supported_io_types": { 00:07:09.394 "read": true, 
00:07:09.394 "write": true, 00:07:09.394 "unmap": true, 00:07:09.394 "flush": true, 00:07:09.394 "reset": true, 00:07:09.394 "nvme_admin": true, 00:07:09.394 "nvme_io": true, 00:07:09.394 "nvme_io_md": false, 00:07:09.394 "write_zeroes": true, 00:07:09.394 "zcopy": false, 00:07:09.394 "get_zone_info": false, 00:07:09.394 "zone_management": false, 00:07:09.394 "zone_append": false, 00:07:09.394 "compare": true, 00:07:09.394 "compare_and_write": true, 00:07:09.394 "abort": true, 00:07:09.394 "seek_hole": false, 00:07:09.394 "seek_data": false, 00:07:09.394 "copy": true, 00:07:09.394 "nvme_iov_md": false 00:07:09.394 }, 00:07:09.394 "memory_domains": [ 00:07:09.394 { 00:07:09.394 "dma_device_id": "system", 00:07:09.394 "dma_device_type": 1 00:07:09.394 } 00:07:09.394 ], 00:07:09.394 "driver_specific": { 00:07:09.394 "nvme": [ 00:07:09.394 { 00:07:09.394 "trid": { 00:07:09.394 "trtype": "TCP", 00:07:09.394 "adrfam": "IPv4", 00:07:09.394 "traddr": "10.0.0.2", 00:07:09.394 "trsvcid": "4420", 00:07:09.394 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:09.394 }, 00:07:09.394 "ctrlr_data": { 00:07:09.394 "cntlid": 1, 00:07:09.394 "vendor_id": "0x8086", 00:07:09.394 "model_number": "SPDK bdev Controller", 00:07:09.394 "serial_number": "SPDK0", 00:07:09.394 "firmware_revision": "25.01", 00:07:09.394 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:09.394 "oacs": { 00:07:09.394 "security": 0, 00:07:09.394 "format": 0, 00:07:09.394 "firmware": 0, 00:07:09.394 "ns_manage": 0 00:07:09.394 }, 00:07:09.394 "multi_ctrlr": true, 00:07:09.394 "ana_reporting": false 00:07:09.394 }, 00:07:09.394 "vs": { 00:07:09.394 "nvme_version": "1.3" 00:07:09.394 }, 00:07:09.394 "ns_data": { 00:07:09.394 "id": 1, 00:07:09.394 "can_share": true 00:07:09.394 } 00:07:09.394 } 00:07:09.394 ], 00:07:09.394 "mp_policy": "active_passive" 00:07:09.394 } 00:07:09.394 } 00:07:09.394 ] 00:07:09.394 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=3205328 00:07:09.394 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:09.394 14:20:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:09.394 Running I/O for 10 seconds... 00:07:10.336 Latency(us) 00:07:10.336 [2024-10-14T12:20:51.063Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:10.336 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:10.336 Nvme0n1 : 1.00 17720.00 69.22 0.00 0.00 0.00 0.00 0.00 00:07:10.336 [2024-10-14T12:20:51.063Z] =================================================================================================================== 00:07:10.336 [2024-10-14T12:20:51.063Z] Total : 17720.00 69.22 0.00 0.00 0.00 0.00 0.00 00:07:10.336 00:07:11.278 14:20:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 862af001-b320-4360-aa2a-cb637cfe95a2 00:07:11.278 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:11.278 Nvme0n1 : 2.00 17851.50 69.73 0.00 0.00 0.00 0.00 0.00 00:07:11.278 [2024-10-14T12:20:52.005Z] =================================================================================================================== 00:07:11.278 [2024-10-14T12:20:52.005Z] Total : 17851.50 69.73 0.00 0.00 0.00 0.00 0.00 00:07:11.278 00:07:11.540 true 00:07:11.540 14:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 862af001-b320-4360-aa2a-cb637cfe95a2 00:07:11.540 14:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:07:11.540 14:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:11.540 14:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:11.540 14:20:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3205328 00:07:12.482 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:12.482 Nvme0n1 : 3.00 17881.67 69.85 0.00 0.00 0.00 0.00 0.00 00:07:12.482 [2024-10-14T12:20:53.209Z] =================================================================================================================== 00:07:12.482 [2024-10-14T12:20:53.209Z] Total : 17881.67 69.85 0.00 0.00 0.00 0.00 0.00 00:07:12.482 00:07:13.425 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:13.425 Nvme0n1 : 4.00 17928.25 70.03 0.00 0.00 0.00 0.00 0.00 00:07:13.425 [2024-10-14T12:20:54.152Z] =================================================================================================================== 00:07:13.425 [2024-10-14T12:20:54.152Z] Total : 17928.25 70.03 0.00 0.00 0.00 0.00 0.00 00:07:13.425 00:07:14.370 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:14.370 Nvme0n1 : 5.00 17956.20 70.14 0.00 0.00 0.00 0.00 0.00 00:07:14.370 [2024-10-14T12:20:55.097Z] =================================================================================================================== 00:07:14.370 [2024-10-14T12:20:55.097Z] Total : 17956.20 70.14 0.00 0.00 0.00 0.00 0.00 00:07:14.370 00:07:15.311 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:15.311 Nvme0n1 : 6.00 17979.83 70.23 0.00 0.00 0.00 0.00 0.00 00:07:15.311 [2024-10-14T12:20:56.038Z] =================================================================================================================== 00:07:15.311 
[2024-10-14T12:20:56.038Z] Total : 17979.83 70.23 0.00 0.00 0.00 0.00 0.00 00:07:15.311 00:07:16.697 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:16.697 Nvme0n1 : 7.00 17979.57 70.23 0.00 0.00 0.00 0.00 0.00 00:07:16.697 [2024-10-14T12:20:57.424Z] =================================================================================================================== 00:07:16.697 [2024-10-14T12:20:57.424Z] Total : 17979.57 70.23 0.00 0.00 0.00 0.00 0.00 00:07:16.697 00:07:17.639 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:17.639 Nvme0n1 : 8.00 18006.00 70.34 0.00 0.00 0.00 0.00 0.00 00:07:17.639 [2024-10-14T12:20:58.366Z] =================================================================================================================== 00:07:17.639 [2024-10-14T12:20:58.366Z] Total : 18006.00 70.34 0.00 0.00 0.00 0.00 0.00 00:07:17.639 00:07:18.581 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:18.581 Nvme0n1 : 9.00 18013.44 70.37 0.00 0.00 0.00 0.00 0.00 00:07:18.581 [2024-10-14T12:20:59.308Z] =================================================================================================================== 00:07:18.581 [2024-10-14T12:20:59.308Z] Total : 18013.44 70.37 0.00 0.00 0.00 0.00 0.00 00:07:18.581 00:07:19.523 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:19.523 Nvme0n1 : 10.00 18018.70 70.39 0.00 0.00 0.00 0.00 0.00 00:07:19.523 [2024-10-14T12:21:00.250Z] =================================================================================================================== 00:07:19.523 [2024-10-14T12:21:00.250Z] Total : 18018.70 70.39 0.00 0.00 0.00 0.00 0.00 00:07:19.523 00:07:19.523 00:07:19.523 Latency(us) 00:07:19.523 [2024-10-14T12:21:00.250Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:19.523 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:07:19.523 Nvme0n1 : 10.00 18025.77 70.41 0.00 0.00 7097.84 4314.45 13817.17 00:07:19.523 [2024-10-14T12:21:00.250Z] =================================================================================================================== 00:07:19.523 [2024-10-14T12:21:00.250Z] Total : 18025.77 70.41 0.00 0.00 7097.84 4314.45 13817.17 00:07:19.523 { 00:07:19.523 "results": [ 00:07:19.523 { 00:07:19.523 "job": "Nvme0n1", 00:07:19.523 "core_mask": "0x2", 00:07:19.523 "workload": "randwrite", 00:07:19.523 "status": "finished", 00:07:19.523 "queue_depth": 128, 00:07:19.523 "io_size": 4096, 00:07:19.523 "runtime": 10.003178, 00:07:19.523 "iops": 18025.77140984595, 00:07:19.523 "mibps": 70.41316956971075, 00:07:19.523 "io_failed": 0, 00:07:19.523 "io_timeout": 0, 00:07:19.523 "avg_latency_us": 7097.839744003549, 00:07:19.523 "min_latency_us": 4314.453333333333, 00:07:19.523 "max_latency_us": 13817.173333333334 00:07:19.523 } 00:07:19.523 ], 00:07:19.523 "core_count": 1 00:07:19.523 } 00:07:19.523 14:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3204998 00:07:19.523 14:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 3204998 ']' 00:07:19.523 14:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 3204998 00:07:19.523 14:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:07:19.523 14:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:19.523 14:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3204998 00:07:19.523 14:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:19.523 14:21:00 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:19.523 14:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3204998' 00:07:19.523 killing process with pid 3204998 00:07:19.523 14:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 3204998 00:07:19.523 Received shutdown signal, test time was about 10.000000 seconds 00:07:19.523 00:07:19.523 Latency(us) 00:07:19.523 [2024-10-14T12:21:00.250Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:19.523 [2024-10-14T12:21:00.250Z] =================================================================================================================== 00:07:19.523 [2024-10-14T12:21:00.250Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:19.523 14:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 3204998 00:07:19.523 14:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:19.784 14:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:20.044 14:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 862af001-b320-4360-aa2a-cb637cfe95a2 00:07:20.044 14:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:20.044 14:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:07:20.044 14:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:20.044 14:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:20.304 [2024-10-14 14:21:00.908696] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:20.304 14:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 862af001-b320-4360-aa2a-cb637cfe95a2 00:07:20.304 14:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:07:20.304 14:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 862af001-b320-4360-aa2a-cb637cfe95a2 00:07:20.304 14:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:20.304 14:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:20.304 14:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:20.304 14:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:20.304 14:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:20.304 
14:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:20.304 14:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:20.304 14:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:20.304 14:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 862af001-b320-4360-aa2a-cb637cfe95a2 00:07:20.565 request: 00:07:20.565 { 00:07:20.565 "uuid": "862af001-b320-4360-aa2a-cb637cfe95a2", 00:07:20.565 "method": "bdev_lvol_get_lvstores", 00:07:20.565 "req_id": 1 00:07:20.565 } 00:07:20.565 Got JSON-RPC error response 00:07:20.565 response: 00:07:20.565 { 00:07:20.565 "code": -19, 00:07:20.565 "message": "No such device" 00:07:20.565 } 00:07:20.565 14:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:07:20.565 14:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:20.565 14:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:20.565 14:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:20.565 14:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:20.826 aio_bdev 00:07:20.826 14:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev 1a1588ce-702a-4c70-aeb7-3e4cc3f32d12 00:07:20.826 14:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=1a1588ce-702a-4c70-aeb7-3e4cc3f32d12 00:07:20.826 14:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:20.826 14:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:07:20.826 14:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:20.826 14:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:20.826 14:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:20.826 14:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1a1588ce-702a-4c70-aeb7-3e4cc3f32d12 -t 2000 00:07:21.086 [ 00:07:21.086 { 00:07:21.086 "name": "1a1588ce-702a-4c70-aeb7-3e4cc3f32d12", 00:07:21.086 "aliases": [ 00:07:21.086 "lvs/lvol" 00:07:21.086 ], 00:07:21.086 "product_name": "Logical Volume", 00:07:21.086 "block_size": 4096, 00:07:21.086 "num_blocks": 38912, 00:07:21.086 "uuid": "1a1588ce-702a-4c70-aeb7-3e4cc3f32d12", 00:07:21.086 "assigned_rate_limits": { 00:07:21.086 "rw_ios_per_sec": 0, 00:07:21.086 "rw_mbytes_per_sec": 0, 00:07:21.086 "r_mbytes_per_sec": 0, 00:07:21.086 "w_mbytes_per_sec": 0 00:07:21.086 }, 00:07:21.086 "claimed": false, 00:07:21.086 "zoned": false, 00:07:21.086 "supported_io_types": { 00:07:21.086 "read": true, 00:07:21.086 "write": true, 00:07:21.086 "unmap": true, 00:07:21.086 "flush": false, 00:07:21.086 "reset": true, 00:07:21.086 
"nvme_admin": false, 00:07:21.087 "nvme_io": false, 00:07:21.087 "nvme_io_md": false, 00:07:21.087 "write_zeroes": true, 00:07:21.087 "zcopy": false, 00:07:21.087 "get_zone_info": false, 00:07:21.087 "zone_management": false, 00:07:21.087 "zone_append": false, 00:07:21.087 "compare": false, 00:07:21.087 "compare_and_write": false, 00:07:21.087 "abort": false, 00:07:21.087 "seek_hole": true, 00:07:21.087 "seek_data": true, 00:07:21.087 "copy": false, 00:07:21.087 "nvme_iov_md": false 00:07:21.087 }, 00:07:21.087 "driver_specific": { 00:07:21.087 "lvol": { 00:07:21.087 "lvol_store_uuid": "862af001-b320-4360-aa2a-cb637cfe95a2", 00:07:21.087 "base_bdev": "aio_bdev", 00:07:21.087 "thin_provision": false, 00:07:21.087 "num_allocated_clusters": 38, 00:07:21.087 "snapshot": false, 00:07:21.087 "clone": false, 00:07:21.087 "esnap_clone": false 00:07:21.087 } 00:07:21.087 } 00:07:21.087 } 00:07:21.087 ] 00:07:21.087 14:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:07:21.087 14:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 862af001-b320-4360-aa2a-cb637cfe95a2 00:07:21.087 14:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:21.087 14:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:21.087 14:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 862af001-b320-4360-aa2a-cb637cfe95a2 00:07:21.087 14:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:21.347 14:21:01 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:21.347 14:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1a1588ce-702a-4c70-aeb7-3e4cc3f32d12 00:07:21.640 14:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 862af001-b320-4360-aa2a-cb637cfe95a2 00:07:21.640 14:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:21.900 14:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:21.900 00:07:21.900 real 0m15.724s 00:07:21.900 user 0m15.447s 00:07:21.900 sys 0m1.338s 00:07:21.900 14:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:21.900 14:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:21.900 ************************************ 00:07:21.900 END TEST lvs_grow_clean 00:07:21.900 ************************************ 00:07:21.900 14:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:21.900 14:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:21.900 14:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:21.900 14:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:21.900 ************************************ 
00:07:21.900 START TEST lvs_grow_dirty 00:07:21.900 ************************************ 00:07:21.900 14:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:07:21.900 14:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:21.900 14:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:21.900 14:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:21.900 14:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:21.900 14:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:21.900 14:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:21.900 14:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:21.900 14:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:21.901 14:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:22.162 14:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:22.162 14:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:22.422 14:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=fed47cb5-47ec-4405-a937-d56fab611df4 00:07:22.422 14:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fed47cb5-47ec-4405-a937-d56fab611df4 00:07:22.422 14:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:22.422 14:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:22.422 14:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:22.422 14:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u fed47cb5-47ec-4405-a937-d56fab611df4 lvol 150 00:07:22.683 14:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=64c9b37b-7bdd-44c3-b0ce-360040d551a1 00:07:22.683 14:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:22.683 14:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:22.943 [2024-10-14 14:21:03.418188] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:07:22.943 [2024-10-14 14:21:03.418239] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:22.943 true 00:07:22.943 14:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fed47cb5-47ec-4405-a937-d56fab611df4 00:07:22.943 14:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:22.943 14:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:22.943 14:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:23.204 14:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 64c9b37b-7bdd-44c3-b0ce-360040d551a1 00:07:23.464 14:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:23.464 [2024-10-14 14:21:04.092227] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:23.464 14:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:23.724 14:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3208198 00:07:23.724 14:21:04 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:23.724 14:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:23.724 14:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3208198 /var/tmp/bdevperf.sock 00:07:23.724 14:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 3208198 ']' 00:07:23.724 14:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:23.724 14:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:23.724 14:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:23.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:23.724 14:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:23.724 14:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:23.724 [2024-10-14 14:21:04.322586] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 
00:07:23.725 [2024-10-14 14:21:04.322634] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3208198 ] 00:07:23.725 [2024-10-14 14:21:04.399596] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.725 [2024-10-14 14:21:04.429496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:24.666 14:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:24.666 14:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:07:24.666 14:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:24.928 Nvme0n1 00:07:24.928 14:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:25.190 [ 00:07:25.190 { 00:07:25.190 "name": "Nvme0n1", 00:07:25.190 "aliases": [ 00:07:25.190 "64c9b37b-7bdd-44c3-b0ce-360040d551a1" 00:07:25.190 ], 00:07:25.190 "product_name": "NVMe disk", 00:07:25.190 "block_size": 4096, 00:07:25.190 "num_blocks": 38912, 00:07:25.190 "uuid": "64c9b37b-7bdd-44c3-b0ce-360040d551a1", 00:07:25.190 "numa_id": 0, 00:07:25.190 "assigned_rate_limits": { 00:07:25.190 "rw_ios_per_sec": 0, 00:07:25.190 "rw_mbytes_per_sec": 0, 00:07:25.190 "r_mbytes_per_sec": 0, 00:07:25.190 "w_mbytes_per_sec": 0 00:07:25.190 }, 00:07:25.190 "claimed": false, 00:07:25.190 "zoned": false, 00:07:25.190 "supported_io_types": { 00:07:25.190 "read": true, 
00:07:25.190 "write": true, 00:07:25.190 "unmap": true, 00:07:25.190 "flush": true, 00:07:25.190 "reset": true, 00:07:25.190 "nvme_admin": true, 00:07:25.190 "nvme_io": true, 00:07:25.190 "nvme_io_md": false, 00:07:25.190 "write_zeroes": true, 00:07:25.190 "zcopy": false, 00:07:25.190 "get_zone_info": false, 00:07:25.190 "zone_management": false, 00:07:25.190 "zone_append": false, 00:07:25.190 "compare": true, 00:07:25.190 "compare_and_write": true, 00:07:25.190 "abort": true, 00:07:25.190 "seek_hole": false, 00:07:25.190 "seek_data": false, 00:07:25.190 "copy": true, 00:07:25.190 "nvme_iov_md": false 00:07:25.190 }, 00:07:25.190 "memory_domains": [ 00:07:25.190 { 00:07:25.190 "dma_device_id": "system", 00:07:25.190 "dma_device_type": 1 00:07:25.190 } 00:07:25.190 ], 00:07:25.190 "driver_specific": { 00:07:25.190 "nvme": [ 00:07:25.190 { 00:07:25.190 "trid": { 00:07:25.190 "trtype": "TCP", 00:07:25.190 "adrfam": "IPv4", 00:07:25.190 "traddr": "10.0.0.2", 00:07:25.190 "trsvcid": "4420", 00:07:25.190 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:25.190 }, 00:07:25.190 "ctrlr_data": { 00:07:25.190 "cntlid": 1, 00:07:25.190 "vendor_id": "0x8086", 00:07:25.190 "model_number": "SPDK bdev Controller", 00:07:25.190 "serial_number": "SPDK0", 00:07:25.190 "firmware_revision": "25.01", 00:07:25.190 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:25.190 "oacs": { 00:07:25.190 "security": 0, 00:07:25.190 "format": 0, 00:07:25.190 "firmware": 0, 00:07:25.190 "ns_manage": 0 00:07:25.190 }, 00:07:25.190 "multi_ctrlr": true, 00:07:25.190 "ana_reporting": false 00:07:25.190 }, 00:07:25.190 "vs": { 00:07:25.190 "nvme_version": "1.3" 00:07:25.190 }, 00:07:25.190 "ns_data": { 00:07:25.190 "id": 1, 00:07:25.190 "can_share": true 00:07:25.190 } 00:07:25.190 } 00:07:25.190 ], 00:07:25.190 "mp_policy": "active_passive" 00:07:25.190 } 00:07:25.190 } 00:07:25.190 ] 00:07:25.190 14:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=3208535 00:07:25.190 14:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:25.190 14:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:25.190 Running I/O for 10 seconds... 00:07:26.132 Latency(us) 00:07:26.132 [2024-10-14T12:21:06.859Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:26.132 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:26.132 Nvme0n1 : 1.00 17792.00 69.50 0.00 0.00 0.00 0.00 0.00 00:07:26.132 [2024-10-14T12:21:06.859Z] =================================================================================================================== 00:07:26.132 [2024-10-14T12:21:06.859Z] Total : 17792.00 69.50 0.00 0.00 0.00 0.00 0.00 00:07:26.132 00:07:27.073 14:21:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u fed47cb5-47ec-4405-a937-d56fab611df4 00:07:27.073 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:27.073 Nvme0n1 : 2.00 17919.00 70.00 0.00 0.00 0.00 0.00 0.00 00:07:27.073 [2024-10-14T12:21:07.800Z] =================================================================================================================== 00:07:27.073 [2024-10-14T12:21:07.800Z] Total : 17919.00 70.00 0.00 0.00 0.00 0.00 0.00 00:07:27.073 00:07:27.334 true 00:07:27.334 14:21:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fed47cb5-47ec-4405-a937-d56fab611df4 00:07:27.334 14:21:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:07:27.594 14:21:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:27.594 14:21:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:27.595 14:21:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3208535 00:07:28.165 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:28.165 Nvme0n1 : 3.00 17960.67 70.16 0.00 0.00 0.00 0.00 0.00 00:07:28.165 [2024-10-14T12:21:08.892Z] =================================================================================================================== 00:07:28.165 [2024-10-14T12:21:08.892Z] Total : 17960.67 70.16 0.00 0.00 0.00 0.00 0.00 00:07:28.165 00:07:29.105 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:29.106 Nvme0n1 : 4.00 18005.25 70.33 0.00 0.00 0.00 0.00 0.00 00:07:29.106 [2024-10-14T12:21:09.833Z] =================================================================================================================== 00:07:29.106 [2024-10-14T12:21:09.833Z] Total : 18005.25 70.33 0.00 0.00 0.00 0.00 0.00 00:07:29.106 00:07:30.493 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:30.493 Nvme0n1 : 5.00 18028.20 70.42 0.00 0.00 0.00 0.00 0.00 00:07:30.493 [2024-10-14T12:21:11.220Z] =================================================================================================================== 00:07:30.493 [2024-10-14T12:21:11.220Z] Total : 18028.20 70.42 0.00 0.00 0.00 0.00 0.00 00:07:30.493 00:07:31.436 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:31.436 Nvme0n1 : 6.00 18053.67 70.52 0.00 0.00 0.00 0.00 0.00 00:07:31.436 [2024-10-14T12:21:12.163Z] =================================================================================================================== 00:07:31.436 
[2024-10-14T12:21:12.163Z] Total : 18053.67 70.52 0.00 0.00 0.00 0.00 0.00 00:07:31.436 00:07:32.378 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:32.379 Nvme0n1 : 7.00 18082.00 70.63 0.00 0.00 0.00 0.00 0.00 00:07:32.379 [2024-10-14T12:21:13.106Z] =================================================================================================================== 00:07:32.379 [2024-10-14T12:21:13.106Z] Total : 18082.00 70.63 0.00 0.00 0.00 0.00 0.00 00:07:32.379 00:07:33.321 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:33.321 Nvme0n1 : 8.00 18089.00 70.66 0.00 0.00 0.00 0.00 0.00 00:07:33.321 [2024-10-14T12:21:14.048Z] =================================================================================================================== 00:07:33.321 [2024-10-14T12:21:14.048Z] Total : 18089.00 70.66 0.00 0.00 0.00 0.00 0.00 00:07:33.321 00:07:34.264 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:34.264 Nvme0n1 : 9.00 18108.33 70.74 0.00 0.00 0.00 0.00 0.00 00:07:34.264 [2024-10-14T12:21:14.991Z] =================================================================================================================== 00:07:34.264 [2024-10-14T12:21:14.991Z] Total : 18108.33 70.74 0.00 0.00 0.00 0.00 0.00 00:07:34.264 00:07:35.206 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:35.206 Nvme0n1 : 10.00 18109.40 70.74 0.00 0.00 0.00 0.00 0.00 00:07:35.206 [2024-10-14T12:21:15.933Z] =================================================================================================================== 00:07:35.206 [2024-10-14T12:21:15.933Z] Total : 18109.40 70.74 0.00 0.00 0.00 0.00 0.00 00:07:35.206 00:07:35.206 00:07:35.206 Latency(us) 00:07:35.206 [2024-10-14T12:21:15.933Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:35.206 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:07:35.206 Nvme0n1 : 10.00 18115.02 70.76 0.00 0.00 7064.81 4341.76 13325.65 00:07:35.206 [2024-10-14T12:21:15.933Z] =================================================================================================================== 00:07:35.206 [2024-10-14T12:21:15.933Z] Total : 18115.02 70.76 0.00 0.00 7064.81 4341.76 13325.65 00:07:35.206 { 00:07:35.206 "results": [ 00:07:35.206 { 00:07:35.206 "job": "Nvme0n1", 00:07:35.206 "core_mask": "0x2", 00:07:35.206 "workload": "randwrite", 00:07:35.206 "status": "finished", 00:07:35.206 "queue_depth": 128, 00:07:35.206 "io_size": 4096, 00:07:35.206 "runtime": 10.003963, 00:07:35.206 "iops": 18115.021017170897, 00:07:35.206 "mibps": 70.76180084832382, 00:07:35.206 "io_failed": 0, 00:07:35.206 "io_timeout": 0, 00:07:35.206 "avg_latency_us": 7064.807946643712, 00:07:35.206 "min_latency_us": 4341.76, 00:07:35.206 "max_latency_us": 13325.653333333334 00:07:35.206 } 00:07:35.206 ], 00:07:35.206 "core_count": 1 00:07:35.206 } 00:07:35.206 14:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3208198 00:07:35.206 14:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 3208198 ']' 00:07:35.206 14:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 3208198 00:07:35.206 14:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:07:35.206 14:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:35.206 14:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3208198 00:07:35.206 14:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:35.206 14:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty 
-- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:35.206 14:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3208198' 00:07:35.206 killing process with pid 3208198 00:07:35.206 14:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 3208198 00:07:35.206 Received shutdown signal, test time was about 10.000000 seconds 00:07:35.206 00:07:35.206 Latency(us) 00:07:35.206 [2024-10-14T12:21:15.933Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:35.206 [2024-10-14T12:21:15.933Z] =================================================================================================================== 00:07:35.206 [2024-10-14T12:21:15.933Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:35.206 14:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 3208198 00:07:35.467 14:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:35.467 14:21:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:35.728 14:21:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fed47cb5-47ec-4405-a937-d56fab611df4 00:07:35.728 14:21:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:35.989 14:21:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:35.989 14:21:16 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:35.989 14:21:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3204287 00:07:35.989 14:21:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3204287 00:07:35.989 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3204287 Killed "${NVMF_APP[@]}" "$@" 00:07:35.989 14:21:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:35.989 14:21:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:35.989 14:21:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:35.989 14:21:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:35.989 14:21:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:35.989 14:21:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # nvmfpid=3211070 00:07:35.989 14:21:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:35.989 14:21:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # waitforlisten 3211070 00:07:35.989 14:21:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 3211070 ']' 00:07:35.989 14:21:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.989 14:21:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:07:35.989 14:21:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.989 14:21:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:35.989 14:21:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:35.989 [2024-10-14 14:21:16.653615] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:07:35.989 [2024-10-14 14:21:16.653669] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:36.251 [2024-10-14 14:21:16.721537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.251 [2024-10-14 14:21:16.756396] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:36.251 [2024-10-14 14:21:16.756428] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:36.251 [2024-10-14 14:21:16.756435] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:36.251 [2024-10-14 14:21:16.756442] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:36.251 [2024-10-14 14:21:16.756448] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:36.251 [2024-10-14 14:21:16.757058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.251 14:21:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:36.251 14:21:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:07:36.251 14:21:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:36.251 14:21:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:36.251 14:21:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:36.251 14:21:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:36.251 14:21:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:36.512 [2024-10-14 14:21:17.041698] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:36.512 [2024-10-14 14:21:17.041784] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:36.512 [2024-10-14 14:21:17.041814] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:36.512 14:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:36.512 14:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 64c9b37b-7bdd-44c3-b0ce-360040d551a1 00:07:36.512 14:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=64c9b37b-7bdd-44c3-b0ce-360040d551a1 
00:07:36.512 14:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:36.512 14:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:07:36.512 14:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:36.512 14:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:36.512 14:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:36.512 14:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 64c9b37b-7bdd-44c3-b0ce-360040d551a1 -t 2000 00:07:36.773 [ 00:07:36.773 { 00:07:36.773 "name": "64c9b37b-7bdd-44c3-b0ce-360040d551a1", 00:07:36.773 "aliases": [ 00:07:36.773 "lvs/lvol" 00:07:36.773 ], 00:07:36.773 "product_name": "Logical Volume", 00:07:36.773 "block_size": 4096, 00:07:36.773 "num_blocks": 38912, 00:07:36.773 "uuid": "64c9b37b-7bdd-44c3-b0ce-360040d551a1", 00:07:36.773 "assigned_rate_limits": { 00:07:36.773 "rw_ios_per_sec": 0, 00:07:36.773 "rw_mbytes_per_sec": 0, 00:07:36.773 "r_mbytes_per_sec": 0, 00:07:36.773 "w_mbytes_per_sec": 0 00:07:36.773 }, 00:07:36.773 "claimed": false, 00:07:36.773 "zoned": false, 00:07:36.773 "supported_io_types": { 00:07:36.773 "read": true, 00:07:36.773 "write": true, 00:07:36.773 "unmap": true, 00:07:36.773 "flush": false, 00:07:36.773 "reset": true, 00:07:36.773 "nvme_admin": false, 00:07:36.773 "nvme_io": false, 00:07:36.773 "nvme_io_md": false, 00:07:36.773 "write_zeroes": true, 00:07:36.773 "zcopy": false, 00:07:36.773 "get_zone_info": false, 00:07:36.773 "zone_management": false, 00:07:36.773 "zone_append": 
false, 00:07:36.773 "compare": false, 00:07:36.773 "compare_and_write": false, 00:07:36.773 "abort": false, 00:07:36.773 "seek_hole": true, 00:07:36.773 "seek_data": true, 00:07:36.773 "copy": false, 00:07:36.773 "nvme_iov_md": false 00:07:36.773 }, 00:07:36.773 "driver_specific": { 00:07:36.773 "lvol": { 00:07:36.773 "lvol_store_uuid": "fed47cb5-47ec-4405-a937-d56fab611df4", 00:07:36.773 "base_bdev": "aio_bdev", 00:07:36.773 "thin_provision": false, 00:07:36.773 "num_allocated_clusters": 38, 00:07:36.773 "snapshot": false, 00:07:36.773 "clone": false, 00:07:36.773 "esnap_clone": false 00:07:36.773 } 00:07:36.773 } 00:07:36.773 } 00:07:36.773 ] 00:07:36.774 14:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:07:36.774 14:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fed47cb5-47ec-4405-a937-d56fab611df4 00:07:36.774 14:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:37.035 14:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:07:37.035 14:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fed47cb5-47ec-4405-a937-d56fab611df4 00:07:37.035 14:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:37.035 14:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:07:37.035 14:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:07:37.296 [2024-10-14 14:21:17.869835] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:37.296 14:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fed47cb5-47ec-4405-a937-d56fab611df4 00:07:37.296 14:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:07:37.296 14:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fed47cb5-47ec-4405-a937-d56fab611df4 00:07:37.296 14:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:37.296 14:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:37.296 14:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:37.296 14:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:37.296 14:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:37.296 14:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:37.296 14:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:37.296 14:21:17 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:37.296 14:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fed47cb5-47ec-4405-a937-d56fab611df4 00:07:37.557 request: 00:07:37.557 { 00:07:37.557 "uuid": "fed47cb5-47ec-4405-a937-d56fab611df4", 00:07:37.557 "method": "bdev_lvol_get_lvstores", 00:07:37.557 "req_id": 1 00:07:37.557 } 00:07:37.557 Got JSON-RPC error response 00:07:37.557 response: 00:07:37.557 { 00:07:37.557 "code": -19, 00:07:37.557 "message": "No such device" 00:07:37.557 } 00:07:37.557 14:21:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:07:37.557 14:21:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:37.557 14:21:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:37.557 14:21:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:37.557 14:21:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:37.557 aio_bdev 00:07:37.557 14:21:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 64c9b37b-7bdd-44c3-b0ce-360040d551a1 00:07:37.557 14:21:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=64c9b37b-7bdd-44c3-b0ce-360040d551a1 00:07:37.557 14:21:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:37.557 14:21:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:07:37.557 14:21:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:37.557 14:21:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:37.557 14:21:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:37.818 14:21:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 64c9b37b-7bdd-44c3-b0ce-360040d551a1 -t 2000 00:07:37.818 [ 00:07:37.818 { 00:07:37.818 "name": "64c9b37b-7bdd-44c3-b0ce-360040d551a1", 00:07:37.818 "aliases": [ 00:07:37.818 "lvs/lvol" 00:07:37.818 ], 00:07:37.818 "product_name": "Logical Volume", 00:07:37.818 "block_size": 4096, 00:07:37.818 "num_blocks": 38912, 00:07:37.818 "uuid": "64c9b37b-7bdd-44c3-b0ce-360040d551a1", 00:07:37.818 "assigned_rate_limits": { 00:07:37.818 "rw_ios_per_sec": 0, 00:07:37.818 "rw_mbytes_per_sec": 0, 00:07:37.818 "r_mbytes_per_sec": 0, 00:07:37.818 "w_mbytes_per_sec": 0 00:07:37.818 }, 00:07:37.818 "claimed": false, 00:07:37.818 "zoned": false, 00:07:37.818 "supported_io_types": { 00:07:37.818 "read": true, 00:07:37.818 "write": true, 00:07:37.818 "unmap": true, 00:07:37.818 "flush": false, 00:07:37.818 "reset": true, 00:07:37.818 "nvme_admin": false, 00:07:37.818 "nvme_io": false, 00:07:37.818 "nvme_io_md": false, 00:07:37.818 "write_zeroes": true, 00:07:37.818 "zcopy": false, 00:07:37.818 "get_zone_info": false, 00:07:37.818 "zone_management": false, 00:07:37.818 "zone_append": false, 00:07:37.818 "compare": false, 00:07:37.818 "compare_and_write": false, 
00:07:37.818 "abort": false, 00:07:37.818 "seek_hole": true, 00:07:37.818 "seek_data": true, 00:07:37.818 "copy": false, 00:07:37.818 "nvme_iov_md": false 00:07:37.818 }, 00:07:37.818 "driver_specific": { 00:07:37.818 "lvol": { 00:07:37.818 "lvol_store_uuid": "fed47cb5-47ec-4405-a937-d56fab611df4", 00:07:37.818 "base_bdev": "aio_bdev", 00:07:37.818 "thin_provision": false, 00:07:37.818 "num_allocated_clusters": 38, 00:07:37.818 "snapshot": false, 00:07:37.818 "clone": false, 00:07:37.818 "esnap_clone": false 00:07:37.818 } 00:07:37.818 } 00:07:37.818 } 00:07:37.818 ] 00:07:37.818 14:21:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:07:38.079 14:21:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fed47cb5-47ec-4405-a937-d56fab611df4 00:07:38.079 14:21:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:38.079 14:21:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:38.079 14:21:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fed47cb5-47ec-4405-a937-d56fab611df4 00:07:38.079 14:21:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:38.342 14:21:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:38.342 14:21:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 64c9b37b-7bdd-44c3-b0ce-360040d551a1 00:07:38.342 14:21:19 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fed47cb5-47ec-4405-a937-d56fab611df4 00:07:38.603 14:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:38.865 14:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:38.865 00:07:38.865 real 0m16.891s 00:07:38.865 user 0m45.767s 00:07:38.865 sys 0m2.843s 00:07:38.865 14:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:38.865 14:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:38.865 ************************************ 00:07:38.865 END TEST lvs_grow_dirty 00:07:38.865 ************************************ 00:07:38.865 14:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:38.865 14:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:07:38.865 14:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:07:38.865 14:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:07:38.865 14:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:38.865 14:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:07:38.865 14:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:07:38.865 14:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@820 -- # for n in $shm_files 00:07:38.865 14:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:38.865 nvmf_trace.0 00:07:38.865 14:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:07:38.865 14:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:38.865 14:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@514 -- # nvmfcleanup 00:07:38.865 14:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:07:38.865 14:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:38.865 14:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:07:38.865 14:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:38.865 14:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:38.865 rmmod nvme_tcp 00:07:39.126 rmmod nvme_fabrics 00:07:39.126 rmmod nvme_keyring 00:07:39.126 14:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:39.126 14:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:07:39.126 14:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:07:39.126 14:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@515 -- # '[' -n 3211070 ']' 00:07:39.126 14:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # killprocess 3211070 00:07:39.126 14:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 3211070 ']' 00:07:39.126 14:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 3211070 
00:07:39.126 14:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:07:39.126 14:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:39.126 14:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3211070 00:07:39.126 14:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:39.126 14:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:39.126 14:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3211070' 00:07:39.126 killing process with pid 3211070 00:07:39.126 14:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 3211070 00:07:39.126 14:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 3211070 00:07:39.126 14:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:07:39.126 14:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:07:39.126 14:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:07:39.126 14:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:07:39.126 14:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-save 00:07:39.126 14:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:07:39.126 14:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-restore 00:07:39.126 14:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:39.126 14:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:07:39.126 14:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:39.126 14:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:39.126 14:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:41.674 14:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:41.674 00:07:41.674 real 0m43.962s 00:07:41.674 user 1m7.049s 00:07:41.674 sys 0m10.154s 00:07:41.674 14:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:41.674 14:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:41.674 ************************************ 00:07:41.674 END TEST nvmf_lvs_grow 00:07:41.674 ************************************ 00:07:41.674 14:21:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:41.674 14:21:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:41.674 14:21:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:41.674 14:21:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:41.674 ************************************ 00:07:41.674 START TEST nvmf_bdev_io_wait 00:07:41.674 ************************************ 00:07:41.674 14:21:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:41.674 * Looking for test storage... 
00:07:41.674 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:41.674 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:41.674 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:07:41.674 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:41.674 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:41.674 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:41.674 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:41.674 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:41.674 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:07:41.674 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:07:41.674 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:07:41.674 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:07:41.674 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:07:41.674 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:07:41.674 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:07:41.674 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:41.674 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:07:41.674 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:07:41.674 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:41.674 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:41.674 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:07:41.674 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:07:41.674 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:41.674 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:07:41.674 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:07:41.674 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:07:41.674 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:07:41.674 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:41.674 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:07:41.674 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:07:41.674 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:41.674 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:41.674 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:07:41.674 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:41.674 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:41.674 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.674 --rc genhtml_branch_coverage=1 00:07:41.674 --rc genhtml_function_coverage=1 00:07:41.674 --rc genhtml_legend=1 00:07:41.674 --rc geninfo_all_blocks=1 00:07:41.674 --rc geninfo_unexecuted_blocks=1 00:07:41.674 00:07:41.674 ' 00:07:41.674 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:41.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.674 --rc genhtml_branch_coverage=1 00:07:41.674 --rc genhtml_function_coverage=1 00:07:41.674 --rc genhtml_legend=1 00:07:41.674 --rc geninfo_all_blocks=1 00:07:41.674 --rc geninfo_unexecuted_blocks=1 00:07:41.674 00:07:41.674 ' 00:07:41.674 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:41.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.674 --rc genhtml_branch_coverage=1 00:07:41.674 --rc genhtml_function_coverage=1 00:07:41.674 --rc genhtml_legend=1 00:07:41.674 --rc geninfo_all_blocks=1 00:07:41.674 --rc geninfo_unexecuted_blocks=1 00:07:41.674 00:07:41.674 ' 00:07:41.674 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:41.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.674 --rc genhtml_branch_coverage=1 00:07:41.674 --rc genhtml_function_coverage=1 00:07:41.674 --rc genhtml_legend=1 00:07:41.674 --rc geninfo_all_blocks=1 00:07:41.674 --rc geninfo_unexecuted_blocks=1 00:07:41.674 00:07:41.674 ' 00:07:41.674 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:41.674 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:07:41.674 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:41.674 14:21:22 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:41.674 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:41.674 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:41.674 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:41.674 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:41.674 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:41.674 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:41.674 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:41.674 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:41.674 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:41.674 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:41.674 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:41.674 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:41.674 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:41.674 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:41.674 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:41.674 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:07:41.674 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:41.674 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:41.674 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:41.674 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.674 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.674 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.674 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:07:41.674 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.674 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:07:41.675 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:41.675 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:41.675 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:41.675 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:07:41.675 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:41.675 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:41.675 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:41.675 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:41.675 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:41.675 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:41.675 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:41.675 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:41.675 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:07:41.675 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:07:41.675 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:41.675 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:41.675 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:41.675 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:41.675 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:41.675 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:41.675 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:07:41.675 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:07:41.675 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:07:41.675 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:07:41.675 14:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:48.267 14:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:48.267 14:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:07:48.267 14:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:48.267 14:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:48.267 14:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:48.267 14:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:48.267 14:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:48.267 14:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:07:48.267 14:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:48.267 14:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:07:48.267 14:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:07:48.267 14:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:07:48.267 14:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:07:48.267 14:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:07:48.267 14:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:07:48.267 14:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:48.267 14:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:48.267 14:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:48.267 14:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:48.267 14:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:48.267 14:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:48.267 14:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:48.267 14:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:48.267 14:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:48.267 14:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:48.267 14:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:48.267 14:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:48.267 14:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:48.267 14:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:48.267 14:21:28 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:48.267 14:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:48.268 14:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:48.529 14:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:48.529 14:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:48.530 14:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:48.530 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:48.530 14:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:48.530 14:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:48.530 14:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:48.530 14:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:48.530 14:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:48.530 14:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:48.530 14:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:48.530 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:48.530 14:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:48.530 14:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:48.530 14:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:48.530 14:21:28 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:48.530 14:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:48.530 14:21:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:48.530 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:48.530 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:48.530 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:48.530 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:48.530 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:48.530 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:48.530 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:48.530 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:48.530 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:48.530 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:48.530 Found net devices under 0000:31:00.0: cvl_0_0 00:07:48.530 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:48.530 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:48.530 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:48.530 
14:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:48.530 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:48.530 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:48.530 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:48.530 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:48.530 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:48.530 Found net devices under 0000:31:00.1: cvl_0_1 00:07:48.530 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:48.530 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:07:48.530 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # is_hw=yes 00:07:48.530 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:07:48.530 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:07:48.530 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:07:48.530 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:48.530 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:48.530 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:48.530 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:48.530 14:21:29 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:48.530 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:48.530 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:48.530 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:48.530 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:48.530 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:48.530 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:48.530 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:48.530 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:48.530 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:48.530 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:48.530 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:48.530 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:48.530 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:48.530 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:48.792 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:07:48.792 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:48.792 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:48.792 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:48.792 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:48.792 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.598 ms 00:07:48.792 00:07:48.792 --- 10.0.0.2 ping statistics --- 00:07:48.792 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:48.792 rtt min/avg/max/mdev = 0.598/0.598/0.598/0.000 ms 00:07:48.792 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:48.792 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:48.792 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:07:48.792 00:07:48.792 --- 10.0.0.1 ping statistics --- 00:07:48.792 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:48.792 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:07:48.792 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:48.792 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # return 0 00:07:48.792 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:48.792 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:48.792 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:07:48.792 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:07:48.792 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:48.792 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:07:48.792 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:07:48.792 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:07:48.792 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:48.792 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:48.792 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:48.792 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # nvmfpid=3216138 00:07:48.792 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@508 -- # waitforlisten 3216138 00:07:48.792 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:07:48.792 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 3216138 ']' 00:07:48.792 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.792 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:48.792 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:48.792 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:48.792 14:21:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:48.792 [2024-10-14 14:21:29.430586] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:07:48.792 [2024-10-14 14:21:29.430637] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:48.792 [2024-10-14 14:21:29.500957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:49.053 [2024-10-14 14:21:29.537983] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:49.053 [2024-10-14 14:21:29.538015] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:49.053 [2024-10-14 14:21:29.538023] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:49.053 [2024-10-14 14:21:29.538029] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:49.053 [2024-10-14 14:21:29.538035] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:49.053 [2024-10-14 14:21:29.539589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:49.053 [2024-10-14 14:21:29.539603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:49.053 [2024-10-14 14:21:29.539737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.053 [2024-10-14 14:21:29.539737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:49.624 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:49.624 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:07:49.624 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:49.624 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:49.624 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:49.624 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:49.624 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:07:49.624 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.624 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:49.624 14:21:30 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.624 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:07:49.624 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.624 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:49.624 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.624 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:49.624 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.624 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:49.624 [2024-10-14 14:21:30.336005] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:49.624 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.624 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:49.624 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.624 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:49.886 Malloc0 00:07:49.886 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.887 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:49.887 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.887 
14:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:49.887 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.887 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:49.887 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.887 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:49.887 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.887 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:49.887 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.887 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:49.887 [2024-10-14 14:21:30.395215] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:49.887 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.887 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3216208 00:07:49.887 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3216211 00:07:49.887 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:07:49.887 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 
00:07:49.887 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:07:49.887 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:07:49.887 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:07:49.887 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:07:49.887 { 00:07:49.887 "params": { 00:07:49.887 "name": "Nvme$subsystem", 00:07:49.887 "trtype": "$TEST_TRANSPORT", 00:07:49.887 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:49.887 "adrfam": "ipv4", 00:07:49.887 "trsvcid": "$NVMF_PORT", 00:07:49.887 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:49.887 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:49.887 "hdgst": ${hdgst:-false}, 00:07:49.887 "ddgst": ${ddgst:-false} 00:07:49.887 }, 00:07:49.887 "method": "bdev_nvme_attach_controller" 00:07:49.887 } 00:07:49.887 EOF 00:07:49.887 )") 00:07:49.887 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3216213 00:07:49.887 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:07:49.887 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:07:49.887 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:07:49.887 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:07:49.887 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:07:49.887 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3216216 00:07:49.887 14:21:30 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:07:49.887 { 00:07:49.887 "params": { 00:07:49.887 "name": "Nvme$subsystem", 00:07:49.887 "trtype": "$TEST_TRANSPORT", 00:07:49.887 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:49.887 "adrfam": "ipv4", 00:07:49.887 "trsvcid": "$NVMF_PORT", 00:07:49.887 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:49.887 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:49.887 "hdgst": ${hdgst:-false}, 00:07:49.887 "ddgst": ${ddgst:-false} 00:07:49.887 }, 00:07:49.887 "method": "bdev_nvme_attach_controller" 00:07:49.887 } 00:07:49.887 EOF 00:07:49.887 )") 00:07:49.887 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:07:49.887 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:07:49.887 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:07:49.887 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:07:49.887 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:07:49.887 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:07:49.887 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:07:49.887 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:07:49.887 { 00:07:49.887 "params": { 00:07:49.887 "name": "Nvme$subsystem", 00:07:49.887 "trtype": "$TEST_TRANSPORT", 00:07:49.887 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:49.887 "adrfam": "ipv4", 00:07:49.887 "trsvcid": "$NVMF_PORT", 00:07:49.887 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:49.887 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:07:49.887 "hdgst": ${hdgst:-false}, 00:07:49.887 "ddgst": ${ddgst:-false} 00:07:49.887 }, 00:07:49.887 "method": "bdev_nvme_attach_controller" 00:07:49.887 } 00:07:49.887 EOF 00:07:49.887 )") 00:07:49.887 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:07:49.887 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:07:49.887 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:07:49.887 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:07:49.887 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:07:49.887 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:07:49.887 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:07:49.887 { 00:07:49.887 "params": { 00:07:49.887 "name": "Nvme$subsystem", 00:07:49.887 "trtype": "$TEST_TRANSPORT", 00:07:49.887 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:49.887 "adrfam": "ipv4", 00:07:49.887 "trsvcid": "$NVMF_PORT", 00:07:49.887 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:49.887 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:49.887 "hdgst": ${hdgst:-false}, 00:07:49.887 "ddgst": ${ddgst:-false} 00:07:49.887 }, 00:07:49.887 "method": "bdev_nvme_attach_controller" 00:07:49.887 } 00:07:49.887 EOF 00:07:49.887 )") 00:07:49.887 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:07:49.887 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3216208 00:07:49.887 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@580 -- # cat 00:07:49.887 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:07:49.887 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:07:49.887 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:07:49.887 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:07:49.887 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:07:49.887 "params": { 00:07:49.887 "name": "Nvme1", 00:07:49.887 "trtype": "tcp", 00:07:49.887 "traddr": "10.0.0.2", 00:07:49.887 "adrfam": "ipv4", 00:07:49.887 "trsvcid": "4420", 00:07:49.887 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:49.887 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:49.887 "hdgst": false, 00:07:49.887 "ddgst": false 00:07:49.887 }, 00:07:49.887 "method": "bdev_nvme_attach_controller" 00:07:49.887 }' 00:07:49.887 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 
00:07:49.887 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:07:49.887 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:07:49.887 "params": { 00:07:49.887 "name": "Nvme1", 00:07:49.887 "trtype": "tcp", 00:07:49.887 "traddr": "10.0.0.2", 00:07:49.887 "adrfam": "ipv4", 00:07:49.887 "trsvcid": "4420", 00:07:49.887 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:49.887 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:49.887 "hdgst": false, 00:07:49.887 "ddgst": false 00:07:49.887 }, 00:07:49.887 "method": "bdev_nvme_attach_controller" 00:07:49.887 }' 00:07:49.887 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:07:49.887 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:07:49.887 "params": { 00:07:49.887 "name": "Nvme1", 00:07:49.887 "trtype": "tcp", 00:07:49.887 "traddr": "10.0.0.2", 00:07:49.887 "adrfam": "ipv4", 00:07:49.887 "trsvcid": "4420", 00:07:49.887 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:49.887 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:49.887 "hdgst": false, 00:07:49.887 "ddgst": false 00:07:49.888 }, 00:07:49.888 "method": "bdev_nvme_attach_controller" 00:07:49.888 }' 00:07:49.888 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:07:49.888 14:21:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:07:49.888 "params": { 00:07:49.888 "name": "Nvme1", 00:07:49.888 "trtype": "tcp", 00:07:49.888 "traddr": "10.0.0.2", 00:07:49.888 "adrfam": "ipv4", 00:07:49.888 "trsvcid": "4420", 00:07:49.888 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:49.888 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:49.888 "hdgst": false, 00:07:49.888 "ddgst": false 00:07:49.888 }, 00:07:49.888 "method": "bdev_nvme_attach_controller" 00:07:49.888 }' 00:07:49.888 [2024-10-14 14:21:30.451255] Starting SPDK v25.01-pre git sha1 
118c273ab / DPDK 24.03.0 initialization... 00:07:49.888 [2024-10-14 14:21:30.451308] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:07:49.888 [2024-10-14 14:21:30.451639] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:07:49.888 [2024-10-14 14:21:30.451686] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:07:49.888 [2024-10-14 14:21:30.452502] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:07:49.888 [2024-10-14 14:21:30.452550] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:07:49.888 [2024-10-14 14:21:30.455946] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 
00:07:49.888 [2024-10-14 14:21:30.455992] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:07:49.888 [2024-10-14 14:21:30.605555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.149 [2024-10-14 14:21:30.634650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:50.149 [2024-10-14 14:21:30.661173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.149 [2024-10-14 14:21:30.690814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:07:50.149 [2024-10-14 14:21:30.710785] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.149 [2024-10-14 14:21:30.739434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:07:50.149 [2024-10-14 14:21:30.757118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.149 [2024-10-14 14:21:30.785739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:07:50.149 Running I/O for 1 seconds... 00:07:50.149 Running I/O for 1 seconds... 00:07:50.410 Running I/O for 1 seconds... 00:07:50.410 Running I/O for 1 seconds... 
00:07:51.354 188568.00 IOPS, 736.59 MiB/s 00:07:51.354 Latency(us) 00:07:51.354 [2024-10-14T12:21:32.081Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:51.354 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:07:51.354 Nvme1n1 : 1.00 188191.15 735.12 0.00 0.00 676.91 298.67 1966.08 00:07:51.354 [2024-10-14T12:21:32.081Z] =================================================================================================================== 00:07:51.354 [2024-10-14T12:21:32.081Z] Total : 188191.15 735.12 0.00 0.00 676.91 298.67 1966.08 00:07:51.354 7902.00 IOPS, 30.87 MiB/s 00:07:51.354 Latency(us) 00:07:51.354 [2024-10-14T12:21:32.081Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:51.354 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:07:51.354 Nvme1n1 : 1.02 7914.01 30.91 0.00 0.00 16058.56 7318.19 26978.99 00:07:51.354 [2024-10-14T12:21:32.081Z] =================================================================================================================== 00:07:51.354 [2024-10-14T12:21:32.081Z] Total : 7914.01 30.91 0.00 0.00 16058.56 7318.19 26978.99 00:07:51.354 20094.00 IOPS, 78.49 MiB/s 00:07:51.354 Latency(us) 00:07:51.354 [2024-10-14T12:21:32.081Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:51.354 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:07:51.354 Nvme1n1 : 1.01 20158.09 78.74 0.00 0.00 6333.58 2990.08 17039.36 00:07:51.354 [2024-10-14T12:21:32.081Z] =================================================================================================================== 00:07:51.354 [2024-10-14T12:21:32.081Z] Total : 20158.09 78.74 0.00 0.00 6333.58 2990.08 17039.36 00:07:51.354 14:21:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3216211 00:07:51.354 7558.00 IOPS, 29.52 MiB/s 00:07:51.354 Latency(us) 00:07:51.354 
[2024-10-14T12:21:32.081Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:51.354 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:07:51.354 Nvme1n1 : 1.01 7649.65 29.88 0.00 0.00 16689.66 3713.71 39976.96 00:07:51.354 [2024-10-14T12:21:32.081Z] =================================================================================================================== 00:07:51.354 [2024-10-14T12:21:32.081Z] Total : 7649.65 29.88 0.00 0.00 16689.66 3713.71 39976.96 00:07:51.354 14:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3216213 00:07:51.354 14:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3216216 00:07:51.354 14:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:51.354 14:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.354 14:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:51.615 14:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.615 14:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:07:51.615 14:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:07:51.615 14:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # nvmfcleanup 00:07:51.615 14:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:07:51.615 14:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:51.615 14:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:07:51.615 14:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i 
in {1..20} 00:07:51.615 14:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:51.615 rmmod nvme_tcp 00:07:51.615 rmmod nvme_fabrics 00:07:51.615 rmmod nvme_keyring 00:07:51.615 14:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:51.615 14:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:07:51.615 14:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:07:51.615 14:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@515 -- # '[' -n 3216138 ']' 00:07:51.615 14:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # killprocess 3216138 00:07:51.615 14:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 3216138 ']' 00:07:51.615 14:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 3216138 00:07:51.615 14:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:07:51.615 14:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:51.615 14:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3216138 00:07:51.615 14:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:51.616 14:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:51.616 14:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3216138' 00:07:51.616 killing process with pid 3216138 00:07:51.616 14:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 3216138 00:07:51.616 14:21:32 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 3216138 00:07:51.616 14:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:07:51.616 14:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:07:51.616 14:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:07:51.616 14:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:07:51.616 14:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-save 00:07:51.616 14:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:07:51.616 14:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-restore 00:07:51.877 14:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:51.877 14:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:51.877 14:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:51.877 14:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:51.877 14:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:53.792 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:53.792 00:07:53.792 real 0m12.444s 00:07:53.792 user 0m18.220s 00:07:53.792 sys 0m6.811s 00:07:53.792 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:53.792 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:53.792 ************************************ 
00:07:53.792 END TEST nvmf_bdev_io_wait 00:07:53.792 ************************************ 00:07:53.792 14:21:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:53.792 14:21:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:53.792 14:21:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:53.792 14:21:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:53.792 ************************************ 00:07:53.792 START TEST nvmf_queue_depth 00:07:53.792 ************************************ 00:07:53.792 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:54.054 * Looking for test storage... 00:07:54.054 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:54.054 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:54.054 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:07:54.054 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:54.054 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:54.054 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:54.054 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:54.054 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:54.055 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # 
IFS=.-: 00:07:54.055 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:07:54.055 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:07:54.055 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:07:54.055 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:07:54.055 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:07:54.055 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:07:54.055 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:54.055 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:07:54.055 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:07:54.055 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:54.055 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:54.055 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:07:54.055 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:07:54.055 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:54.055 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:07:54.055 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:07:54.055 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:07:54.055 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:07:54.055 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:54.055 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:07:54.055 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:07:54.055 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:54.055 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:54.055 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:07:54.055 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:54.055 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:54.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.055 --rc genhtml_branch_coverage=1 00:07:54.055 --rc genhtml_function_coverage=1 00:07:54.055 --rc genhtml_legend=1 00:07:54.055 --rc geninfo_all_blocks=1 00:07:54.055 --rc 
geninfo_unexecuted_blocks=1 00:07:54.055 00:07:54.055 ' 00:07:54.055 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:54.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.055 --rc genhtml_branch_coverage=1 00:07:54.055 --rc genhtml_function_coverage=1 00:07:54.055 --rc genhtml_legend=1 00:07:54.055 --rc geninfo_all_blocks=1 00:07:54.055 --rc geninfo_unexecuted_blocks=1 00:07:54.055 00:07:54.055 ' 00:07:54.055 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:54.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.055 --rc genhtml_branch_coverage=1 00:07:54.055 --rc genhtml_function_coverage=1 00:07:54.055 --rc genhtml_legend=1 00:07:54.055 --rc geninfo_all_blocks=1 00:07:54.055 --rc geninfo_unexecuted_blocks=1 00:07:54.055 00:07:54.055 ' 00:07:54.055 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:54.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.055 --rc genhtml_branch_coverage=1 00:07:54.055 --rc genhtml_function_coverage=1 00:07:54.055 --rc genhtml_legend=1 00:07:54.055 --rc geninfo_all_blocks=1 00:07:54.055 --rc geninfo_unexecuted_blocks=1 00:07:54.055 00:07:54.055 ' 00:07:54.055 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:54.055 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:07:54.055 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:54.055 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:54.055 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:54.055 14:21:34 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:54.055 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:54.055 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:54.055 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:54.055 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:54.055 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:54.055 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:54.055 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:54.055 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:54.055 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:54.055 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:54.055 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:54.055 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:54.055 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:54.055 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:07:54.055 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:07:54.055 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:54.055 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:54.055 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.055 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.055 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.055 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:07:54.055 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.055 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:07:54.055 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:54.055 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:54.055 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:54.055 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:54.055 14:21:34 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:54.055 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:54.055 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:54.055 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:54.055 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:54.055 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:54.056 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:07:54.056 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:07:54.056 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:54.056 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:07:54.056 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:07:54.056 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:54.056 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:54.056 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:54.056 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:54.056 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:54.056 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:54.056 14:21:34 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:54.056 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:07:54.056 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:07:54.056 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:07:54.056 14:21:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:02.461 14:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:02.461 14:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:08:02.461 14:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:02.461 14:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:02.461 14:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:02.461 14:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:02.461 14:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:02.461 14:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:08:02.461 14:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:02.461 14:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:08:02.461 14:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:08:02.461 14:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:08:02.461 14:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:08:02.461 14:21:41 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:08:02.461 14:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:08:02.461 14:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:02.461 14:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:02.461 14:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:02.461 14:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:02.461 14:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:02.461 14:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:02.461 14:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:02.461 14:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:02.461 14:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:02.462 14:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:02.462 14:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:02.462 14:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:02.462 14:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:02.462 14:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:02.462 14:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:02.462 14:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:02.462 14:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:02.462 14:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:02.462 14:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:02.462 14:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:02.462 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:02.462 14:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:02.462 14:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:02.462 14:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:02.462 14:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:02.462 14:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:02.462 14:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:02.462 14:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:02.462 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:02.462 14:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:02.462 14:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:02.462 14:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:08:02.462 14:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:02.462 14:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:02.462 14:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:02.462 14:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:02.462 14:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:02.462 14:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:02.462 14:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:02.462 14:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:02.462 14:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:02.462 14:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:02.462 14:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:02.462 14:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:02.462 14:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:02.462 Found net devices under 0000:31:00.0: cvl_0_0 00:08:02.462 14:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:02.462 14:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:02.462 14:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:02.462 14:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:02.462 14:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:02.462 14:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:02.462 14:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:02.462 14:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:02.462 14:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:02.462 Found net devices under 0000:31:00.1: cvl_0_1 00:08:02.462 14:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:02.462 14:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:02.462 14:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # is_hw=yes 00:08:02.462 14:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:02.462 14:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:08:02.462 14:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:08:02.462 14:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:02.462 14:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:02.462 14:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:02.462 14:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:02.462 
14:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:02.462 14:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:02.462 14:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:02.462 14:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:02.462 14:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:02.462 14:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:02.462 14:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:02.462 14:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:02.462 14:21:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:02.462 14:21:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:02.462 14:21:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:02.462 14:21:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:02.462 14:21:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:02.462 14:21:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:02.462 14:21:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:02.462 14:21:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:08:02.462 14:21:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:02.462 14:21:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:02.462 14:21:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:02.462 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:02.462 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.586 ms 00:08:02.462 00:08:02.462 --- 10.0.0.2 ping statistics --- 00:08:02.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:02.462 rtt min/avg/max/mdev = 0.586/0.586/0.586/0.000 ms 00:08:02.462 14:21:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:02.462 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:02.462 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:08:02.462 00:08:02.462 --- 10.0.0.1 ping statistics --- 00:08:02.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:02.462 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:08:02.462 14:21:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:02.462 14:21:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # return 0 00:08:02.462 14:21:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:02.462 14:21:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:02.462 14:21:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:02.462 14:21:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:02.462 14:21:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:02.462 14:21:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:02.462 14:21:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:02.462 14:21:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:02.462 14:21:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:02.462 14:21:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:02.462 14:21:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:02.462 14:21:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # nvmfpid=3220950 00:08:02.462 14:21:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # waitforlisten 
3220950 00:08:02.462 14:21:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:02.462 14:21:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 3220950 ']' 00:08:02.462 14:21:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:02.462 14:21:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:02.462 14:21:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:02.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:02.462 14:21:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:02.462 14:21:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:02.462 [2024-10-14 14:21:42.407084] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:08:02.462 [2024-10-14 14:21:42.407135] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:02.462 [2024-10-14 14:21:42.497266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.462 [2024-10-14 14:21:42.532233] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:02.462 [2024-10-14 14:21:42.532265] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:02.462 [2024-10-14 14:21:42.532273] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:02.463 [2024-10-14 14:21:42.532279] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:02.463 [2024-10-14 14:21:42.532289] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:02.463 [2024-10-14 14:21:42.532887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:02.463 14:21:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:02.463 14:21:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:08:02.463 14:21:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:02.463 14:21:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:02.463 14:21:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:02.463 14:21:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:02.463 14:21:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:02.463 14:21:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.463 14:21:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:02.463 [2024-10-14 14:21:42.659886] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:02.463 14:21:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.463 14:21:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:08:02.463 14:21:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.463 14:21:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:02.463 Malloc0 00:08:02.463 14:21:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.463 14:21:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:02.463 14:21:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.463 14:21:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:02.463 14:21:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.463 14:21:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:02.463 14:21:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.463 14:21:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:02.463 14:21:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.463 14:21:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:02.463 14:21:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.463 14:21:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:02.463 [2024-10-14 14:21:42.700409] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:02.463 14:21:42 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.463 14:21:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3221083 00:08:02.463 14:21:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:02.463 14:21:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:02.463 14:21:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3221083 /var/tmp/bdevperf.sock 00:08:02.463 14:21:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 3221083 ']' 00:08:02.463 14:21:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:02.463 14:21:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:02.463 14:21:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:02.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:02.463 14:21:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:02.463 14:21:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:02.463 [2024-10-14 14:21:42.757998] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 
00:08:02.463 [2024-10-14 14:21:42.758059] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3221083 ] 00:08:02.463 [2024-10-14 14:21:42.824399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.463 [2024-10-14 14:21:42.867781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.463 14:21:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:02.463 14:21:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:08:02.463 14:21:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:02.463 14:21:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.463 14:21:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:02.463 NVMe0n1 00:08:02.463 14:21:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.463 14:21:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:02.724 Running I/O for 10 seconds... 
00:08:04.609 8744.00 IOPS, 34.16 MiB/s [2024-10-14T12:21:46.722Z] 9700.00 IOPS, 37.89 MiB/s [2024-10-14T12:21:47.294Z] 10286.00 IOPS, 40.18 MiB/s [2024-10-14T12:21:48.678Z] 10697.50 IOPS, 41.79 MiB/s [2024-10-14T12:21:49.620Z] 10853.20 IOPS, 42.40 MiB/s [2024-10-14T12:21:50.563Z] 10944.83 IOPS, 42.75 MiB/s [2024-10-14T12:21:51.505Z] 11076.57 IOPS, 43.27 MiB/s [2024-10-14T12:21:52.448Z] 11136.75 IOPS, 43.50 MiB/s [2024-10-14T12:21:53.391Z] 11195.56 IOPS, 43.73 MiB/s [2024-10-14T12:21:53.652Z] 11262.30 IOPS, 43.99 MiB/s 00:08:12.925 Latency(us) 00:08:12.925 [2024-10-14T12:21:53.652Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:12.925 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:12.925 Verification LBA range: start 0x0 length 0x4000 00:08:12.925 NVMe0n1 : 10.10 11219.73 43.83 0.00 0.00 90555.38 24685.23 70341.97 00:08:12.925 [2024-10-14T12:21:53.652Z] =================================================================================================================== 00:08:12.925 [2024-10-14T12:21:53.652Z] Total : 11219.73 43.83 0.00 0.00 90555.38 24685.23 70341.97 00:08:12.925 { 00:08:12.925 "results": [ 00:08:12.925 { 00:08:12.925 "job": "NVMe0n1", 00:08:12.925 "core_mask": "0x1", 00:08:12.925 "workload": "verify", 00:08:12.925 "status": "finished", 00:08:12.925 "verify_range": { 00:08:12.925 "start": 0, 00:08:12.925 "length": 16384 00:08:12.925 }, 00:08:12.925 "queue_depth": 1024, 00:08:12.925 "io_size": 4096, 00:08:12.925 "runtime": 10.103456, 00:08:12.925 "iops": 11219.725210858542, 00:08:12.925 "mibps": 43.82705160491618, 00:08:12.925 "io_failed": 0, 00:08:12.925 "io_timeout": 0, 00:08:12.925 "avg_latency_us": 90555.38031087352, 00:08:12.925 "min_latency_us": 24685.226666666666, 00:08:12.925 "max_latency_us": 70341.97333333333 00:08:12.925 } 00:08:12.925 ], 00:08:12.925 "core_count": 1 00:08:12.925 } 00:08:12.925 14:21:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # 
killprocess 3221083 00:08:12.925 14:21:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 3221083 ']' 00:08:12.925 14:21:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 3221083 00:08:12.925 14:21:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:08:12.925 14:21:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:12.925 14:21:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3221083 00:08:12.925 14:21:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:12.925 14:21:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:12.925 14:21:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3221083' 00:08:12.925 killing process with pid 3221083 00:08:12.925 14:21:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 3221083 00:08:12.925 Received shutdown signal, test time was about 10.000000 seconds 00:08:12.925 00:08:12.925 Latency(us) 00:08:12.925 [2024-10-14T12:21:53.652Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:12.925 [2024-10-14T12:21:53.652Z] =================================================================================================================== 00:08:12.925 [2024-10-14T12:21:53.652Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:12.925 14:21:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 3221083 00:08:12.925 14:21:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:12.925 14:21:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # 
nvmftestfini 00:08:12.925 14:21:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:12.925 14:21:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:12.925 14:21:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:12.925 14:21:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:12.925 14:21:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:12.925 14:21:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:12.925 rmmod nvme_tcp 00:08:12.925 rmmod nvme_fabrics 00:08:12.925 rmmod nvme_keyring 00:08:13.186 14:21:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:13.186 14:21:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:13.186 14:21:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:13.186 14:21:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@515 -- # '[' -n 3220950 ']' 00:08:13.186 14:21:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # killprocess 3220950 00:08:13.186 14:21:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 3220950 ']' 00:08:13.186 14:21:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 3220950 00:08:13.186 14:21:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:08:13.186 14:21:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:13.186 14:21:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3220950 00:08:13.186 14:21:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # 
process_name=reactor_1 00:08:13.186 14:21:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:13.186 14:21:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3220950' 00:08:13.186 killing process with pid 3220950 00:08:13.186 14:21:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 3220950 00:08:13.186 14:21:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 3220950 00:08:13.186 14:21:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:13.186 14:21:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:13.186 14:21:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:13.186 14:21:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:13.186 14:21:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-save 00:08:13.186 14:21:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:13.186 14:21:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-restore 00:08:13.186 14:21:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:13.186 14:21:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:13.186 14:21:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:13.186 14:21:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:13.186 14:21:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:15.733 14:21:55 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:15.733 00:08:15.733 real 0m21.427s 00:08:15.733 user 0m24.078s 00:08:15.733 sys 0m6.849s 00:08:15.733 14:21:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:15.733 14:21:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:15.733 ************************************ 00:08:15.733 END TEST nvmf_queue_depth 00:08:15.733 ************************************ 00:08:15.733 14:21:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:15.733 14:21:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:15.733 14:21:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:15.733 14:21:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:15.733 ************************************ 00:08:15.733 START TEST nvmf_target_multipath 00:08:15.733 ************************************ 00:08:15.733 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:15.733 * Looking for test storage... 
00:08:15.733 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:15.733 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:15.733 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:08:15.733 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:15.733 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:15.733 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:15.733 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:15.733 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:15.733 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:15.733 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:15.733 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:15.733 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:15.733 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:15.733 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:15.733 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:15.733 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:15.733 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:15.733 14:21:56 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:15.733 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:15.733 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:15.733 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:15.733 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:15.733 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:15.733 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:15.733 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:15.733 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:15.733 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:15.733 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:15.733 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:15.733 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:15.733 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:15.733 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:15.733 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:15.733 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:08:15.733 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:15.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.733 --rc genhtml_branch_coverage=1 00:08:15.733 --rc genhtml_function_coverage=1 00:08:15.733 --rc genhtml_legend=1 00:08:15.733 --rc geninfo_all_blocks=1 00:08:15.733 --rc geninfo_unexecuted_blocks=1 00:08:15.733 00:08:15.733 ' 00:08:15.734 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:15.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.734 --rc genhtml_branch_coverage=1 00:08:15.734 --rc genhtml_function_coverage=1 00:08:15.734 --rc genhtml_legend=1 00:08:15.734 --rc geninfo_all_blocks=1 00:08:15.734 --rc geninfo_unexecuted_blocks=1 00:08:15.734 00:08:15.734 ' 00:08:15.734 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:15.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.734 --rc genhtml_branch_coverage=1 00:08:15.734 --rc genhtml_function_coverage=1 00:08:15.734 --rc genhtml_legend=1 00:08:15.734 --rc geninfo_all_blocks=1 00:08:15.734 --rc geninfo_unexecuted_blocks=1 00:08:15.734 00:08:15.734 ' 00:08:15.734 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:15.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.734 --rc genhtml_branch_coverage=1 00:08:15.734 --rc genhtml_function_coverage=1 00:08:15.734 --rc genhtml_legend=1 00:08:15.734 --rc geninfo_all_blocks=1 00:08:15.734 --rc geninfo_unexecuted_blocks=1 00:08:15.734 00:08:15.734 ' 00:08:15.734 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:15.734 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:08:15.734 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:15.734 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:15.734 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:15.734 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:15.734 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:15.734 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:15.734 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:15.734 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:15.734 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:15.734 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:15.734 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:15.734 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:15.734 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:15.734 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:15.734 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:15.734 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:15.734 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:15.734 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:15.734 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:15.734 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:15.734 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:15.734 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.734 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.734 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.734 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:15.734 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.734 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:15.734 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:15.734 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:15.734 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:15.734 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:15.734 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:15.734 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:15.734 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:15.734 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:15.734 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:15.734 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:15.734 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:08:15.734 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:15.734 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:15.734 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:15.734 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:15.734 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:15.734 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:15.734 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:15.734 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:15.734 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:15.734 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:15.734 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:15.734 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:15.734 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:15.734 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:15.734 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:08:15.734 14:21:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:08:23.879 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:23.879 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:08:23.879 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:23.879 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:23.879 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:23.879 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:23.879 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:23.879 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:08:23.879 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:23.879 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:08:23.879 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:08:23.879 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:08:23.879 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:08:23.879 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:08:23.879 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:08:23.879 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:23.879 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:23.879 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:23.879 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:23.879 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:23.879 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:23.880 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:23.880 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:23.880 Found net devices under 0000:31:00.0: cvl_0_0 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:23.880 14:22:03 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:23.880 Found net devices under 0000:31:00.1: cvl_0_1 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # is_hw=yes 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
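The net-device lookup in the trace (common.sh@409/@425) globs `/sys/bus/pci/devices/<bdf>/net/*` for each PCI function and strips the directory prefix with `${path##*/}` to get kernel interface names like `cvl_0_0`. A self-contained sketch of the same idiom, using a mock sysfs tree under `mktemp -d` (an assumption for illustration, so the sketch runs without real hardware):

```shell
# Hedged sketch of the sysfs net-device lookup in the trace above.
# A mock directory stands in for /sys/bus/pci/devices; the real script
# uses bash arrays, this POSIX variant accumulates a string instead.
sysfs=$(mktemp -d)
mkdir -p "$sysfs/0000:31:00.0/net/cvl_0_0" "$sysfs/0000:31:00.1/net/cvl_0_1"
net_devs=""
for pci in 0000:31:00.0 0000:31:00.1; do
    for path in "$sysfs/$pci/net/"*; do   # glob, as in common.sh@409
        dev=${path##*/}                   # strip dir prefix, as in common.sh@425
        echo "Found net devices under $pci: $dev"
        net_devs="${net_devs:+$net_devs }$dev"
    done
done
rm -rf "$sysfs"
```

With two single-port functions this yields the two "Found net devices under ..." lines seen in the log, and `net_devs` ends up holding both interfaces for the later `TCP_INTERFACE_LIST` assignment.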
00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:23.880 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:23.880 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.657 ms 00:08:23.880 00:08:23.880 --- 10.0.0.2 ping statistics --- 00:08:23.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:23.880 rtt min/avg/max/mdev = 0.657/0.657/0.657/0.000 ms 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:23.880 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
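The `nvmf_tcp_init` sequence above lets a single host act as both initiator and target: one E810 port (`cvl_0_0`, 10.0.0.2) is moved into a private network namespace while the other (`cvl_0_1`, 10.0.0.1) stays in the root namespace, an iptables ACCEPT rule opens port 4420, and a ping in each direction verifies reachability. A dry-run sketch of that topology, where a hypothetical `run` helper echoes each command instead of executing it so no root or NICs are needed:

```shell
# Dry-run sketch of the namespace topology nvmf_tcp_init builds above;
# 'run' only echoes, so the sequence is inspectable without privileges.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk                          # target-side namespace
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"         # target port into the netns
run ip addr add 10.0.0.1/24 dev cvl_0_1     # initiator IP, root namespace
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
# The ACCEPT rule is tagged with an SPDK_NVMF comment so teardown can
# later drop exactly these rules via iptables-save | grep -v SPDK_NVMF.
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF: test rule'
run ping -c 1 10.0.0.2                      # initiator -> target check
run ip netns exec "$NS" ping -c 1 10.0.0.1  # target -> initiator check
```

The comment-tagging trick is what makes the later `iptr` cleanup (`iptables-save | grep -v SPDK_NVMF | iptables-restore`) safe: it removes only the rules this test inserted.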
00:08:23.880 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:08:23.880 00:08:23.880 --- 10.0.0.1 ping statistics --- 00:08:23.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:23.880 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # return 0 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:08:23.880 only one NIC for nvmf test 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:23.880 14:22:03 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:23.880 rmmod nvme_tcp 00:08:23.880 rmmod nvme_fabrics 00:08:23.880 rmmod nvme_keyring 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:23.880 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:23.881 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:23.881 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:23.881 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:08:23.881 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:23.881 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:08:23.881 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:23.881 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:08:23.881 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:23.881 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:23.881 14:22:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:25.266 14:22:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:25.266 14:22:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:08:25.266 14:22:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:08:25.266 14:22:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:25.266 14:22:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:25.266 14:22:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:25.266 14:22:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:25.266 14:22:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:25.266 14:22:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:25.266 14:22:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:25.266 14:22:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:25.266 14:22:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:25.266 14:22:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:08:25.266 14:22:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' 
'' == iso ']' 00:08:25.266 14:22:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:25.266 14:22:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:25.266 14:22:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:25.266 14:22:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:08:25.266 14:22:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:25.266 14:22:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:08:25.266 14:22:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:25.266 14:22:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:25.266 14:22:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:25.266 14:22:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:25.266 14:22:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:25.266 14:22:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:25.266 00:08:25.266 real 0m9.927s 00:08:25.266 user 0m2.185s 00:08:25.266 sys 0m5.678s 00:08:25.266 14:22:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:25.266 14:22:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:25.266 ************************************ 00:08:25.266 END TEST nvmf_target_multipath 00:08:25.266 ************************************ 00:08:25.266 14:22:05 nvmf_tcp.nvmf_target_core 
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:25.266 14:22:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:25.266 14:22:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:25.266 14:22:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:25.528 ************************************ 00:08:25.528 START TEST nvmf_zcopy 00:08:25.528 ************************************ 00:08:25.528 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:25.528 * Looking for test storage... 00:08:25.528 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:25.528 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:25.528 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:08:25.528 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:25.528 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:25.528 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:25.528 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:25.528 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:25.528 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:25.528 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:25.528 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
00:08:25.528 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:25.528 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:25.528 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:25.528 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:25.528 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:25.528 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:25.528 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:25.528 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:25.528 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:25.528 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:25.528 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:25.528 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:25.528 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:25.528 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:25.528 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:25.528 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:25.528 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:25.528 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:25.528 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:25.528 14:22:06 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:25.528 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:25.528 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:25.528 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:25.528 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:25.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.528 --rc genhtml_branch_coverage=1 00:08:25.528 --rc genhtml_function_coverage=1 00:08:25.528 --rc genhtml_legend=1 00:08:25.528 --rc geninfo_all_blocks=1 00:08:25.528 --rc geninfo_unexecuted_blocks=1 00:08:25.528 00:08:25.528 ' 00:08:25.528 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:25.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.528 --rc genhtml_branch_coverage=1 00:08:25.528 --rc genhtml_function_coverage=1 00:08:25.528 --rc genhtml_legend=1 00:08:25.528 --rc geninfo_all_blocks=1 00:08:25.528 --rc geninfo_unexecuted_blocks=1 00:08:25.528 00:08:25.528 ' 00:08:25.528 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:25.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.528 --rc genhtml_branch_coverage=1 00:08:25.528 --rc genhtml_function_coverage=1 00:08:25.528 --rc genhtml_legend=1 00:08:25.528 --rc geninfo_all_blocks=1 00:08:25.528 --rc geninfo_unexecuted_blocks=1 00:08:25.528 00:08:25.528 ' 00:08:25.528 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:25.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.528 --rc genhtml_branch_coverage=1 00:08:25.529 --rc 
genhtml_function_coverage=1 00:08:25.529 --rc genhtml_legend=1 00:08:25.529 --rc geninfo_all_blocks=1 00:08:25.529 --rc geninfo_unexecuted_blocks=1 00:08:25.529 00:08:25.529 ' 00:08:25.529 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:25.529 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:25.529 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:25.529 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:25.529 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:25.529 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:25.529 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:25.529 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:25.529 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:25.529 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:25.529 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:25.529 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:25.529 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:25.529 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:25.529 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:25.529 14:22:06 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:25.529 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:25.529 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:25.529 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:25.529 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:25.529 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:25.529 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:25.529 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:25.529 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.529 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.529 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.529 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:25.529 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.529 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:25.529 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:25.529 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:25.529 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:25.529 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:25.529 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:25.529 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:25.529 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:25.529 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:25.529 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:25.529 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:25.529 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:25.529 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:25.529 14:22:06 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:25.529 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:25.529 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:25.529 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:25.529 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:25.529 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:25.529 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:25.529 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:25.529 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:25.529 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:08:25.529 14:22:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:33.667 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:33.667 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:08:33.667 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:33.667 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:33.667 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:33.667 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:33.667 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:33.667 14:22:13 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:08:33.667 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:33.667 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:08:33.667 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:08:33.667 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:08:33.667 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:08:33.667 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:08:33.667 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:08:33.667 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:33.667 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:33.667 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:33.667 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:33.667 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:33.667 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:33.667 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:33.667 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:33.667 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:33.667 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:33.667 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:33.667 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:33.667 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:33.667 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:33.667 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:33.667 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:33.667 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:33.667 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:33.667 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:33.667 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:33.667 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:33.667 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:33.667 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:33.667 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:33.667 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:33.667 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:33.667 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:33.667 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:33.667 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:33.667 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:33.667 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:33.667 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:33.667 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:33.667 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:33.667 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:33.667 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:33.667 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:33.668 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:33.668 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:33.668 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:33.668 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:33.668 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:33.668 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:33.668 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:33.668 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:33.668 Found net devices under 0000:31:00.0: cvl_0_0 00:08:33.668 14:22:13 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:33.668 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:33.668 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:33.668 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:33.668 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:33.668 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:33.668 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:33.668 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:33.668 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:33.668 Found net devices under 0000:31:00.1: cvl_0_1 00:08:33.668 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:33.668 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:33.668 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # is_hw=yes 00:08:33.668 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:33.668 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:08:33.668 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:08:33.668 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:33.668 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:33.668 14:22:13 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:33.668 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:33.668 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:33.668 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:33.668 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:33.668 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:33.668 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:33.668 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:33.668 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:33.668 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:33.668 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:33.668 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:33.668 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:33.668 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:33.668 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:33.668 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:33.668 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:33.668 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:33.668 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:33.668 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:33.668 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:33.668 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:33.668 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.636 ms 00:08:33.668 00:08:33.668 --- 10.0.0.2 ping statistics --- 00:08:33.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:33.668 rtt min/avg/max/mdev = 0.636/0.636/0.636/0.000 ms 00:08:33.668 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:33.668 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:33.668 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:08:33.668 00:08:33.668 --- 10.0.0.1 ping statistics --- 00:08:33.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:33.668 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:08:33.668 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:33.668 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # return 0 00:08:33.668 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:33.668 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:33.668 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:33.668 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:33.668 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:33.668 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:33.668 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:33.668 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:33.668 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:33.668 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:33.668 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:33.668 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # nvmfpid=3231791 00:08:33.668 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # waitforlisten 3231791 00:08:33.668 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@506 -- # ip netns 
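The network plumbing that `nvmf_tcp_init` performs in the trace above (flush the two e810 ports, move the target port into a network namespace, address both ends, open TCP/4420, verify with ping in each direction) can be summarized as a standalone sketch. The interface names `cvl_0_0`/`cvl_0_1`, the namespace name, and the 10.0.0.0/24 addresses are taken from this run; the real commands need root, so the sketch defaults to printing them instead of executing.

```shell
# Sketch of the nvmf_tcp_init steps traced above (nvmf/common.sh).
# Names and addresses are from this log; set DRY_RUN=0 to actually
# run the commands (requires root), otherwise each one is printed.
TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
run() { if [ "${DRY_RUN:-1}" = 1 ]; then echo "+ $*"; else "$@"; fi; }

run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$INI_IF"
run ip netns add "$NS"                            # target side lives in a netns
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"         # initiator IP
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"   # target IP
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                            # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1        # target (in netns) -> initiator
```

Putting only one side of the link into a namespace is what lets a single host act as both NVMe/TCP target (10.0.0.2, inside `cvl_0_0_ns_spdk`) and initiator (10.0.0.1, in the root namespace), which is why the log launches `nvmf_tgt` via `ip netns exec` afterwards.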
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:33.668 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 3231791 ']' 00:08:33.668 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:33.668 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:33.668 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:33.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:33.668 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:33.668 14:22:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:33.668 [2024-10-14 14:22:13.545879] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:08:33.668 [2024-10-14 14:22:13.545945] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:33.668 [2024-10-14 14:22:13.636948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.668 [2024-10-14 14:22:13.687147] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:33.668 [2024-10-14 14:22:13.687201] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:33.668 [2024-10-14 14:22:13.687209] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:33.668 [2024-10-14 14:22:13.687216] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:33.668 [2024-10-14 14:22:13.687222] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:33.668 [2024-10-14 14:22:13.688011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:33.668 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:33.668 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:08:33.668 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:33.668 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:33.668 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:33.930 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:33.930 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:33.930 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:33.930 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.930 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:33.930 [2024-10-14 14:22:14.413337] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:33.930 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.930 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:33.930 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.930 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:33.930 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.930 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:33.930 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.930 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:33.930 [2024-10-14 14:22:14.429553] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:33.930 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.930 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:33.930 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.930 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:33.930 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.930 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:33.930 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.930 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:33.930 malloc0 00:08:33.930 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:08:33.930 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:33.930 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.930 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:33.930 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.930 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:33.930 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:33.930 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:08:33.930 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:08:33.930 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:33.930 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:33.930 { 00:08:33.930 "params": { 00:08:33.930 "name": "Nvme$subsystem", 00:08:33.930 "trtype": "$TEST_TRANSPORT", 00:08:33.930 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:33.930 "adrfam": "ipv4", 00:08:33.930 "trsvcid": "$NVMF_PORT", 00:08:33.930 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:33.930 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:33.930 "hdgst": ${hdgst:-false}, 00:08:33.930 "ddgst": ${ddgst:-false} 00:08:33.930 }, 00:08:33.930 "method": "bdev_nvme_attach_controller" 00:08:33.930 } 00:08:33.930 EOF 00:08:33.930 )") 00:08:33.930 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:08:33.930 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 
00:08:33.930 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:08:33.930 14:22:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:33.930 "params": { 00:08:33.930 "name": "Nvme1", 00:08:33.930 "trtype": "tcp", 00:08:33.930 "traddr": "10.0.0.2", 00:08:33.930 "adrfam": "ipv4", 00:08:33.930 "trsvcid": "4420", 00:08:33.930 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:33.930 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:33.930 "hdgst": false, 00:08:33.930 "ddgst": false 00:08:33.930 }, 00:08:33.930 "method": "bdev_nvme_attach_controller" 00:08:33.930 }' 00:08:33.930 [2024-10-14 14:22:14.528534] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:08:33.930 [2024-10-14 14:22:14.528607] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3232137 ] 00:08:33.930 [2024-10-14 14:22:14.596478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.930 [2024-10-14 14:22:14.640346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.191 Running I/O for 10 seconds... 
00:08:36.517 7626.00 IOPS, 59.58 MiB/s [2024-10-14T12:22:18.186Z] 8698.00 IOPS, 67.95 MiB/s [2024-10-14T12:22:19.125Z] 9050.33 IOPS, 70.71 MiB/s [2024-10-14T12:22:20.068Z] 9231.25 IOPS, 72.12 MiB/s [2024-10-14T12:22:21.011Z] 9339.60 IOPS, 72.97 MiB/s [2024-10-14T12:22:21.953Z] 9405.17 IOPS, 73.48 MiB/s [2024-10-14T12:22:22.897Z] 9456.43 IOPS, 73.88 MiB/s [2024-10-14T12:22:24.284Z] 9495.00 IOPS, 74.18 MiB/s [2024-10-14T12:22:24.856Z] 9524.89 IOPS, 74.41 MiB/s [2024-10-14T12:22:25.118Z] 9550.40 IOPS, 74.61 MiB/s 00:08:44.391 Latency(us) 00:08:44.391 [2024-10-14T12:22:25.118Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:44.391 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:08:44.391 Verification LBA range: start 0x0 length 0x1000 00:08:44.391 Nvme1n1 : 10.01 9551.81 74.62 0.00 0.00 13349.18 2061.65 27415.89 00:08:44.391 [2024-10-14T12:22:25.118Z] =================================================================================================================== 00:08:44.391 [2024-10-14T12:22:25.118Z] Total : 9551.81 74.62 0.00 0.00 13349.18 2061.65 27415.89 00:08:44.391 14:22:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3234165 00:08:44.391 14:22:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:08:44.391 14:22:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:44.391 14:22:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:08:44.391 14:22:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:08:44.391 14:22:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:08:44.391 14:22:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:08:44.391 14:22:24 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:44.391 14:22:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:44.391 { 00:08:44.391 "params": { 00:08:44.391 "name": "Nvme$subsystem", 00:08:44.391 "trtype": "$TEST_TRANSPORT", 00:08:44.391 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:44.391 "adrfam": "ipv4", 00:08:44.391 "trsvcid": "$NVMF_PORT", 00:08:44.391 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:44.391 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:44.391 "hdgst": ${hdgst:-false}, 00:08:44.391 "ddgst": ${ddgst:-false} 00:08:44.391 }, 00:08:44.391 "method": "bdev_nvme_attach_controller" 00:08:44.391 } 00:08:44.391 EOF 00:08:44.391 )") 00:08:44.391 [2024-10-14 14:22:24.981300] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.391 [2024-10-14 14:22:24.981326] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.391 14:22:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:08:44.391 14:22:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 
00:08:44.391 [2024-10-14 14:22:24.989290] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.391 [2024-10-14 14:22:24.989299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.391 14:22:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:08:44.391 14:22:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:44.391 "params": { 00:08:44.391 "name": "Nvme1", 00:08:44.391 "trtype": "tcp", 00:08:44.391 "traddr": "10.0.0.2", 00:08:44.391 "adrfam": "ipv4", 00:08:44.391 "trsvcid": "4420", 00:08:44.391 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:44.391 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:44.391 "hdgst": false, 00:08:44.391 "ddgst": false 00:08:44.391 }, 00:08:44.391 "method": "bdev_nvme_attach_controller" 00:08:44.391 }' 00:08:44.391 [2024-10-14 14:22:24.997309] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.391 [2024-10-14 14:22:24.997317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.391 [2024-10-14 14:22:25.005329] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.391 [2024-10-14 14:22:25.005336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.391 [2024-10-14 14:22:25.013349] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.391 [2024-10-14 14:22:25.013357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.391 [2024-10-14 14:22:25.025380] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.391 [2024-10-14 14:22:25.025388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.391 [2024-10-14 14:22:25.027618] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 
00:08:44.391 [2024-10-14 14:22:25.027662] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3234165 ] 00:08:44.391 [2024-10-14 14:22:25.033401] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.391 [2024-10-14 14:22:25.033408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.391 [2024-10-14 14:22:25.041420] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.391 [2024-10-14 14:22:25.041428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.391 [2024-10-14 14:22:25.049441] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.391 [2024-10-14 14:22:25.049449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.391 [2024-10-14 14:22:25.057461] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.391 [2024-10-14 14:22:25.057469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.391 [2024-10-14 14:22:25.065482] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.391 [2024-10-14 14:22:25.065489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.391 [2024-10-14 14:22:25.073502] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.391 [2024-10-14 14:22:25.073513] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.391 [2024-10-14 14:22:25.081522] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.391 [2024-10-14 14:22:25.081529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:08:44.391 [2024-10-14 14:22:25.088199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.391 [2024-10-14 14:22:25.089543] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.391 [2024-10-14 14:22:25.089549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.391 [2024-10-14 14:22:25.097566] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.391 [2024-10-14 14:22:25.097573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.392 [2024-10-14 14:22:25.105586] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.392 [2024-10-14 14:22:25.105593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.392 [2024-10-14 14:22:25.113606] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.392 [2024-10-14 14:22:25.113614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.653 [2024-10-14 14:22:25.121627] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.653 [2024-10-14 14:22:25.121636] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.653 [2024-10-14 14:22:25.123380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.653 [2024-10-14 14:22:25.129647] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.653 [2024-10-14 14:22:25.129654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.653 [2024-10-14 14:22:25.137674] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.653 [2024-10-14 14:22:25.137684] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.653 [2024-10-14 14:22:25.145692] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.653 [2024-10-14 14:22:25.145703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.653 [2024-10-14 14:22:25.153711] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.653 [2024-10-14 14:22:25.153721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.653 [2024-10-14 14:22:25.161729] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.653 [2024-10-14 14:22:25.161737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.653 [2024-10-14 14:22:25.169749] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.653 [2024-10-14 14:22:25.169756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.653 [2024-10-14 14:22:25.177770] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.653 [2024-10-14 14:22:25.177777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.653 [2024-10-14 14:22:25.185789] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.653 [2024-10-14 14:22:25.185796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.653 [2024-10-14 14:22:25.193811] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.653 [2024-10-14 14:22:25.193817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.653 [2024-10-14 14:22:25.201841] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.653 [2024-10-14 14:22:25.201855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.653 [2024-10-14 14:22:25.209856] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:44.653 [2024-10-14 14:22:25.209865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.653 [2024-10-14 14:22:25.217877] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.653 [2024-10-14 14:22:25.217888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.653 [2024-10-14 14:22:25.225898] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.653 [2024-10-14 14:22:25.225907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.653 [2024-10-14 14:22:25.233918] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.653 [2024-10-14 14:22:25.233924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.653 [2024-10-14 14:22:25.241938] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.653 [2024-10-14 14:22:25.241945] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.653 [2024-10-14 14:22:25.249959] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.653 [2024-10-14 14:22:25.249966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.653 [2024-10-14 14:22:25.257979] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.653 [2024-10-14 14:22:25.257986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.653 [2024-10-14 14:22:25.266001] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.653 [2024-10-14 14:22:25.266010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.653 [2024-10-14 14:22:25.274022] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.653 
[2024-10-14 14:22:25.274031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.653 [2024-10-14 14:22:25.282044] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.653 [2024-10-14 14:22:25.282053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.654 [2024-10-14 14:22:25.290068] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.654 [2024-10-14 14:22:25.290076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.654 [2024-10-14 14:22:25.298098] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.654 [2024-10-14 14:22:25.298114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.654 [2024-10-14 14:22:25.306111] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:44.654 [2024-10-14 14:22:25.306118] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.654 Running I/O for 5 seconds... 
00:08:44.654 
[2024-10-14 14:22:26.247748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.702 18999.00 IOPS, 148.43 MiB/s [2024-10-14T12:22:26.429Z] [2024-10-14 14:22:26.317302] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.702 
[2024-10-14 14:22:26.317316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.702 [2024-10-14 14:22:26.676990] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.964 
[2024-10-14 14:22:26.677004] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.964 [2024-10-14 14:22:26.685407] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.964 [2024-10-14 14:22:26.685421] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:45.964 [2024-10-14 14:22:26.694193] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:45.964 [2024-10-14 14:22:26.694208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.225 [2024-10-14 14:22:26.703116] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.225 [2024-10-14 14:22:26.703139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.225 [2024-10-14 14:22:26.712402] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.225 [2024-10-14 14:22:26.712417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.225 [2024-10-14 14:22:26.721458] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.225 [2024-10-14 14:22:26.721472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.225 [2024-10-14 14:22:26.730033] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.225 [2024-10-14 14:22:26.730047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.225 [2024-10-14 14:22:26.739299] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.225 [2024-10-14 14:22:26.739314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.225 [2024-10-14 14:22:26.748290] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.225 [2024-10-14 14:22:26.748304] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.225 [2024-10-14 14:22:26.756721] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.225 [2024-10-14 14:22:26.756736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.225 [2024-10-14 14:22:26.765593] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.225 [2024-10-14 14:22:26.765607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.225 [2024-10-14 14:22:26.774155] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.225 [2024-10-14 14:22:26.774169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.225 [2024-10-14 14:22:26.782875] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.225 [2024-10-14 14:22:26.782890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.225 [2024-10-14 14:22:26.792042] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.225 [2024-10-14 14:22:26.792057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.225 [2024-10-14 14:22:26.801132] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.225 [2024-10-14 14:22:26.801147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.225 [2024-10-14 14:22:26.810226] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.225 [2024-10-14 14:22:26.810241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.225 [2024-10-14 14:22:26.818810] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.225 [2024-10-14 14:22:26.818824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:46.225 [2024-10-14 14:22:26.827403] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.225 [2024-10-14 14:22:26.827417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.225 [2024-10-14 14:22:26.836116] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.225 [2024-10-14 14:22:26.836130] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.225 [2024-10-14 14:22:26.845249] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.225 [2024-10-14 14:22:26.845263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.225 [2024-10-14 14:22:26.853271] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.225 [2024-10-14 14:22:26.853286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.225 [2024-10-14 14:22:26.861834] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.225 [2024-10-14 14:22:26.861852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.225 [2024-10-14 14:22:26.870613] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.225 [2024-10-14 14:22:26.870627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.225 [2024-10-14 14:22:26.878732] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.225 [2024-10-14 14:22:26.878746] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.225 [2024-10-14 14:22:26.887389] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.225 [2024-10-14 14:22:26.887403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.225 [2024-10-14 14:22:26.896463] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.225 [2024-10-14 14:22:26.896477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.225 [2024-10-14 14:22:26.905563] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.225 [2024-10-14 14:22:26.905577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.225 [2024-10-14 14:22:26.914820] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.225 [2024-10-14 14:22:26.914834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.225 [2024-10-14 14:22:26.923324] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.225 [2024-10-14 14:22:26.923338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.225 [2024-10-14 14:22:26.932110] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.225 [2024-10-14 14:22:26.932123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.225 [2024-10-14 14:22:26.941207] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.226 [2024-10-14 14:22:26.941221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.226 [2024-10-14 14:22:26.949929] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.226 [2024-10-14 14:22:26.949943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.488 [2024-10-14 14:22:26.958809] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.488 [2024-10-14 14:22:26.958824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.488 [2024-10-14 14:22:26.967854] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:46.488 [2024-10-14 14:22:26.967868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.488 [2024-10-14 14:22:26.976517] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.488 [2024-10-14 14:22:26.976531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.488 [2024-10-14 14:22:26.985788] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.488 [2024-10-14 14:22:26.985802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.488 [2024-10-14 14:22:26.994471] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.488 [2024-10-14 14:22:26.994485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.488 [2024-10-14 14:22:27.003228] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.488 [2024-10-14 14:22:27.003242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.488 [2024-10-14 14:22:27.011887] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.488 [2024-10-14 14:22:27.011902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.488 [2024-10-14 14:22:27.020393] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.488 [2024-10-14 14:22:27.020407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.488 [2024-10-14 14:22:27.028816] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.488 [2024-10-14 14:22:27.028834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.488 [2024-10-14 14:22:27.037592] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.488 
[2024-10-14 14:22:27.037607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.488 [2024-10-14 14:22:27.046435] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.488 [2024-10-14 14:22:27.046451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.488 [2024-10-14 14:22:27.055100] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.488 [2024-10-14 14:22:27.055114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.488 [2024-10-14 14:22:27.064347] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.488 [2024-10-14 14:22:27.064362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.488 [2024-10-14 14:22:27.073214] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.488 [2024-10-14 14:22:27.073228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.488 [2024-10-14 14:22:27.081834] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.488 [2024-10-14 14:22:27.081848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.488 [2024-10-14 14:22:27.090959] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.488 [2024-10-14 14:22:27.090973] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.488 [2024-10-14 14:22:27.099551] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.488 [2024-10-14 14:22:27.099566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.488 [2024-10-14 14:22:27.108795] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.488 [2024-10-14 14:22:27.108809] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.488 [2024-10-14 14:22:27.117896] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.488 [2024-10-14 14:22:27.117910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.488 [2024-10-14 14:22:27.125991] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.488 [2024-10-14 14:22:27.126006] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.488 [2024-10-14 14:22:27.135023] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.488 [2024-10-14 14:22:27.135038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.488 [2024-10-14 14:22:27.143459] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.488 [2024-10-14 14:22:27.143475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.488 [2024-10-14 14:22:27.152187] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.488 [2024-10-14 14:22:27.152202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.488 [2024-10-14 14:22:27.160792] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.488 [2024-10-14 14:22:27.160808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.488 [2024-10-14 14:22:27.170032] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.488 [2024-10-14 14:22:27.170047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.488 [2024-10-14 14:22:27.178629] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.488 [2024-10-14 14:22:27.178645] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:46.488 [2024-10-14 14:22:27.187271] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.488 [2024-10-14 14:22:27.187286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.488 [2024-10-14 14:22:27.195931] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.488 [2024-10-14 14:22:27.195950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.488 [2024-10-14 14:22:27.204858] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.488 [2024-10-14 14:22:27.204873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.488 [2024-10-14 14:22:27.213380] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.488 [2024-10-14 14:22:27.213394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.751 [2024-10-14 14:22:27.222116] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.751 [2024-10-14 14:22:27.222131] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.751 [2024-10-14 14:22:27.231375] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.751 [2024-10-14 14:22:27.231390] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.751 [2024-10-14 14:22:27.240355] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.751 [2024-10-14 14:22:27.240370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.751 [2024-10-14 14:22:27.248916] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.751 [2024-10-14 14:22:27.248931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.751 [2024-10-14 14:22:27.257951] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.751 [2024-10-14 14:22:27.257966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.751 [2024-10-14 14:22:27.267038] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.751 [2024-10-14 14:22:27.267053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.751 [2024-10-14 14:22:27.276088] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.751 [2024-10-14 14:22:27.276103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.751 [2024-10-14 14:22:27.285209] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.751 [2024-10-14 14:22:27.285224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.751 [2024-10-14 14:22:27.294176] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.751 [2024-10-14 14:22:27.294191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.751 [2024-10-14 14:22:27.302675] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.751 [2024-10-14 14:22:27.302689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.751 [2024-10-14 14:22:27.311199] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.751 [2024-10-14 14:22:27.311214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.751 19142.00 IOPS, 149.55 MiB/s [2024-10-14T12:22:27.478Z] [2024-10-14 14:22:27.319792] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.751 [2024-10-14 14:22:27.319807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.751 [2024-10-14 14:22:27.328408] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.751 [2024-10-14 14:22:27.328422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.751 [2024-10-14 14:22:27.337192] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.751 [2024-10-14 14:22:27.337207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.751 [2024-10-14 14:22:27.346025] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.751 [2024-10-14 14:22:27.346040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.751 [2024-10-14 14:22:27.354664] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.751 [2024-10-14 14:22:27.354679] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.751 [2024-10-14 14:22:27.363373] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.751 [2024-10-14 14:22:27.363391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.751 [2024-10-14 14:22:27.371923] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.751 [2024-10-14 14:22:27.371938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.751 [2024-10-14 14:22:27.380623] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.751 [2024-10-14 14:22:27.380638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.751 [2024-10-14 14:22:27.389010] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.751 [2024-10-14 14:22:27.389025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.751 [2024-10-14 14:22:27.397888] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:46.751 [2024-10-14 14:22:27.397903] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.751 [2024-10-14 14:22:27.406330] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.751 [2024-10-14 14:22:27.406345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.751 [2024-10-14 14:22:27.414917] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.751 [2024-10-14 14:22:27.414932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.751 [2024-10-14 14:22:27.423290] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.751 [2024-10-14 14:22:27.423305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.751 [2024-10-14 14:22:27.431925] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.751 [2024-10-14 14:22:27.431939] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.751 [2024-10-14 14:22:27.439718] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.751 [2024-10-14 14:22:27.439732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.751 [2024-10-14 14:22:27.448826] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.751 [2024-10-14 14:22:27.448841] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.751 [2024-10-14 14:22:27.457536] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.751 [2024-10-14 14:22:27.457551] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.751 [2024-10-14 14:22:27.466637] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.751 
[2024-10-14 14:22:27.466652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:46.751 [2024-10-14 14:22:27.474525] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:46.751 [2024-10-14 14:22:27.474540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.014 [2024-10-14 14:22:27.483850] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.014 [2024-10-14 14:22:27.483865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.014 [2024-10-14 14:22:27.491827] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.014 [2024-10-14 14:22:27.491842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.014 [2024-10-14 14:22:27.500542] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.014 [2024-10-14 14:22:27.500557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.014 [2024-10-14 14:22:27.509625] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.014 [2024-10-14 14:22:27.509639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.014 [2024-10-14 14:22:27.518146] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.014 [2024-10-14 14:22:27.518160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.014 [2024-10-14 14:22:27.527344] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.014 [2024-10-14 14:22:27.527359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.014 [2024-10-14 14:22:27.536135] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.014 [2024-10-14 14:22:27.536149] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.014 [2024-10-14 14:22:27.545121] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.014 [2024-10-14 14:22:27.545136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.014 [2024-10-14 14:22:27.554158] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.014 [2024-10-14 14:22:27.554173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.014 [2024-10-14 14:22:27.562987] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.014 [2024-10-14 14:22:27.563001] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.014 [2024-10-14 14:22:27.571580] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.014 [2024-10-14 14:22:27.571595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.014 [2024-10-14 14:22:27.580903] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.014 [2024-10-14 14:22:27.580918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.014 [2024-10-14 14:22:27.589525] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.014 [2024-10-14 14:22:27.589540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.014 [2024-10-14 14:22:27.598242] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.014 [2024-10-14 14:22:27.598256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.014 [2024-10-14 14:22:27.606679] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.014 [2024-10-14 14:22:27.606694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:47.014 [2024-10-14 14:22:27.615546] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.014 [2024-10-14 14:22:27.615560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair (subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext "Requested NSID 1 already in use" followed by nvmf_rpc.c:1517:nvmf_rpc_ns_paused "Unable to add namespace") repeats with advancing timestamps, roughly every 9 ms, from 14:22:27.615546 through 14:22:29.120464; the duplicate occurrences have been collapsed here ...]
00:08:47.807 19195.67 IOPS, 149.97 MiB/s [2024-10-14T12:22:28.534Z]
00:08:48.594 [2024-10-14 14:22:29.120464] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.594 [2024-10-14 14:22:29.120478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to
add namespace 00:08:48.594 [2024-10-14 14:22:29.129713] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.594 [2024-10-14 14:22:29.129728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.594 [2024-10-14 14:22:29.137779] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.594 [2024-10-14 14:22:29.137793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.594 [2024-10-14 14:22:29.146607] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.594 [2024-10-14 14:22:29.146622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.594 [2024-10-14 14:22:29.155496] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.594 [2024-10-14 14:22:29.155511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.594 [2024-10-14 14:22:29.164576] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.594 [2024-10-14 14:22:29.164591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.594 [2024-10-14 14:22:29.173119] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.594 [2024-10-14 14:22:29.173134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.594 [2024-10-14 14:22:29.181925] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.594 [2024-10-14 14:22:29.181943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.594 [2024-10-14 14:22:29.191238] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.594 [2024-10-14 14:22:29.191254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.594 [2024-10-14 14:22:29.200437] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.594 [2024-10-14 14:22:29.200452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.594 [2024-10-14 14:22:29.209134] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.594 [2024-10-14 14:22:29.209149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.594 [2024-10-14 14:22:29.218072] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.594 [2024-10-14 14:22:29.218087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.594 [2024-10-14 14:22:29.226684] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.594 [2024-10-14 14:22:29.226698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.594 [2024-10-14 14:22:29.235407] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.594 [2024-10-14 14:22:29.235421] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.594 [2024-10-14 14:22:29.244639] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.594 [2024-10-14 14:22:29.244654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.594 [2024-10-14 14:22:29.253326] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.594 [2024-10-14 14:22:29.253340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.594 [2024-10-14 14:22:29.266775] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.594 [2024-10-14 14:22:29.266790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.594 [2024-10-14 14:22:29.274623] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:48.594 [2024-10-14 14:22:29.274638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.594 [2024-10-14 14:22:29.283500] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.594 [2024-10-14 14:22:29.283515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.594 [2024-10-14 14:22:29.292628] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.594 [2024-10-14 14:22:29.292643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.594 [2024-10-14 14:22:29.301416] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.594 [2024-10-14 14:22:29.301431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.594 [2024-10-14 14:22:29.310011] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.594 [2024-10-14 14:22:29.310026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.594 [2024-10-14 14:22:29.318597] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.594 [2024-10-14 14:22:29.318612] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.856 19208.75 IOPS, 150.07 MiB/s [2024-10-14T12:22:29.583Z] [2024-10-14 14:22:29.327075] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.856 [2024-10-14 14:22:29.327090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.856 [2024-10-14 14:22:29.335934] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.856 [2024-10-14 14:22:29.335949] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.856 [2024-10-14 14:22:29.344950] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:48.856 [2024-10-14 14:22:29.344965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.856 [2024-10-14 14:22:29.353616] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.856 [2024-10-14 14:22:29.353635] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.856 [2024-10-14 14:22:29.363090] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.856 [2024-10-14 14:22:29.363105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.856 [2024-10-14 14:22:29.371533] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.856 [2024-10-14 14:22:29.371547] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.856 [2024-10-14 14:22:29.379979] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.856 [2024-10-14 14:22:29.379994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.856 [2024-10-14 14:22:29.388788] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.856 [2024-10-14 14:22:29.388803] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.856 [2024-10-14 14:22:29.396766] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.856 [2024-10-14 14:22:29.396780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.856 [2024-10-14 14:22:29.406016] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.856 [2024-10-14 14:22:29.406030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.856 [2024-10-14 14:22:29.414531] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.856 
[2024-10-14 14:22:29.414546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.856 [2024-10-14 14:22:29.423158] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.856 [2024-10-14 14:22:29.423172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.856 [2024-10-14 14:22:29.431897] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.856 [2024-10-14 14:22:29.431912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.856 [2024-10-14 14:22:29.440614] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.856 [2024-10-14 14:22:29.440629] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.856 [2024-10-14 14:22:29.449632] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.856 [2024-10-14 14:22:29.449646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.856 [2024-10-14 14:22:29.458249] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.856 [2024-10-14 14:22:29.458264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.856 [2024-10-14 14:22:29.466679] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.856 [2024-10-14 14:22:29.466694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.856 [2024-10-14 14:22:29.475046] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.856 [2024-10-14 14:22:29.475066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.856 [2024-10-14 14:22:29.483616] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.856 [2024-10-14 14:22:29.483631] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.856 [2024-10-14 14:22:29.492172] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.856 [2024-10-14 14:22:29.492187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.856 [2024-10-14 14:22:29.501081] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.856 [2024-10-14 14:22:29.501096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.856 [2024-10-14 14:22:29.510304] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.856 [2024-10-14 14:22:29.510319] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.856 [2024-10-14 14:22:29.519097] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.856 [2024-10-14 14:22:29.519115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.856 [2024-10-14 14:22:29.527759] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.856 [2024-10-14 14:22:29.527773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.856 [2024-10-14 14:22:29.536405] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.856 [2024-10-14 14:22:29.536420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.856 [2024-10-14 14:22:29.545611] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.856 [2024-10-14 14:22:29.545626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.856 [2024-10-14 14:22:29.555035] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.856 [2024-10-14 14:22:29.555049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:48.856 [2024-10-14 14:22:29.563958] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.856 [2024-10-14 14:22:29.563972] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.856 [2024-10-14 14:22:29.572568] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.856 [2024-10-14 14:22:29.572583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.856 [2024-10-14 14:22:29.581364] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.856 [2024-10-14 14:22:29.581378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.118 [2024-10-14 14:22:29.589663] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.118 [2024-10-14 14:22:29.589678] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.118 [2024-10-14 14:22:29.598585] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.118 [2024-10-14 14:22:29.598598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.118 [2024-10-14 14:22:29.607108] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.118 [2024-10-14 14:22:29.607122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.118 [2024-10-14 14:22:29.616402] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.118 [2024-10-14 14:22:29.616416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.118 [2024-10-14 14:22:29.624346] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.118 [2024-10-14 14:22:29.624360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.118 [2024-10-14 14:22:29.633307] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.118 [2024-10-14 14:22:29.633321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.118 [2024-10-14 14:22:29.641745] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.118 [2024-10-14 14:22:29.641760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.118 [2024-10-14 14:22:29.650612] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.118 [2024-10-14 14:22:29.650626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.118 [2024-10-14 14:22:29.659741] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.118 [2024-10-14 14:22:29.659754] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.118 [2024-10-14 14:22:29.668955] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.118 [2024-10-14 14:22:29.668969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.118 [2024-10-14 14:22:29.676833] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.118 [2024-10-14 14:22:29.676847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.118 [2024-10-14 14:22:29.685942] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.118 [2024-10-14 14:22:29.685956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.118 [2024-10-14 14:22:29.694277] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.118 [2024-10-14 14:22:29.694291] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.118 [2024-10-14 14:22:29.702771] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:49.118 [2024-10-14 14:22:29.702785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.118 [2024-10-14 14:22:29.711833] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.118 [2024-10-14 14:22:29.711847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.118 [2024-10-14 14:22:29.720308] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.118 [2024-10-14 14:22:29.720322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.118 [2024-10-14 14:22:29.729024] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.118 [2024-10-14 14:22:29.729038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.118 [2024-10-14 14:22:29.737971] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.118 [2024-10-14 14:22:29.737985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.118 [2024-10-14 14:22:29.747174] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.118 [2024-10-14 14:22:29.747188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.118 [2024-10-14 14:22:29.755770] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.118 [2024-10-14 14:22:29.755785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.118 [2024-10-14 14:22:29.764386] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.118 [2024-10-14 14:22:29.764400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.118 [2024-10-14 14:22:29.772846] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.118 
[2024-10-14 14:22:29.772860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.118 [2024-10-14 14:22:29.781466] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.118 [2024-10-14 14:22:29.781480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.118 [2024-10-14 14:22:29.790341] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.118 [2024-10-14 14:22:29.790355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.118 [2024-10-14 14:22:29.799515] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.118 [2024-10-14 14:22:29.799528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.118 [2024-10-14 14:22:29.808646] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.118 [2024-10-14 14:22:29.808660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.118 [2024-10-14 14:22:29.817533] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.118 [2024-10-14 14:22:29.817547] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.118 [2024-10-14 14:22:29.826060] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.118 [2024-10-14 14:22:29.826078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.118 [2024-10-14 14:22:29.835172] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.118 [2024-10-14 14:22:29.835186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.118 [2024-10-14 14:22:29.843687] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.118 [2024-10-14 14:22:29.843701] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.380 [2024-10-14 14:22:29.852299] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.380 [2024-10-14 14:22:29.852314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.380 [2024-10-14 14:22:29.860811] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.380 [2024-10-14 14:22:29.860825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.380 [2024-10-14 14:22:29.869778] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.380 [2024-10-14 14:22:29.869793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.380 [2024-10-14 14:22:29.878488] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.380 [2024-10-14 14:22:29.878502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.380 [2024-10-14 14:22:29.887246] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.380 [2024-10-14 14:22:29.887260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.380 [2024-10-14 14:22:29.895751] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.380 [2024-10-14 14:22:29.895765] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.380 [2024-10-14 14:22:29.904467] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.380 [2024-10-14 14:22:29.904481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.380 [2024-10-14 14:22:29.913611] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.380 [2024-10-14 14:22:29.913625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:49.380 [2024-10-14 14:22:29.921539] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.380 [2024-10-14 14:22:29.921553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.380 [2024-10-14 14:22:29.930403] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.380 [2024-10-14 14:22:29.930418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.380 [2024-10-14 14:22:29.939342] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.380 [2024-10-14 14:22:29.939357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.380 [2024-10-14 14:22:29.948046] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.380 [2024-10-14 14:22:29.948060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.380 [2024-10-14 14:22:29.957728] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.380 [2024-10-14 14:22:29.957742] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.380 [2024-10-14 14:22:29.965805] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.380 [2024-10-14 14:22:29.965820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.380 [2024-10-14 14:22:29.974092] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.380 [2024-10-14 14:22:29.974106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.380 [2024-10-14 14:22:29.982936] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.380 [2024-10-14 14:22:29.982950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.380 [2024-10-14 14:22:29.991978] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.380 [2024-10-14 14:22:29.991992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.380 [2024-10-14 14:22:30.001098] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.380 [2024-10-14 14:22:30.001113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.380 [2024-10-14 14:22:30.009892] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.380 [2024-10-14 14:22:30.009908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.380 [2024-10-14 14:22:30.018758] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.380 [2024-10-14 14:22:30.018773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.380 [2024-10-14 14:22:30.026753] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.380 [2024-10-14 14:22:30.026767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.380 [2024-10-14 14:22:30.035867] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.380 [2024-10-14 14:22:30.035882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.380 [2024-10-14 14:22:30.045030] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.380 [2024-10-14 14:22:30.045045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.380 [2024-10-14 14:22:30.053458] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.381 [2024-10-14 14:22:30.053472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.381 [2024-10-14 14:22:30.062326] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:49.381 [2024-10-14 14:22:30.062340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.381 [2024-10-14 14:22:30.070870] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.381 [2024-10-14 14:22:30.070884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.381 [2024-10-14 14:22:30.079828] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.381 [2024-10-14 14:22:30.079842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.381 [2024-10-14 14:22:30.088541] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.381 [2024-10-14 14:22:30.088556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.381 [2024-10-14 14:22:30.097511] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.381 [2024-10-14 14:22:30.097525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.381 [2024-10-14 14:22:30.106752] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.381 [2024-10-14 14:22:30.106768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.642 [2024-10-14 14:22:30.115155] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.642 [2024-10-14 14:22:30.115170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.642 [2024-10-14 14:22:30.123968] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.642 [2024-10-14 14:22:30.123982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.642 [2024-10-14 14:22:30.132957] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.642 
[2024-10-14 14:22:30.132971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.642 [2024-10-14 14:22:30.141652] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.642 [2024-10-14 14:22:30.141666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.642 [2024-10-14 14:22:30.150357] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.642 [2024-10-14 14:22:30.150371] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.642 [2024-10-14 14:22:30.159231] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.642 [2024-10-14 14:22:30.159245] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.642 [2024-10-14 14:22:30.167773] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.642 [2024-10-14 14:22:30.167787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.642 [2024-10-14 14:22:30.176653] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.642 [2024-10-14 14:22:30.176667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.642 [2024-10-14 14:22:30.184951] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.642 [2024-10-14 14:22:30.184965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.642 [2024-10-14 14:22:30.193437] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.642 [2024-10-14 14:22:30.193451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.642 [2024-10-14 14:22:30.201980] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.642 [2024-10-14 14:22:30.201994] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.642 [2024-10-14 14:22:30.210552] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.642 [2024-10-14 14:22:30.210567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.642 [2024-10-14 14:22:30.219514] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.642 [2024-10-14 14:22:30.219527] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.642 [2024-10-14 14:22:30.227993] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.642 [2024-10-14 14:22:30.228007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.642 [2024-10-14 14:22:30.236648] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.642 [2024-10-14 14:22:30.236662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.642 [2024-10-14 14:22:30.245390] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.642 [2024-10-14 14:22:30.245404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.642 [2024-10-14 14:22:30.254187] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.642 [2024-10-14 14:22:30.254202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.642 [2024-10-14 14:22:30.262863] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.642 [2024-10-14 14:22:30.262877] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.642 [2024-10-14 14:22:30.271798] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.642 [2024-10-14 14:22:30.271813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:49.642 [2024-10-14 14:22:30.280257] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.642 [2024-10-14 14:22:30.280271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.642 [2024-10-14 14:22:30.289187] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.642 [2024-10-14 14:22:30.289201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.643 [2024-10-14 14:22:30.297700] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.643 [2024-10-14 14:22:30.297714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.643 [2024-10-14 14:22:30.306117] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.643 [2024-10-14 14:22:30.306132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.643 [2024-10-14 14:22:30.314667] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.643 [2024-10-14 14:22:30.314682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.643 19208.00 IOPS, 150.06 MiB/s [2024-10-14T12:22:30.370Z] [2024-10-14 14:22:30.323680] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.643 [2024-10-14 14:22:30.323694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.643 [2024-10-14 14:22:30.329285] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.643 [2024-10-14 14:22:30.329299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.643 00:08:49.643 Latency(us) 00:08:49.643 [2024-10-14T12:22:30.370Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:49.643 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, 
depth: 128, IO size: 8192) 00:08:49.643 Nvme1n1 : 5.01 19209.42 150.07 0.00 0.00 6657.06 2484.91 14854.83 00:08:49.643 [2024-10-14T12:22:30.370Z] =================================================================================================================== 00:08:49.643 [2024-10-14T12:22:30.370Z] Total : 19209.42 150.07 0.00 0.00 6657.06 2484.91 14854.83 00:08:49.643 [2024-10-14 14:22:30.337318] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.643 [2024-10-14 14:22:30.337329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.643 [2024-10-14 14:22:30.345323] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.643 [2024-10-14 14:22:30.345333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.643 [2024-10-14 14:22:30.353345] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.643 [2024-10-14 14:22:30.353354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.643 [2024-10-14 14:22:30.361365] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.643 [2024-10-14 14:22:30.361376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.643 [2024-10-14 14:22:30.369385] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.643 [2024-10-14 14:22:30.369394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.904 [2024-10-14 14:22:30.377404] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.904 [2024-10-14 14:22:30.377413] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.904 [2024-10-14 14:22:30.385423] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.904 [2024-10-14 14:22:30.385432] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.904 [2024-10-14 14:22:30.393443] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.904 [2024-10-14 14:22:30.393451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.904 [2024-10-14 14:22:30.401463] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.904 [2024-10-14 14:22:30.401471] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.904 [2024-10-14 14:22:30.409484] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.904 [2024-10-14 14:22:30.409492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.904 [2024-10-14 14:22:30.417507] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.904 [2024-10-14 14:22:30.417514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.904 [2024-10-14 14:22:30.425528] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.904 [2024-10-14 14:22:30.425537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.904 [2024-10-14 14:22:30.433546] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.904 [2024-10-14 14:22:30.433553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.904 [2024-10-14 14:22:30.441569] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.904 [2024-10-14 14:22:30.441578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.904 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3234165) - No such process 00:08:49.904 14:22:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 
3234165 00:08:49.904 14:22:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:49.904 14:22:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.904 14:22:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:49.904 14:22:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.904 14:22:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:49.904 14:22:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.905 14:22:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:49.905 delay0 00:08:49.905 14:22:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.905 14:22:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:08:49.905 14:22:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.905 14:22:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:49.905 14:22:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.905 14:22:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:08:49.905 [2024-10-14 14:22:30.532785] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:58.047 Initializing NVMe Controllers 00:08:58.047 Attached to NVMe 
over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:58.047 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:58.047 Initialization complete. Launching workers. 00:08:58.047 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 232, failed: 33585 00:08:58.047 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 33697, failed to submit 120 00:08:58.047 success 33612, unsuccessful 85, failed 0 00:08:58.047 14:22:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:08:58.047 14:22:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:08:58.047 14:22:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:58.047 14:22:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:08:58.047 14:22:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:58.047 14:22:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:08:58.047 14:22:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:58.047 14:22:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:58.047 rmmod nvme_tcp 00:08:58.047 rmmod nvme_fabrics 00:08:58.047 rmmod nvme_keyring 00:08:58.047 14:22:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:58.047 14:22:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:08:58.048 14:22:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:08:58.048 14:22:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@515 -- # '[' -n 3231791 ']' 00:08:58.048 14:22:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # killprocess 3231791 00:08:58.048 14:22:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@950 -- # '[' -z 3231791 ']' 00:08:58.048 14:22:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 3231791 00:08:58.048 14:22:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:08:58.048 14:22:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:58.048 14:22:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3231791 00:08:58.048 14:22:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:58.048 14:22:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:58.048 14:22:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3231791' 00:08:58.048 killing process with pid 3231791 00:08:58.048 14:22:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 3231791 00:08:58.048 14:22:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 3231791 00:08:58.048 14:22:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:58.048 14:22:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:58.048 14:22:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:58.048 14:22:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:08:58.048 14:22:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-save 00:08:58.048 14:22:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:58.048 14:22:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-restore 00:08:58.048 14:22:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:58.048 14:22:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:58.048 14:22:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:58.048 14:22:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:58.048 14:22:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:59.433 14:22:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:59.433 00:08:59.433 real 0m33.913s 00:08:59.433 user 0m45.480s 00:08:59.433 sys 0m11.364s 00:08:59.433 14:22:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:59.433 14:22:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:59.433 ************************************ 00:08:59.433 END TEST nvmf_zcopy 00:08:59.433 ************************************ 00:08:59.433 14:22:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:59.433 14:22:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:59.433 14:22:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:59.433 14:22:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:59.433 ************************************ 00:08:59.433 START TEST nvmf_nmic 00:08:59.433 ************************************ 00:08:59.433 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:59.433 * Looking for test storage... 
00:08:59.433 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:59.433 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:59.433 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:08:59.433 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:59.433 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:59.433 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:59.433 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:59.433 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:59.433 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:08:59.433 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:08:59.433 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:08:59.433 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:08:59.433 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:08:59.433 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:08:59.433 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:08:59.433 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:59.433 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:08:59.433 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:08:59.433 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:59.433 14:22:40 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:59.433 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:08:59.433 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:08:59.433 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:59.433 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:08:59.696 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:08:59.696 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:08:59.696 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:08:59.696 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:59.696 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:08:59.696 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:08:59.696 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:59.696 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:59.696 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:08:59.696 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:59.696 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:59.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.696 --rc genhtml_branch_coverage=1 00:08:59.696 --rc genhtml_function_coverage=1 00:08:59.696 --rc genhtml_legend=1 00:08:59.696 --rc geninfo_all_blocks=1 00:08:59.696 --rc geninfo_unexecuted_blocks=1 
00:08:59.696 00:08:59.696 ' 00:08:59.696 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:59.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.696 --rc genhtml_branch_coverage=1 00:08:59.696 --rc genhtml_function_coverage=1 00:08:59.696 --rc genhtml_legend=1 00:08:59.696 --rc geninfo_all_blocks=1 00:08:59.696 --rc geninfo_unexecuted_blocks=1 00:08:59.696 00:08:59.696 ' 00:08:59.696 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:59.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.696 --rc genhtml_branch_coverage=1 00:08:59.696 --rc genhtml_function_coverage=1 00:08:59.696 --rc genhtml_legend=1 00:08:59.696 --rc geninfo_all_blocks=1 00:08:59.696 --rc geninfo_unexecuted_blocks=1 00:08:59.696 00:08:59.696 ' 00:08:59.696 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:59.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.696 --rc genhtml_branch_coverage=1 00:08:59.696 --rc genhtml_function_coverage=1 00:08:59.696 --rc genhtml_legend=1 00:08:59.696 --rc geninfo_all_blocks=1 00:08:59.696 --rc geninfo_unexecuted_blocks=1 00:08:59.696 00:08:59.696 ' 00:08:59.696 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:59.696 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:08:59.696 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:59.696 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:59.696 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:59.696 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:59.696 14:22:40 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:59.696 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:59.696 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:59.696 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:59.696 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:59.696 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:59.696 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:59.696 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:59.696 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:59.696 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:59.696 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:59.696 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:59.696 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:59.696 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:08:59.696 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:59.696 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:59.696 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:59.696 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.696 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.696 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.696 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:08:59.696 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.696 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:08:59.696 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:59.696 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:59.696 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:59.696 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:59.696 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:59.696 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:59.696 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:59.696 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:59.696 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:59.696 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:59.696 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:59.696 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:59.696 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:08:59.696 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:59.696 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:59.696 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:59.696 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:59.696 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:59.696 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:59.696 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:59.696 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:59.696 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:59.696 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:59.696 
14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:08:59.696 14:22:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:07.864 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:07.864 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:09:07.864 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:07.864 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:07.864 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:07.864 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:07.864 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:07.864 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:09:07.864 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:07.864 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:09:07.864 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:09:07.864 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:09:07.864 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:09:07.864 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:09:07.864 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:09:07.864 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:07.864 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:07.864 14:22:47 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:07.864 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:07.864 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:07.864 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:07.864 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:07.864 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:07.864 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:07.864 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:07.864 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:07.864 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:07.864 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:07.864 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:07.864 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:07.864 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:07.864 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:07.864 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:07.864 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:09:07.864 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:07.864 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:07.864 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:07.864 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:07.864 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:07.864 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:07.864 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:07.864 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:07.864 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:07.864 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:07.864 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:07.864 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:07.864 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:07.864 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:07.864 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:07.864 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:07.864 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:07.864 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:07.865 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 
00:09:07.865 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:07.865 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:07.865 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:07.865 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:07.865 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:07.865 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:07.865 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:07.865 Found net devices under 0000:31:00.0: cvl_0_0 00:09:07.865 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:07.865 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:07.865 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:07.865 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:07.865 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:07.865 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:07.865 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:07.865 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:07.865 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:07.865 Found net devices under 0000:31:00.1: cvl_0_1 00:09:07.865 
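The discovery loop above resolves each matched PCI function to its kernel net interfaces via sysfs. A minimal sketch (not SPDK's actual nvmf/common.sh) of that step follows; the sysfs root is a parameter so the logic can be exercised against a fake tree, and the PCI addresses in the usage note are the ones from this log.

```shell
# Sketch of the per-PCI net-device lookup: interfaces for a PCI function
# live under <sysfs>/<addr>/net/. Returns 1 when no interface is bound.
list_pci_net_devs() {
  local pci=$1 sysfs=${2:-/sys/bus/pci/devices}
  local devs=( "$sysfs/$pci/net/"* )
  # An unmatched glob stays literal and fails the -e test.
  [ -e "${devs[0]}" ] || return 1
  # Strip the directory prefix, keeping only the interface names.
  printf '%s\n' "${devs[@]##*/}"
}
```

On the machine in this log, `list_pci_net_devs 0000:31:00.0` would print `cvl_0_0` and `list_pci_net_devs 0000:31:00.1` would print `cvl_0_1`.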
14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:07.865 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:07.865 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # is_hw=yes 00:09:07.865 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:07.865 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:07.865 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:07.865 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:07.865 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:07.865 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:07.865 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:07.865 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:07.865 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:07.865 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:07.865 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:07.865 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:07.865 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:07.865 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:07.865 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:09:07.865 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:07.865 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:07.865 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:07.865 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:07.865 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:07.865 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:07.865 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:07.865 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:07.865 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:07.865 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:07.865 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:07.865 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:07.865 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.632 ms 00:09:07.865 00:09:07.865 --- 10.0.0.2 ping statistics --- 00:09:07.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:07.865 rtt min/avg/max/mdev = 0.632/0.632/0.632/0.000 ms 00:09:07.865 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:07.865 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:07.865 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:09:07.865 00:09:07.865 --- 10.0.0.1 ping statistics --- 00:09:07.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:07.865 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:09:07.865 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:07.865 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # return 0 00:09:07.865 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:07.865 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:07.865 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:07.865 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:07.865 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:07.865 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:07.865 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:07.865 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:07.865 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:07.865 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:07.865 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:07.865 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # nvmfpid=3240920 00:09:07.865 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # waitforlisten 3240920 00:09:07.865 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@506 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:07.865 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 3240920 ']' 00:09:07.865 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:07.865 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:07.865 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:07.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:07.865 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:07.865 14:22:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:07.865 [2024-10-14 14:22:47.723280] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:09:07.865 [2024-10-14 14:22:47.723348] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:07.865 [2024-10-14 14:22:47.797810] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:07.865 [2024-10-14 14:22:47.843441] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:07.865 [2024-10-14 14:22:47.843482] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
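The nvmf_tcp_init sequence above builds a two-interface topology: the target NIC is moved into a private network namespace and addressed as 10.0.0.2, while the initiator NIC stays in the default namespace as 10.0.0.1, with an iptables rule opening the NVMe/TCP port. A condensed sketch of those steps, not SPDK's actual helper, is below; DRYRUN=1 (the default here) prints each command instead of executing it, since the real steps need root and the cvl_0_* interfaces.

```shell
# Dry-run sketch of the netns topology from the log above.
DRYRUN=${DRYRUN:-1}
run() { if [ "${DRYRUN}" = 1 ]; then echo "+ $*"; else "$@"; fi; }

ns=cvl_0_0_ns_spdk target_if=cvl_0_0 initiator_if=cvl_0_1
run ip -4 addr flush "$target_if"
run ip -4 addr flush "$initiator_if"
run ip netns add "$ns"
run ip link set "$target_if" netns "$ns"          # target NIC into the netns
run ip addr add 10.0.0.1/24 dev "$initiator_if"   # initiator side
run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"  # target side
run ip link set "$initiator_if" up
run ip netns exec "$ns" ip link set "$target_if" up
run ip netns exec "$ns" ip link set lo up
run iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                            # initiator -> target
run ip netns exec "$ns" ping -c 1 10.0.0.1        # target -> initiator
```

With the topology in place, nvmf_tgt is launched inside the namespace (`ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt ...`, as the log shows), so its listeners bind to the target-side interface.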
00:09:07.865 [2024-10-14 14:22:47.843490] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:07.865 [2024-10-14 14:22:47.843500] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:07.865 [2024-10-14 14:22:47.843506] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:07.865 [2024-10-14 14:22:47.845168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:07.865 [2024-10-14 14:22:47.845291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:07.865 [2024-10-14 14:22:47.845454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.865 [2024-10-14 14:22:47.845454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:07.865 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:07.865 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:09:07.865 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:07.865 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:07.865 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:07.865 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:07.865 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:07.865 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.865 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:07.865 [2024-10-14 14:22:48.576972] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:07.865 
14:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.865 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:07.865 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.865 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:08.127 Malloc0 00:09:08.127 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.127 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:08.127 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.127 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:08.127 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.127 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:08.127 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.127 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:08.127 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.127 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:08.127 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.127 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:08.127 [2024-10-14 14:22:48.649367] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:08.127 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.127 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:08.127 test case1: single bdev can't be used in multiple subsystems 00:09:08.127 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:08.127 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.127 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:08.127 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.127 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:08.127 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.127 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:08.127 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.127 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:08.127 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:08.127 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.127 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:08.127 [2024-10-14 14:22:48.685292] bdev.c:8202:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:08.127 [2024-10-14 
14:22:48.685311] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:08.127 [2024-10-14 14:22:48.685318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.127 request: 00:09:08.127 { 00:09:08.127 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:08.127 "namespace": { 00:09:08.127 "bdev_name": "Malloc0", 00:09:08.127 "no_auto_visible": false 00:09:08.127 }, 00:09:08.127 "method": "nvmf_subsystem_add_ns", 00:09:08.127 "req_id": 1 00:09:08.127 } 00:09:08.127 Got JSON-RPC error response 00:09:08.127 response: 00:09:08.127 { 00:09:08.127 "code": -32602, 00:09:08.127 "message": "Invalid parameters" 00:09:08.127 } 00:09:08.127 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:08.127 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:08.127 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:08.127 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:08.127 Adding namespace failed - expected result. 
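The expected failure above comes from bdev claiming: the first `nvmf_subsystem_add_ns` claims Malloc0 `exclusive_write`, so a second subsystem cannot open it and the RPC returns "Invalid parameters". A toy model of that rule in plain bash (not SPDK code) follows:

```shell
# Toy model of exclusive bdev claims: the first subsystem to add a bdev as a
# namespace owns it; any later add against the same bdev fails.
declare -A bdev_claims
add_ns() { # add_ns <subsystem-nqn> <bdev>
  local subsys=$1 bdev=$2
  if [ -n "${bdev_claims[$bdev]:-}" ]; then
    echo "bdev $bdev already claimed by ${bdev_claims[$bdev]}" >&2
    return 1
  fi
  bdev_claims[$bdev]=$subsys
}

add_ns nqn.2016-06.io.spdk:cnode1 Malloc0              # first claim succeeds
add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 \
  || echo 'Adding namespace failed - expected result.'
```

Sharing one bdev across paths is instead done the way test case2 does it below in the log: a second listener (port 4421) on the same subsystem, not a second subsystem on the same bdev.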
00:09:08.127 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:08.127 test case2: host connect to nvmf target in multiple paths 00:09:08.127 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:08.127 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.127 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:08.127 [2024-10-14 14:22:48.697446] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:08.127 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.127 14:22:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:09.514 14:22:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:11.429 14:22:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:11.429 14:22:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:09:11.429 14:22:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:11.429 14:22:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:11.429 14:22:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 
00:09:13.340 14:22:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:13.340 14:22:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:13.340 14:22:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:13.340 14:22:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:13.340 14:22:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:13.340 14:22:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:09:13.340 14:22:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:13.340 [global] 00:09:13.340 thread=1 00:09:13.340 invalidate=1 00:09:13.340 rw=write 00:09:13.340 time_based=1 00:09:13.340 runtime=1 00:09:13.340 ioengine=libaio 00:09:13.340 direct=1 00:09:13.340 bs=4096 00:09:13.340 iodepth=1 00:09:13.340 norandommap=0 00:09:13.340 numjobs=1 00:09:13.340 00:09:13.340 verify_dump=1 00:09:13.340 verify_backlog=512 00:09:13.340 verify_state_save=0 00:09:13.340 do_verify=1 00:09:13.340 verify=crc32c-intel 00:09:13.340 [job0] 00:09:13.340 filename=/dev/nvme0n1 00:09:13.340 Could not set queue depth (nvme0n1) 00:09:13.601 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:13.601 fio-3.35 00:09:13.601 Starting 1 thread 00:09:14.544 00:09:14.544 job0: (groupid=0, jobs=1): err= 0: pid=3242463: Mon Oct 14 14:22:55 2024 00:09:14.544 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:09:14.544 slat (nsec): min=27084, max=64085, avg=27911.23, stdev=2818.57 00:09:14.544 clat (usec): min=646, max=1219, avg=933.84, stdev=84.89 00:09:14.545 lat (usec): min=674, max=1246, 
avg=961.75, stdev=84.77 00:09:14.545 clat percentiles (usec): 00:09:14.545 | 1.00th=[ 709], 5.00th=[ 799], 10.00th=[ 824], 20.00th=[ 881], 00:09:14.545 | 30.00th=[ 906], 40.00th=[ 922], 50.00th=[ 930], 60.00th=[ 947], 00:09:14.545 | 70.00th=[ 963], 80.00th=[ 996], 90.00th=[ 1057], 95.00th=[ 1074], 00:09:14.545 | 99.00th=[ 1156], 99.50th=[ 1172], 99.90th=[ 1221], 99.95th=[ 1221], 00:09:14.545 | 99.99th=[ 1221] 00:09:14.545 write: IOPS=769, BW=3077KiB/s (3151kB/s)(3080KiB/1001msec); 0 zone resets 00:09:14.545 slat (usec): min=9, max=26411, avg=64.95, stdev=950.76 00:09:14.545 clat (usec): min=269, max=797, avg=582.27, stdev=98.29 00:09:14.545 lat (usec): min=278, max=27014, avg=647.21, stdev=957.12 00:09:14.545 clat percentiles (usec): 00:09:14.545 | 1.00th=[ 343], 5.00th=[ 400], 10.00th=[ 437], 20.00th=[ 494], 00:09:14.545 | 30.00th=[ 537], 40.00th=[ 570], 50.00th=[ 586], 60.00th=[ 611], 00:09:14.545 | 70.00th=[ 652], 80.00th=[ 668], 90.00th=[ 701], 95.00th=[ 725], 00:09:14.545 | 99.00th=[ 758], 99.50th=[ 783], 99.90th=[ 799], 99.95th=[ 799], 00:09:14.545 | 99.99th=[ 799] 00:09:14.545 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:09:14.545 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:14.545 lat (usec) : 500=12.87%, 750=47.11%, 1000=32.22% 00:09:14.545 lat (msec) : 2=7.80% 00:09:14.545 cpu : usr=3.40%, sys=4.20%, ctx=1286, majf=0, minf=1 00:09:14.545 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:14.545 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:14.545 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:14.545 issued rwts: total=512,770,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:14.545 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:14.545 00:09:14.545 Run status group 0 (all jobs): 00:09:14.545 READ: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), 
run=1001-1001msec 00:09:14.545 WRITE: bw=3077KiB/s (3151kB/s), 3077KiB/s-3077KiB/s (3151kB/s-3151kB/s), io=3080KiB (3154kB), run=1001-1001msec 00:09:14.545 00:09:14.545 Disk stats (read/write): 00:09:14.545 nvme0n1: ios=540/607, merge=0/0, ticks=1427/289, in_queue=1716, util=98.80% 00:09:14.545 14:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:14.805 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:14.805 14:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:14.805 14:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:09:14.805 14:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:14.805 14:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:14.805 14:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:14.805 14:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:14.805 14:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:09:14.805 14:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:14.805 14:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:14.805 14:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:14.805 14:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:14.805 14:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:14.805 14:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:14.805 14:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- 
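As a sanity check on the fio summary above: the reported bandwidth is simply total io divided by runtime. With io=2048KiB read and 3080KiB written over the 1001 ms run, integer arithmetic with rounding reproduces fio's 2046 and 3077 KiB/s figures.

```shell
# Recompute fio's KiB/s bandwidth from the io totals and runtime in the log.
read_kib=2048 write_kib=3080 runtime_ms=1001
# Round to nearest by adding half the divisor before integer division.
read_bw=$(( (read_kib * 1000 + runtime_ms / 2) / runtime_ms ))
write_bw=$(( (write_kib * 1000 + runtime_ms / 2) / runtime_ms ))
echo "READ: ${read_bw} KiB/s, WRITE: ${write_bw} KiB/s"
# -> READ: 2046 KiB/s, WRITE: 3077 KiB/s
```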
# for i in {1..20} 00:09:14.805 14:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:14.805 rmmod nvme_tcp 00:09:14.805 rmmod nvme_fabrics 00:09:14.805 rmmod nvme_keyring 00:09:15.066 14:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:15.066 14:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:15.066 14:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:15.066 14:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@515 -- # '[' -n 3240920 ']' 00:09:15.066 14:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # killprocess 3240920 00:09:15.066 14:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 3240920 ']' 00:09:15.066 14:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 3240920 00:09:15.066 14:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:09:15.066 14:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:15.066 14:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3240920 00:09:15.066 14:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:15.066 14:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:15.066 14:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3240920' 00:09:15.066 killing process with pid 3240920 00:09:15.066 14:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 3240920 00:09:15.066 14:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 3240920 00:09:15.066 14:22:55 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:15.066 14:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:15.066 14:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:15.066 14:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:15.066 14:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-save 00:09:15.066 14:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:15.066 14:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-restore 00:09:15.066 14:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:15.066 14:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:15.066 14:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:15.066 14:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:15.066 14:22:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:17.614 14:22:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:17.614 00:09:17.614 real 0m17.830s 00:09:17.614 user 0m49.643s 00:09:17.614 sys 0m6.441s 00:09:17.614 14:22:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:17.614 14:22:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:17.614 ************************************ 00:09:17.614 END TEST nvmf_nmic 00:09:17.614 ************************************ 00:09:17.614 14:22:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:17.614 14:22:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:17.614 14:22:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:17.614 14:22:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:17.614 ************************************ 00:09:17.614 START TEST nvmf_fio_target 00:09:17.614 ************************************ 00:09:17.614 14:22:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:17.614 * Looking for test storage... 00:09:17.614 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:17.614 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:17.614 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:09:17.614 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:17.614 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:17.614 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:17.614 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:17.614 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:17.614 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:17.614 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:17.614 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 
00:09:17.614 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:17.614 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:17.614 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:17.614 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:17.614 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:17.614 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:17.614 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:17.614 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:17.614 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:17.614 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:17.614 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:17.614 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:17.614 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:17.614 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:17.614 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:17.614 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:17.614 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:17.614 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:17.614 14:22:58 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:17.614 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:17.614 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:17.614 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:17.614 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:17.614 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:17.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.614 --rc genhtml_branch_coverage=1 00:09:17.614 --rc genhtml_function_coverage=1 00:09:17.614 --rc genhtml_legend=1 00:09:17.614 --rc geninfo_all_blocks=1 00:09:17.614 --rc geninfo_unexecuted_blocks=1 00:09:17.614 00:09:17.614 ' 00:09:17.614 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:17.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.614 --rc genhtml_branch_coverage=1 00:09:17.614 --rc genhtml_function_coverage=1 00:09:17.614 --rc genhtml_legend=1 00:09:17.614 --rc geninfo_all_blocks=1 00:09:17.614 --rc geninfo_unexecuted_blocks=1 00:09:17.614 00:09:17.614 ' 00:09:17.614 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:17.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.615 --rc genhtml_branch_coverage=1 00:09:17.615 --rc genhtml_function_coverage=1 00:09:17.615 --rc genhtml_legend=1 00:09:17.615 --rc geninfo_all_blocks=1 00:09:17.615 --rc geninfo_unexecuted_blocks=1 00:09:17.615 00:09:17.615 ' 00:09:17.615 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 
00:09:17.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.615 --rc genhtml_branch_coverage=1 00:09:17.615 --rc genhtml_function_coverage=1 00:09:17.615 --rc genhtml_legend=1 00:09:17.615 --rc geninfo_all_blocks=1 00:09:17.615 --rc geninfo_unexecuted_blocks=1 00:09:17.615 00:09:17.615 ' 00:09:17.615 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:17.615 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:09:17.615 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:17.615 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:17.615 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:17.615 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:17.615 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:17.615 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:17.615 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:17.615 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:17.615 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:17.615 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:17.615 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:17.615 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:17.615 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:17.615 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:17.615 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:17.615 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:17.615 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:17.615 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:17.615 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:17.615 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:17.615 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:17.615 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.615 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.615 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.615 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:17.615 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.615 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:17.615 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:17.615 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:17.615 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:17.615 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:17.615 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:17.615 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:17.615 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:17.615 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:17.615 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:17.615 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:17.615 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:17.615 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:17.615 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:17.615 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:17.615 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:17.615 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:17.615 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:17.615 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:17.615 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:17.615 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:17.615 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:17.615 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:17.615 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:17.615 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:17.615 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:09:17.615 14:22:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:25.758 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:25.758 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:09:25.758 14:23:05 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:25.758 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:25.758 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:25.758 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:25.758 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:25.758 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:09:25.758 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:25.758 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:09:25.758 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:09:25.758 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:09:25.758 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:09:25.758 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:09:25.758 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:09:25.758 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:25.758 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:25.758 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:25.758 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:25.758 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:25.758 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:25.758 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:25.758 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:25.758 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:25.758 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:25.758 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:25.758 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:25.758 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:25.758 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:25.758 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:25.758 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:25.758 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:25.758 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:25.758 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:25.758 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:25.758 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:25.758 14:23:05 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:25.758 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:25.758 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:25.758 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:25.758 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:25.758 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:25.758 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:25.758 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:25.758 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:25.758 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:25.758 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:25.758 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:25.758 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:25.758 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:25.758 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:25.758 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:25.758 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:25.758 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:25.758 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:25.758 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:25.758 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:25.758 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:25.758 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:25.758 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:25.758 Found net devices under 0000:31:00.0: cvl_0_0 00:09:25.758 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:25.758 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:25.758 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:25.758 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:25.758 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:25.759 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:25.759 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:25.759 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:25.759 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:25.759 Found net devices under 0000:31:00.1: cvl_0_1 
00:09:25.759 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:25.759 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:25.759 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # is_hw=yes 00:09:25.759 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:25.759 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:25.759 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:25.759 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:25.759 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:25.759 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:25.759 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:25.759 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:25.759 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:25.759 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:25.759 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:25.759 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:25.759 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:25.759 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:09:25.759 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:25.759 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:25.759 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:25.759 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:25.759 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:25.759 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:25.759 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:25.759 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:25.759 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:25.759 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:25.759 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:25.759 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:25.759 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:25.759 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.496 ms 00:09:25.759 00:09:25.759 --- 10.0.0.2 ping statistics --- 00:09:25.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:25.759 rtt min/avg/max/mdev = 0.496/0.496/0.496/0.000 ms 00:09:25.759 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:25.759 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:25.759 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.266 ms 00:09:25.759 00:09:25.759 --- 10.0.0.1 ping statistics --- 00:09:25.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:25.759 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:09:25.759 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:25.759 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # return 0 00:09:25.759 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:25.759 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:25.759 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:25.759 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:25.759 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:25.759 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:25.759 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:25.759 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:25.759 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 
00:09:25.759 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:25.759 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:25.759 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # nvmfpid=3247163 00:09:25.759 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # waitforlisten 3247163 00:09:25.759 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:25.759 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 3247163 ']' 00:09:25.759 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:25.759 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:25.759 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:25.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:25.759 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:25.759 14:23:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:25.759 [2024-10-14 14:23:05.630364] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 
00:09:25.759 [2024-10-14 14:23:05.630430] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:25.759 [2024-10-14 14:23:05.703821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:25.759 [2024-10-14 14:23:05.746950] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:25.759 [2024-10-14 14:23:05.746986] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:25.759 [2024-10-14 14:23:05.746994] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:25.759 [2024-10-14 14:23:05.747000] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:25.759 [2024-10-14 14:23:05.747006] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:25.759 [2024-10-14 14:23:05.748655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:25.759 [2024-10-14 14:23:05.748778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:25.759 [2024-10-14 14:23:05.748942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.759 [2024-10-14 14:23:05.748942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:25.759 14:23:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:25.759 14:23:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:09:25.759 14:23:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:25.759 14:23:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:25.759 14:23:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:25.759 14:23:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:25.759 14:23:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:26.020 [2024-10-14 14:23:06.636691] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:26.020 14:23:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:26.281 14:23:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:26.281 14:23:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:26.542 14:23:07 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:26.542 14:23:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:26.542 14:23:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:26.542 14:23:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:26.802 14:23:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:26.802 14:23:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:27.063 14:23:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:27.323 14:23:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:27.323 14:23:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:27.323 14:23:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:27.323 14:23:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:27.583 14:23:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:27.583 14:23:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:09:27.843 14:23:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:27.843 14:23:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:27.843 14:23:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:28.104 14:23:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:28.104 14:23:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:28.366 14:23:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:28.366 [2024-10-14 14:23:09.062757] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:28.366 14:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:28.626 14:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:28.887 14:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:09:30.375 14:23:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:30.375 14:23:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:09:30.375 14:23:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:30.375 14:23:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:09:30.375 14:23:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:09:30.375 14:23:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:09:32.348 14:23:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:32.348 14:23:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:32.348 14:23:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:32.348 14:23:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:09:32.348 14:23:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:32.348 14:23:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:09:32.348 14:23:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:32.348 [global] 00:09:32.348 thread=1 00:09:32.348 invalidate=1 00:09:32.348 rw=write 00:09:32.348 time_based=1 00:09:32.348 runtime=1 00:09:32.348 ioengine=libaio 00:09:32.348 direct=1 00:09:32.348 bs=4096 00:09:32.348 iodepth=1 00:09:32.348 norandommap=0 00:09:32.348 numjobs=1 00:09:32.348 00:09:32.348 
verify_dump=1 00:09:32.348 verify_backlog=512 00:09:32.348 verify_state_save=0 00:09:32.348 do_verify=1 00:09:32.348 verify=crc32c-intel 00:09:32.348 [job0] 00:09:32.348 filename=/dev/nvme0n1 00:09:32.348 [job1] 00:09:32.348 filename=/dev/nvme0n2 00:09:32.348 [job2] 00:09:32.348 filename=/dev/nvme0n3 00:09:32.348 [job3] 00:09:32.348 filename=/dev/nvme0n4 00:09:32.632 Could not set queue depth (nvme0n1) 00:09:32.632 Could not set queue depth (nvme0n2) 00:09:32.632 Could not set queue depth (nvme0n3) 00:09:32.632 Could not set queue depth (nvme0n4) 00:09:32.900 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:32.900 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:32.900 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:32.900 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:32.900 fio-3.35 00:09:32.900 Starting 4 threads 00:09:34.286 00:09:34.286 job0: (groupid=0, jobs=1): err= 0: pid=3248811: Mon Oct 14 14:23:14 2024 00:09:34.286 read: IOPS=23, BW=95.1KiB/s (97.4kB/s)(96.0KiB/1009msec) 00:09:34.286 slat (nsec): min=8314, max=29090, avg=22636.96, stdev=7100.42 00:09:34.286 clat (usec): min=634, max=42005, avg=28276.29, stdev=19528.76 00:09:34.286 lat (usec): min=642, max=42031, avg=28298.93, stdev=19533.64 00:09:34.286 clat percentiles (usec): 00:09:34.286 | 1.00th=[ 635], 5.00th=[ 816], 10.00th=[ 857], 20.00th=[ 1172], 00:09:34.286 | 30.00th=[ 2900], 40.00th=[41157], 50.00th=[41681], 60.00th=[41681], 00:09:34.286 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:09:34.286 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:34.286 | 99.99th=[42206] 00:09:34.286 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:09:34.286 slat (nsec): min=9988, max=53438, 
avg=31378.89, stdev=8781.71 00:09:34.286 clat (usec): min=219, max=3440, avg=604.16, stdev=210.27 00:09:34.286 lat (usec): min=234, max=3474, avg=635.54, stdev=212.02 00:09:34.286 clat percentiles (usec): 00:09:34.286 | 1.00th=[ 330], 5.00th=[ 383], 10.00th=[ 437], 20.00th=[ 486], 00:09:34.286 | 30.00th=[ 523], 40.00th=[ 570], 50.00th=[ 603], 60.00th=[ 627], 00:09:34.286 | 70.00th=[ 676], 80.00th=[ 709], 90.00th=[ 742], 95.00th=[ 783], 00:09:34.286 | 99.00th=[ 840], 99.50th=[ 865], 99.90th=[ 3458], 99.95th=[ 3458], 00:09:34.286 | 99.99th=[ 3458] 00:09:34.286 bw ( KiB/s): min= 4096, max= 4096, per=50.50%, avg=4096.00, stdev= 0.00, samples=1 00:09:34.286 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:34.286 lat (usec) : 250=0.37%, 500=22.20%, 750=64.93%, 1000=8.40% 00:09:34.286 lat (msec) : 2=0.56%, 4=0.56%, 50=2.99% 00:09:34.286 cpu : usr=1.39%, sys=0.99%, ctx=538, majf=0, minf=1 00:09:34.286 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:34.286 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.286 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.286 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:34.286 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:34.286 job1: (groupid=0, jobs=1): err= 0: pid=3248812: Mon Oct 14 14:23:14 2024 00:09:34.286 read: IOPS=57, BW=232KiB/s (237kB/s)(232KiB/1001msec) 00:09:34.286 slat (nsec): min=7165, max=36352, avg=23293.36, stdev=7401.82 00:09:34.286 clat (usec): min=645, max=42087, avg=11394.76, stdev=17801.31 00:09:34.286 lat (usec): min=669, max=42114, avg=11418.05, stdev=17802.75 00:09:34.286 clat percentiles (usec): 00:09:34.286 | 1.00th=[ 644], 5.00th=[ 775], 10.00th=[ 816], 20.00th=[ 889], 00:09:34.286 | 30.00th=[ 922], 40.00th=[ 971], 50.00th=[ 988], 60.00th=[ 1004], 00:09:34.286 | 70.00th=[ 1156], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:09:34.286 | 
99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:34.286 | 99.99th=[42206] 00:09:34.286 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:09:34.286 slat (nsec): min=9808, max=68583, avg=32938.40, stdev=9064.82 00:09:34.286 clat (usec): min=230, max=929, avg=619.94, stdev=126.42 00:09:34.286 lat (usec): min=267, max=973, avg=652.88, stdev=129.40 00:09:34.286 clat percentiles (usec): 00:09:34.286 | 1.00th=[ 351], 5.00th=[ 404], 10.00th=[ 449], 20.00th=[ 506], 00:09:34.286 | 30.00th=[ 553], 40.00th=[ 594], 50.00th=[ 627], 60.00th=[ 660], 00:09:34.286 | 70.00th=[ 701], 80.00th=[ 734], 90.00th=[ 783], 95.00th=[ 807], 00:09:34.286 | 99.00th=[ 889], 99.50th=[ 922], 99.90th=[ 930], 99.95th=[ 930], 00:09:34.286 | 99.99th=[ 930] 00:09:34.286 bw ( KiB/s): min= 4096, max= 4096, per=50.50%, avg=4096.00, stdev= 0.00, samples=1 00:09:34.286 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:34.286 lat (usec) : 250=0.18%, 500=17.19%, 750=58.07%, 1000=20.35% 00:09:34.286 lat (msec) : 2=1.40%, 4=0.18%, 50=2.63% 00:09:34.286 cpu : usr=1.30%, sys=2.10%, ctx=571, majf=0, minf=1 00:09:34.286 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:34.286 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.286 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.286 issued rwts: total=58,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:34.286 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:34.286 job2: (groupid=0, jobs=1): err= 0: pid=3248813: Mon Oct 14 14:23:14 2024 00:09:34.286 read: IOPS=18, BW=75.2KiB/s (77.1kB/s)(76.0KiB/1010msec) 00:09:34.286 slat (nsec): min=10474, max=29773, avg=27755.00, stdev=4199.63 00:09:34.286 clat (usec): min=813, max=42025, avg=35247.44, stdev=15244.80 00:09:34.286 lat (usec): min=824, max=42054, avg=35275.19, stdev=15247.02 00:09:34.286 clat percentiles (usec): 00:09:34.286 | 1.00th=[ 816], 
5.00th=[ 816], 10.00th=[ 971], 20.00th=[41157], 00:09:34.286 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[41681], 00:09:34.286 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:09:34.286 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:34.286 | 99.99th=[42206] 00:09:34.286 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:09:34.286 slat (nsec): min=9783, max=58537, avg=33364.97, stdev=9846.47 00:09:34.286 clat (usec): min=241, max=967, avg=621.36, stdev=121.82 00:09:34.286 lat (usec): min=253, max=1002, avg=654.73, stdev=125.99 00:09:34.286 clat percentiles (usec): 00:09:34.286 | 1.00th=[ 326], 5.00th=[ 408], 10.00th=[ 461], 20.00th=[ 519], 00:09:34.286 | 30.00th=[ 562], 40.00th=[ 594], 50.00th=[ 627], 60.00th=[ 660], 00:09:34.286 | 70.00th=[ 693], 80.00th=[ 725], 90.00th=[ 775], 95.00th=[ 807], 00:09:34.286 | 99.00th=[ 873], 99.50th=[ 889], 99.90th=[ 971], 99.95th=[ 971], 00:09:34.286 | 99.99th=[ 971] 00:09:34.286 bw ( KiB/s): min= 4096, max= 4096, per=50.50%, avg=4096.00, stdev= 0.00, samples=1 00:09:34.286 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:34.286 lat (usec) : 250=0.19%, 500=16.38%, 750=66.67%, 1000=13.56% 00:09:34.286 lat (msec) : 2=0.19%, 50=3.01% 00:09:34.286 cpu : usr=1.78%, sys=1.39%, ctx=532, majf=0, minf=1 00:09:34.286 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:34.286 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.286 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.286 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:34.286 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:34.286 job3: (groupid=0, jobs=1): err= 0: pid=3248814: Mon Oct 14 14:23:14 2024 00:09:34.286 read: IOPS=15, BW=63.4KiB/s (65.0kB/s)(64.0KiB/1009msec) 00:09:34.286 slat (nsec): min=26893, max=27879, avg=27302.62, stdev=245.51 
00:09:34.286 clat (usec): min=965, max=42093, avg=39317.93, stdev=10230.56 00:09:34.286 lat (usec): min=992, max=42120, avg=39345.24, stdev=10230.48 00:09:34.286 clat percentiles (usec): 00:09:34.286 | 1.00th=[ 963], 5.00th=[ 963], 10.00th=[41157], 20.00th=[41681], 00:09:34.286 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:09:34.286 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:09:34.286 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:34.286 | 99.99th=[42206] 00:09:34.286 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:09:34.286 slat (nsec): min=9613, max=55736, avg=31545.12, stdev=9332.24 00:09:34.286 clat (usec): min=179, max=966, avg=701.39, stdev=129.63 00:09:34.286 lat (usec): min=190, max=1000, avg=732.93, stdev=133.73 00:09:34.286 clat percentiles (usec): 00:09:34.286 | 1.00th=[ 306], 5.00th=[ 482], 10.00th=[ 519], 20.00th=[ 603], 00:09:34.286 | 30.00th=[ 652], 40.00th=[ 685], 50.00th=[ 717], 60.00th=[ 750], 00:09:34.286 | 70.00th=[ 775], 80.00th=[ 816], 90.00th=[ 857], 95.00th=[ 889], 00:09:34.286 | 99.00th=[ 922], 99.50th=[ 947], 99.90th=[ 963], 99.95th=[ 963], 00:09:34.286 | 99.99th=[ 963] 00:09:34.286 bw ( KiB/s): min= 4096, max= 4096, per=50.50%, avg=4096.00, stdev= 0.00, samples=1 00:09:34.286 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:34.286 lat (usec) : 250=0.76%, 500=6.06%, 750=52.46%, 1000=37.88% 00:09:34.286 lat (msec) : 50=2.84% 00:09:34.286 cpu : usr=1.19%, sys=1.88%, ctx=528, majf=0, minf=2 00:09:34.286 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:34.286 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.286 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.286 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:34.286 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:34.286 
00:09:34.286 Run status group 0 (all jobs): 00:09:34.286 READ: bw=463KiB/s (474kB/s), 63.4KiB/s-232KiB/s (65.0kB/s-237kB/s), io=468KiB (479kB), run=1001-1010msec 00:09:34.286 WRITE: bw=8111KiB/s (8306kB/s), 2028KiB/s-2046KiB/s (2076kB/s-2095kB/s), io=8192KiB (8389kB), run=1001-1010msec 00:09:34.286 00:09:34.286 Disk stats (read/write): 00:09:34.286 nvme0n1: ios=35/512, merge=0/0, ticks=1339/295, in_queue=1634, util=83.87% 00:09:34.286 nvme0n2: ios=62/512, merge=0/0, ticks=888/248, in_queue=1136, util=87.95% 00:09:34.286 nvme0n3: ios=75/512, merge=0/0, ticks=590/244, in_queue=834, util=95.03% 00:09:34.286 nvme0n4: ios=68/512, merge=0/0, ticks=497/285, in_queue=782, util=95.82% 00:09:34.286 14:23:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:34.287 [global] 00:09:34.287 thread=1 00:09:34.287 invalidate=1 00:09:34.287 rw=randwrite 00:09:34.287 time_based=1 00:09:34.287 runtime=1 00:09:34.287 ioengine=libaio 00:09:34.287 direct=1 00:09:34.287 bs=4096 00:09:34.287 iodepth=1 00:09:34.287 norandommap=0 00:09:34.287 numjobs=1 00:09:34.287 00:09:34.287 verify_dump=1 00:09:34.287 verify_backlog=512 00:09:34.287 verify_state_save=0 00:09:34.287 do_verify=1 00:09:34.287 verify=crc32c-intel 00:09:34.287 [job0] 00:09:34.287 filename=/dev/nvme0n1 00:09:34.287 [job1] 00:09:34.287 filename=/dev/nvme0n2 00:09:34.287 [job2] 00:09:34.287 filename=/dev/nvme0n3 00:09:34.287 [job3] 00:09:34.287 filename=/dev/nvme0n4 00:09:34.287 Could not set queue depth (nvme0n1) 00:09:34.287 Could not set queue depth (nvme0n2) 00:09:34.287 Could not set queue depth (nvme0n3) 00:09:34.287 Could not set queue depth (nvme0n4) 00:09:34.548 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:34.548 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 
00:09:34.548 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:34.548 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:34.548 fio-3.35 00:09:34.548 Starting 4 threads 00:09:35.932 00:09:35.932 job0: (groupid=0, jobs=1): err= 0: pid=3249338: Mon Oct 14 14:23:16 2024 00:09:35.932 read: IOPS=18, BW=73.9KiB/s (75.6kB/s)(76.0KiB/1029msec) 00:09:35.932 slat (nsec): min=26601, max=27360, avg=26918.79, stdev=196.97 00:09:35.932 clat (usec): min=40841, max=41917, avg=41041.49, stdev=242.17 00:09:35.932 lat (usec): min=40868, max=41944, avg=41068.41, stdev=242.16 00:09:35.932 clat percentiles (usec): 00:09:35.932 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:09:35.932 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:35.932 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:09:35.932 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:09:35.932 | 99.99th=[41681] 00:09:35.932 write: IOPS=497, BW=1990KiB/s (2038kB/s)(2048KiB/1029msec); 0 zone resets 00:09:35.932 slat (nsec): min=9747, max=74334, avg=27409.18, stdev=11153.01 00:09:35.932 clat (usec): min=223, max=1104, avg=450.85, stdev=102.83 00:09:35.932 lat (usec): min=235, max=1138, avg=478.26, stdev=109.07 00:09:35.932 clat percentiles (usec): 00:09:35.932 | 1.00th=[ 265], 5.00th=[ 302], 10.00th=[ 322], 20.00th=[ 363], 00:09:35.932 | 30.00th=[ 396], 40.00th=[ 433], 50.00th=[ 453], 60.00th=[ 474], 00:09:35.932 | 70.00th=[ 490], 80.00th=[ 519], 90.00th=[ 578], 95.00th=[ 611], 00:09:35.932 | 99.00th=[ 742], 99.50th=[ 947], 99.90th=[ 1106], 99.95th=[ 1106], 00:09:35.932 | 99.99th=[ 1106] 00:09:35.932 bw ( KiB/s): min= 4096, max= 4096, per=41.16%, avg=4096.00, stdev= 0.00, samples=1 00:09:35.932 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:35.932 lat (usec) : 250=0.56%, 500=70.43%, 
750=24.48%, 1000=0.56% 00:09:35.932 lat (msec) : 2=0.38%, 50=3.58% 00:09:35.932 cpu : usr=0.88%, sys=1.17%, ctx=532, majf=0, minf=1 00:09:35.932 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:35.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:35.932 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:35.932 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:35.932 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:35.932 job1: (groupid=0, jobs=1): err= 0: pid=3249339: Mon Oct 14 14:23:16 2024 00:09:35.932 read: IOPS=653, BW=2613KiB/s (2676kB/s)(2616KiB/1001msec) 00:09:35.932 slat (nsec): min=7210, max=61984, avg=24464.36, stdev=7124.18 00:09:35.932 clat (usec): min=404, max=954, avg=762.01, stdev=60.65 00:09:35.932 lat (usec): min=430, max=980, avg=786.47, stdev=62.66 00:09:35.932 clat percentiles (usec): 00:09:35.932 | 1.00th=[ 586], 5.00th=[ 644], 10.00th=[ 676], 20.00th=[ 717], 00:09:35.932 | 30.00th=[ 750], 40.00th=[ 766], 50.00th=[ 775], 60.00th=[ 783], 00:09:35.932 | 70.00th=[ 791], 80.00th=[ 807], 90.00th=[ 824], 95.00th=[ 832], 00:09:35.932 | 99.00th=[ 889], 99.50th=[ 922], 99.90th=[ 955], 99.95th=[ 955], 00:09:35.932 | 99.99th=[ 955] 00:09:35.932 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:09:35.932 slat (nsec): min=9852, max=69479, avg=29748.91, stdev=9797.40 00:09:35.932 clat (usec): min=172, max=789, avg=432.47, stdev=82.44 00:09:35.932 lat (usec): min=185, max=823, avg=462.22, stdev=86.84 00:09:35.932 clat percentiles (usec): 00:09:35.932 | 1.00th=[ 253], 5.00th=[ 289], 10.00th=[ 318], 20.00th=[ 355], 00:09:35.932 | 30.00th=[ 383], 40.00th=[ 429], 50.00th=[ 449], 60.00th=[ 465], 00:09:35.932 | 70.00th=[ 478], 80.00th=[ 490], 90.00th=[ 519], 95.00th=[ 553], 00:09:35.932 | 99.00th=[ 652], 99.50th=[ 668], 99.90th=[ 766], 99.95th=[ 791], 00:09:35.932 | 99.99th=[ 791] 00:09:35.932 bw ( KiB/s): min= 
4096, max= 4096, per=41.16%, avg=4096.00, stdev= 0.00, samples=1 00:09:35.932 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:35.932 lat (usec) : 250=0.48%, 500=51.85%, 750=20.80%, 1000=26.88% 00:09:35.932 cpu : usr=2.50%, sys=4.70%, ctx=1679, majf=0, minf=1 00:09:35.932 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:35.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:35.932 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:35.932 issued rwts: total=654,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:35.932 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:35.932 job2: (groupid=0, jobs=1): err= 0: pid=3249341: Mon Oct 14 14:23:16 2024 00:09:35.932 read: IOPS=33, BW=133KiB/s (136kB/s)(136KiB/1024msec) 00:09:35.932 slat (nsec): min=24957, max=26432, avg=25768.85, stdev=279.53 00:09:35.932 clat (usec): min=680, max=42025, avg=20183.53, stdev=20659.59 00:09:35.932 lat (usec): min=706, max=42051, avg=20209.30, stdev=20659.55 00:09:35.932 clat percentiles (usec): 00:09:35.932 | 1.00th=[ 685], 5.00th=[ 881], 10.00th=[ 914], 20.00th=[ 988], 00:09:35.932 | 30.00th=[ 1045], 40.00th=[ 1074], 50.00th=[ 1090], 60.00th=[41157], 00:09:35.932 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:09:35.932 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:35.932 | 99.99th=[42206] 00:09:35.932 write: IOPS=500, BW=2000KiB/s (2048kB/s)(2048KiB/1024msec); 0 zone resets 00:09:35.932 slat (nsec): min=9490, max=63590, avg=30454.43, stdev=7179.99 00:09:35.932 clat (usec): min=176, max=922, avg=619.13, stdev=126.46 00:09:35.932 lat (usec): min=200, max=954, avg=649.58, stdev=128.60 00:09:35.932 clat percentiles (usec): 00:09:35.932 | 1.00th=[ 297], 5.00th=[ 396], 10.00th=[ 453], 20.00th=[ 515], 00:09:35.932 | 30.00th=[ 570], 40.00th=[ 603], 50.00th=[ 627], 60.00th=[ 668], 00:09:35.932 | 70.00th=[ 693], 80.00th=[ 717], 
90.00th=[ 766], 95.00th=[ 799], 00:09:35.932 | 99.00th=[ 873], 99.50th=[ 898], 99.90th=[ 922], 99.95th=[ 922], 00:09:35.932 | 99.99th=[ 922] 00:09:35.932 bw ( KiB/s): min= 4096, max= 4096, per=41.16%, avg=4096.00, stdev= 0.00, samples=1 00:09:35.932 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:35.932 lat (usec) : 250=0.37%, 500=15.20%, 750=67.22%, 1000=12.45% 00:09:35.932 lat (msec) : 2=1.83%, 50=2.93% 00:09:35.932 cpu : usr=0.88%, sys=1.47%, ctx=546, majf=0, minf=2 00:09:35.932 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:35.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:35.932 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:35.932 issued rwts: total=34,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:35.932 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:35.932 job3: (groupid=0, jobs=1): err= 0: pid=3249342: Mon Oct 14 14:23:16 2024 00:09:35.932 read: IOPS=35, BW=141KiB/s (144kB/s)(144KiB/1023msec) 00:09:35.932 slat (nsec): min=26103, max=39362, avg=27776.69, stdev=2136.30 00:09:35.932 clat (usec): min=744, max=42076, avg=19128.50, stdev=20555.27 00:09:35.932 lat (usec): min=771, max=42103, avg=19156.28, stdev=20554.71 00:09:35.932 clat percentiles (usec): 00:09:35.932 | 1.00th=[ 742], 5.00th=[ 906], 10.00th=[ 963], 20.00th=[ 996], 00:09:35.932 | 30.00th=[ 1004], 40.00th=[ 1037], 50.00th=[ 1074], 60.00th=[41157], 00:09:35.932 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:09:35.932 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:35.932 | 99.99th=[42206] 00:09:35.932 write: IOPS=500, BW=2002KiB/s (2050kB/s)(2048KiB/1023msec); 0 zone resets 00:09:35.932 slat (nsec): min=9976, max=71540, avg=31151.44, stdev=9825.10 00:09:35.932 clat (usec): min=194, max=938, avg=611.53, stdev=124.03 00:09:35.932 lat (usec): min=204, max=972, avg=642.68, stdev=127.94 00:09:35.932 clat 
percentiles (usec): 00:09:35.932 | 1.00th=[ 306], 5.00th=[ 375], 10.00th=[ 461], 20.00th=[ 510], 00:09:35.932 | 30.00th=[ 553], 40.00th=[ 594], 50.00th=[ 619], 60.00th=[ 660], 00:09:35.932 | 70.00th=[ 685], 80.00th=[ 717], 90.00th=[ 750], 95.00th=[ 791], 00:09:35.932 | 99.00th=[ 873], 99.50th=[ 906], 99.90th=[ 938], 99.95th=[ 938], 00:09:35.932 | 99.99th=[ 938] 00:09:35.932 bw ( KiB/s): min= 4096, max= 4096, per=41.16%, avg=4096.00, stdev= 0.00, samples=1 00:09:35.932 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:35.932 lat (usec) : 250=0.36%, 500=15.69%, 750=68.43%, 1000=10.58% 00:09:35.932 lat (msec) : 2=2.01%, 50=2.92% 00:09:35.932 cpu : usr=0.78%, sys=1.57%, ctx=549, majf=0, minf=1 00:09:35.932 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:35.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:35.932 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:35.932 issued rwts: total=36,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:35.932 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:35.932 00:09:35.932 Run status group 0 (all jobs): 00:09:35.933 READ: bw=2888KiB/s (2958kB/s), 73.9KiB/s-2613KiB/s (75.6kB/s-2676kB/s), io=2972KiB (3043kB), run=1001-1029msec 00:09:35.933 WRITE: bw=9951KiB/s (10.2MB/s), 1990KiB/s-4092KiB/s (2038kB/s-4190kB/s), io=10.0MiB (10.5MB), run=1001-1029msec 00:09:35.933 00:09:35.933 Disk stats (read/write): 00:09:35.933 nvme0n1: ios=39/512, merge=0/0, ticks=1539/225, in_queue=1764, util=96.59% 00:09:35.933 nvme0n2: ios=552/908, merge=0/0, ticks=1001/383, in_queue=1384, util=97.45% 00:09:35.933 nvme0n3: ios=58/512, merge=0/0, ticks=820/307, in_queue=1127, util=94.81% 00:09:35.933 nvme0n4: ios=81/512, merge=0/0, ticks=805/302, in_queue=1107, util=96.79% 00:09:35.933 14:23:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf 
-i 4096 -d 128 -t write -r 1 -v 00:09:35.933 [global] 00:09:35.933 thread=1 00:09:35.933 invalidate=1 00:09:35.933 rw=write 00:09:35.933 time_based=1 00:09:35.933 runtime=1 00:09:35.933 ioengine=libaio 00:09:35.933 direct=1 00:09:35.933 bs=4096 00:09:35.933 iodepth=128 00:09:35.933 norandommap=0 00:09:35.933 numjobs=1 00:09:35.933 00:09:35.933 verify_dump=1 00:09:35.933 verify_backlog=512 00:09:35.933 verify_state_save=0 00:09:35.933 do_verify=1 00:09:35.933 verify=crc32c-intel 00:09:35.933 [job0] 00:09:35.933 filename=/dev/nvme0n1 00:09:35.933 [job1] 00:09:35.933 filename=/dev/nvme0n2 00:09:35.933 [job2] 00:09:35.933 filename=/dev/nvme0n3 00:09:35.933 [job3] 00:09:35.933 filename=/dev/nvme0n4 00:09:35.933 Could not set queue depth (nvme0n1) 00:09:35.933 Could not set queue depth (nvme0n2) 00:09:35.933 Could not set queue depth (nvme0n3) 00:09:35.933 Could not set queue depth (nvme0n4) 00:09:36.193 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:36.193 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:36.193 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:36.193 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:36.193 fio-3.35 00:09:36.193 Starting 4 threads 00:09:37.605 00:09:37.605 job0: (groupid=0, jobs=1): err= 0: pid=3249860: Mon Oct 14 14:23:17 2024 00:09:37.605 read: IOPS=6077, BW=23.7MiB/s (24.9MB/s)(24.0MiB/1011msec) 00:09:37.605 slat (nsec): min=927, max=16650k, avg=63699.22, stdev=452149.80 00:09:37.605 clat (usec): min=1742, max=32353, avg=8229.57, stdev=2806.17 00:09:37.605 lat (usec): min=1751, max=40040, avg=8293.27, stdev=2846.49 00:09:37.605 clat percentiles (usec): 00:09:37.605 | 1.00th=[ 2999], 5.00th=[ 5866], 10.00th=[ 6456], 20.00th=[ 7242], 00:09:37.605 | 30.00th=[ 7439], 40.00th=[ 7635], 
50.00th=[ 7963], 60.00th=[ 8160], 00:09:37.605 | 70.00th=[ 8356], 80.00th=[ 8586], 90.00th=[ 9503], 95.00th=[11076], 00:09:37.605 | 99.00th=[23987], 99.50th=[23987], 99.90th=[31065], 99.95th=[31065], 00:09:37.605 | 99.99th=[32375] 00:09:37.605 write: IOPS=6549, BW=25.6MiB/s (26.8MB/s)(25.9MiB/1011msec); 0 zone resets 00:09:37.605 slat (nsec): min=1604, max=10961k, avg=86521.03, stdev=464465.28 00:09:37.605 clat (usec): min=1177, max=53777, avg=11618.86, stdev=9527.53 00:09:37.605 lat (usec): min=1185, max=53786, avg=11705.38, stdev=9589.57 00:09:37.605 clat percentiles (usec): 00:09:37.605 | 1.00th=[ 4113], 5.00th=[ 5669], 10.00th=[ 6718], 20.00th=[ 7046], 00:09:37.605 | 30.00th=[ 7439], 40.00th=[ 7832], 50.00th=[ 7963], 60.00th=[ 8094], 00:09:37.605 | 70.00th=[ 8356], 80.00th=[11469], 90.00th=[27132], 95.00th=[35390], 00:09:37.605 | 99.00th=[46924], 99.50th=[50594], 99.90th=[53740], 99.95th=[53740], 00:09:37.605 | 99.99th=[53740] 00:09:37.605 bw ( KiB/s): min=23824, max=28136, per=25.40%, avg=25980.00, stdev=3049.04, samples=2 00:09:37.605 iops : min= 5956, max= 7034, avg=6495.00, stdev=762.26, samples=2 00:09:37.605 lat (msec) : 2=0.21%, 4=1.03%, 10=83.11%, 20=7.49%, 50=7.83% 00:09:37.605 lat (msec) : 100=0.34% 00:09:37.605 cpu : usr=4.16%, sys=5.05%, ctx=817, majf=0, minf=2 00:09:37.605 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:09:37.605 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:37.605 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:37.605 issued rwts: total=6144,6622,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:37.605 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:37.605 job1: (groupid=0, jobs=1): err= 0: pid=3249861: Mon Oct 14 14:23:17 2024 00:09:37.605 read: IOPS=6119, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1004msec) 00:09:37.605 slat (nsec): min=892, max=13113k, avg=83807.34, stdev=657585.04 00:09:37.605 clat (usec): min=1833, max=52459, 
avg=11182.33, stdev=5759.36 00:09:37.605 lat (usec): min=1839, max=52485, avg=11266.14, stdev=5822.47 00:09:37.605 clat percentiles (usec): 00:09:37.605 | 1.00th=[ 4621], 5.00th=[ 5997], 10.00th=[ 7308], 20.00th=[ 8586], 00:09:37.605 | 30.00th=[ 9110], 40.00th=[ 9372], 50.00th=[10028], 60.00th=[10683], 00:09:37.605 | 70.00th=[10814], 80.00th=[11469], 90.00th=[14746], 95.00th=[21103], 00:09:37.605 | 99.00th=[37487], 99.50th=[42730], 99.90th=[48497], 99.95th=[48497], 00:09:37.605 | 99.99th=[52691] 00:09:37.605 write: IOPS=6325, BW=24.7MiB/s (25.9MB/s)(24.8MiB/1004msec); 0 zone resets 00:09:37.605 slat (nsec): min=1556, max=12863k, avg=60974.79, stdev=479272.57 00:09:37.605 clat (usec): min=1054, max=39411, avg=9213.97, stdev=4206.58 00:09:37.605 lat (usec): min=1064, max=39434, avg=9274.94, stdev=4241.66 00:09:37.605 clat percentiles (usec): 00:09:37.605 | 1.00th=[ 2474], 5.00th=[ 4047], 10.00th=[ 4752], 20.00th=[ 6652], 00:09:37.605 | 30.00th=[ 7635], 40.00th=[ 8356], 50.00th=[ 8848], 60.00th=[ 9503], 00:09:37.605 | 70.00th=[10159], 80.00th=[10814], 90.00th=[12387], 95.00th=[16319], 00:09:37.605 | 99.00th=[28181], 99.50th=[29230], 99.90th=[29492], 99.95th=[30278], 00:09:37.606 | 99.99th=[39584] 00:09:37.606 bw ( KiB/s): min=24576, max=25376, per=24.42%, avg=24976.00, stdev=565.69, samples=2 00:09:37.606 iops : min= 6144, max= 6344, avg=6244.00, stdev=141.42, samples=2 00:09:37.606 lat (msec) : 2=0.21%, 4=2.30%, 10=56.11%, 20=37.28%, 50=4.09% 00:09:37.606 lat (msec) : 100=0.01% 00:09:37.606 cpu : usr=4.69%, sys=6.58%, ctx=431, majf=0, minf=1 00:09:37.606 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:09:37.606 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:37.606 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:37.606 issued rwts: total=6144,6351,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:37.606 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:37.606 job2: 
(groupid=0, jobs=1): err= 0: pid=3249862: Mon Oct 14 14:23:17 2024 00:09:37.606 read: IOPS=6636, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1003msec) 00:09:37.606 slat (nsec): min=927, max=7439.7k, avg=71789.64, stdev=478937.08 00:09:37.606 clat (usec): min=4390, max=32785, avg=9315.81, stdev=2598.72 00:09:37.606 lat (usec): min=4600, max=32819, avg=9387.60, stdev=2641.67 00:09:37.606 clat percentiles (usec): 00:09:37.606 | 1.00th=[ 5342], 5.00th=[ 6456], 10.00th=[ 6915], 20.00th=[ 7701], 00:09:37.606 | 30.00th=[ 8356], 40.00th=[ 8717], 50.00th=[ 8979], 60.00th=[ 9372], 00:09:37.606 | 70.00th=[ 9503], 80.00th=[ 9765], 90.00th=[11600], 95.00th=[15533], 00:09:37.606 | 99.00th=[19268], 99.50th=[19530], 99.90th=[28967], 99.95th=[28967], 00:09:37.606 | 99.99th=[32900] 00:09:37.606 write: IOPS=7053, BW=27.6MiB/s (28.9MB/s)(27.6MiB/1003msec); 0 zone resets 00:09:37.606 slat (nsec): min=1569, max=14163k, avg=69645.22, stdev=439630.54 00:09:37.606 clat (usec): min=637, max=42241, avg=9188.19, stdev=3570.67 00:09:37.606 lat (usec): min=3321, max=42273, avg=9257.83, stdev=3603.56 00:09:37.606 clat percentiles (usec): 00:09:37.606 | 1.00th=[ 4621], 5.00th=[ 5866], 10.00th=[ 6521], 20.00th=[ 7177], 00:09:37.606 | 30.00th=[ 8029], 40.00th=[ 8455], 50.00th=[ 8586], 60.00th=[ 8848], 00:09:37.606 | 70.00th=[ 9110], 80.00th=[ 9372], 90.00th=[11600], 95.00th=[15139], 00:09:37.606 | 99.00th=[28443], 99.50th=[28705], 99.90th=[28967], 99.95th=[28967], 00:09:37.606 | 99.99th=[42206] 00:09:37.606 bw ( KiB/s): min=27520, max=28056, per=27.17%, avg=27788.00, stdev=379.01, samples=2 00:09:37.606 iops : min= 6880, max= 7014, avg=6947.00, stdev=94.75, samples=2 00:09:37.606 lat (usec) : 750=0.01% 00:09:37.606 lat (msec) : 4=0.04%, 10=84.36%, 20=14.14%, 50=1.45% 00:09:37.606 cpu : usr=4.59%, sys=5.89%, ctx=744, majf=0, minf=1 00:09:37.606 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:09:37.606 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:09:37.606 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:37.606 issued rwts: total=6656,7075,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:37.606 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:37.606 job3: (groupid=0, jobs=1): err= 0: pid=3249863: Mon Oct 14 14:23:17 2024 00:09:37.606 read: IOPS=5570, BW=21.8MiB/s (22.8MB/s)(22.0MiB/1011msec) 00:09:37.606 slat (nsec): min=1019, max=9431.0k, avg=85535.85, stdev=617884.53 00:09:37.606 clat (usec): min=3751, max=29330, avg=10720.74, stdev=3301.58 00:09:37.606 lat (usec): min=3760, max=29333, avg=10806.27, stdev=3344.98 00:09:37.606 clat percentiles (usec): 00:09:37.606 | 1.00th=[ 5735], 5.00th=[ 7242], 10.00th=[ 7832], 20.00th=[ 8455], 00:09:37.606 | 30.00th=[ 8979], 40.00th=[ 9241], 50.00th=[ 9634], 60.00th=[10290], 00:09:37.606 | 70.00th=[11207], 80.00th=[13173], 90.00th=[15401], 95.00th=[16057], 00:09:37.606 | 99.00th=[23462], 99.50th=[26608], 99.90th=[28443], 99.95th=[29230], 00:09:37.606 | 99.99th=[29230] 00:09:37.606 write: IOPS=5742, BW=22.4MiB/s (23.5MB/s)(22.7MiB/1011msec); 0 zone resets 00:09:37.606 slat (nsec): min=1729, max=8932.8k, avg=80144.70, stdev=492741.73 00:09:37.606 clat (usec): min=1301, max=48160, avg=11677.03, stdev=6937.96 00:09:37.606 lat (usec): min=1315, max=48163, avg=11757.17, stdev=6986.89 00:09:37.606 clat percentiles (usec): 00:09:37.606 | 1.00th=[ 3556], 5.00th=[ 5604], 10.00th=[ 6325], 20.00th=[ 7767], 00:09:37.606 | 30.00th=[ 8586], 40.00th=[ 8848], 50.00th=[ 9241], 60.00th=[ 9372], 00:09:37.606 | 70.00th=[11469], 80.00th=[14877], 90.00th=[20055], 95.00th=[27657], 00:09:37.606 | 99.00th=[41681], 99.50th=[46924], 99.90th=[47973], 99.95th=[47973], 00:09:37.606 | 99.99th=[47973] 00:09:37.606 bw ( KiB/s): min=22216, max=23216, per=22.21%, avg=22716.00, stdev=707.11, samples=2 00:09:37.606 iops : min= 5554, max= 5804, avg=5679.00, stdev=176.78, samples=2 00:09:37.606 lat (msec) : 2=0.02%, 4=1.05%, 10=58.81%, 20=34.18%, 50=5.95% 
00:09:37.606 cpu : usr=5.45%, sys=5.45%, ctx=492, majf=0, minf=2 00:09:37.606 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:09:37.606 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:37.606 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:37.606 issued rwts: total=5632,5806,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:37.606 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:37.606 00:09:37.606 Run status group 0 (all jobs): 00:09:37.606 READ: bw=95.0MiB/s (99.6MB/s), 21.8MiB/s-25.9MiB/s (22.8MB/s-27.2MB/s), io=96.0MiB (101MB), run=1003-1011msec 00:09:37.606 WRITE: bw=99.9MiB/s (105MB/s), 22.4MiB/s-27.6MiB/s (23.5MB/s-28.9MB/s), io=101MiB (106MB), run=1003-1011msec 00:09:37.606 00:09:37.606 Disk stats (read/write): 00:09:37.606 nvme0n1: ios=5234/5632, merge=0/0, ticks=22732/37805, in_queue=60537, util=96.59% 00:09:37.606 nvme0n2: ios=5053/5120, merge=0/0, ticks=47049/37914, in_queue=84963, util=96.73% 00:09:37.606 nvme0n3: ios=5616/5632, merge=0/0, ticks=25412/24930, in_queue=50342, util=96.83% 00:09:37.606 nvme0n4: ios=4647/5119, merge=0/0, ticks=44398/49137, in_queue=93535, util=100.00% 00:09:37.606 14:23:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:37.606 [global] 00:09:37.606 thread=1 00:09:37.606 invalidate=1 00:09:37.606 rw=randwrite 00:09:37.606 time_based=1 00:09:37.606 runtime=1 00:09:37.606 ioengine=libaio 00:09:37.606 direct=1 00:09:37.606 bs=4096 00:09:37.606 iodepth=128 00:09:37.606 norandommap=0 00:09:37.606 numjobs=1 00:09:37.606 00:09:37.606 verify_dump=1 00:09:37.606 verify_backlog=512 00:09:37.606 verify_state_save=0 00:09:37.606 do_verify=1 00:09:37.606 verify=crc32c-intel 00:09:37.606 [job0] 00:09:37.606 filename=/dev/nvme0n1 00:09:37.606 [job1] 00:09:37.606 filename=/dev/nvme0n2 00:09:37.606 [job2] 
00:09:37.606 filename=/dev/nvme0n3 00:09:37.606 [job3] 00:09:37.606 filename=/dev/nvme0n4 00:09:37.606 Could not set queue depth (nvme0n1) 00:09:37.606 Could not set queue depth (nvme0n2) 00:09:37.606 Could not set queue depth (nvme0n3) 00:09:37.606 Could not set queue depth (nvme0n4) 00:09:37.871 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:37.871 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:37.871 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:37.871 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:37.871 fio-3.35 00:09:37.871 Starting 4 threads 00:09:39.287 00:09:39.287 job0: (groupid=0, jobs=1): err= 0: pid=3250389: Mon Oct 14 14:23:19 2024 00:09:39.287 read: IOPS=2617, BW=10.2MiB/s (10.7MB/s)(10.4MiB/1014msec) 00:09:39.287 slat (nsec): min=1028, max=24815k, avg=175605.22, stdev=1272238.56 00:09:39.287 clat (msec): min=9, max=100, avg=18.88, stdev=14.06 00:09:39.287 lat (msec): min=9, max=100, avg=19.06, stdev=14.22 00:09:39.287 clat percentiles (msec): 00:09:39.287 | 1.00th=[ 10], 5.00th=[ 10], 10.00th=[ 11], 20.00th=[ 12], 00:09:39.287 | 30.00th=[ 12], 40.00th=[ 13], 50.00th=[ 14], 60.00th=[ 15], 00:09:39.287 | 70.00th=[ 18], 80.00th=[ 25], 90.00th=[ 33], 95.00th=[ 41], 00:09:39.287 | 99.00th=[ 89], 99.50th=[ 90], 99.90th=[ 101], 99.95th=[ 102], 00:09:39.287 | 99.99th=[ 102] 00:09:39.287 write: IOPS=3029, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1014msec); 0 zone resets 00:09:39.287 slat (nsec): min=1603, max=16094k, avg=168755.18, stdev=906491.41 00:09:39.287 clat (usec): min=1213, max=100673, avg=25554.18, stdev=19878.31 00:09:39.287 lat (usec): min=1226, max=100682, avg=25722.94, stdev=19989.65 00:09:39.287 clat percentiles (msec): 00:09:39.287 | 1.00th=[ 6], 5.00th=[ 9], 10.00th=[ 9], 
20.00th=[ 9], 00:09:39.287 | 30.00th=[ 10], 40.00th=[ 13], 50.00th=[ 18], 60.00th=[ 29], 00:09:39.287 | 70.00th=[ 34], 80.00th=[ 40], 90.00th=[ 50], 95.00th=[ 73], 00:09:39.287 | 99.00th=[ 92], 99.50th=[ 96], 99.90th=[ 99], 99.95th=[ 102], 00:09:39.287 | 99.99th=[ 102] 00:09:39.287 bw ( KiB/s): min=12016, max=12288, per=15.72%, avg=12152.00, stdev=192.33, samples=2 00:09:39.287 iops : min= 3004, max= 3072, avg=3038.00, stdev=48.08, samples=2 00:09:39.287 lat (msec) : 2=0.03%, 10=18.98%, 20=42.65%, 50=32.38%, 100=5.83% 00:09:39.287 lat (msec) : 250=0.12% 00:09:39.287 cpu : usr=2.27%, sys=3.16%, ctx=255, majf=0, minf=1 00:09:39.287 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:09:39.287 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.287 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:39.287 issued rwts: total=2654,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:39.287 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:39.287 job1: (groupid=0, jobs=1): err= 0: pid=3250390: Mon Oct 14 14:23:19 2024 00:09:39.287 read: IOPS=11.2k, BW=43.8MiB/s (45.9MB/s)(43.9MiB/1004msec) 00:09:39.287 slat (nsec): min=909, max=5507.5k, avg=46584.97, stdev=329283.10 00:09:39.287 clat (usec): min=1621, max=12125, avg=6123.53, stdev=1485.72 00:09:39.287 lat (usec): min=2225, max=12127, avg=6170.11, stdev=1499.61 00:09:39.287 clat percentiles (usec): 00:09:39.287 | 1.00th=[ 2933], 5.00th=[ 4228], 10.00th=[ 4555], 20.00th=[ 5014], 00:09:39.288 | 30.00th=[ 5276], 40.00th=[ 5538], 50.00th=[ 5800], 60.00th=[ 6194], 00:09:39.288 | 70.00th=[ 6718], 80.00th=[ 7177], 90.00th=[ 8455], 95.00th=[ 8979], 00:09:39.288 | 99.00th=[10028], 99.50th=[10421], 99.90th=[11600], 99.95th=[11600], 00:09:39.288 | 99.99th=[12125] 00:09:39.288 write: IOPS=11.2k, BW=43.8MiB/s (46.0MB/s)(44.0MiB/1004msec); 0 zone resets 00:09:39.288 slat (nsec): min=1548, max=5040.9k, avg=38956.86, stdev=245031.79 00:09:39.288 
clat (usec): min=1097, max=12122, avg=5193.02, stdev=1301.97 00:09:39.288 lat (usec): min=1108, max=12130, avg=5231.97, stdev=1307.70 00:09:39.288 clat percentiles (usec): 00:09:39.288 | 1.00th=[ 1926], 5.00th=[ 2966], 10.00th=[ 3392], 20.00th=[ 3785], 00:09:39.288 | 30.00th=[ 4752], 40.00th=[ 5276], 50.00th=[ 5473], 60.00th=[ 5735], 00:09:39.288 | 70.00th=[ 5800], 80.00th=[ 5997], 90.00th=[ 6456], 95.00th=[ 7373], 00:09:39.288 | 99.00th=[ 7898], 99.50th=[ 8848], 99.90th=[10552], 99.95th=[10945], 00:09:39.288 | 99.99th=[12125] 00:09:39.288 bw ( KiB/s): min=44688, max=45424, per=58.27%, avg=45056.00, stdev=520.43, samples=2 00:09:39.288 iops : min=11172, max=11356, avg=11264.00, stdev=130.11, samples=2 00:09:39.288 lat (msec) : 2=0.59%, 4=12.27%, 10=86.47%, 20=0.67% 00:09:39.288 cpu : usr=4.79%, sys=7.78%, ctx=1021, majf=0, minf=2 00:09:39.288 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:09:39.288 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.288 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:39.288 issued rwts: total=11251,11264,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:39.288 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:39.288 job2: (groupid=0, jobs=1): err= 0: pid=3250392: Mon Oct 14 14:23:19 2024 00:09:39.288 read: IOPS=3023, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1016msec) 00:09:39.288 slat (nsec): min=1002, max=16279k, avg=109700.92, stdev=870375.05 00:09:39.288 clat (usec): min=1550, max=35268, avg=14096.73, stdev=6510.85 00:09:39.288 lat (usec): min=1572, max=35295, avg=14206.43, stdev=6572.06 00:09:39.288 clat percentiles (usec): 00:09:39.288 | 1.00th=[ 1680], 5.00th=[ 2835], 10.00th=[ 5473], 20.00th=[11076], 00:09:39.288 | 30.00th=[11338], 40.00th=[12125], 50.00th=[12518], 60.00th=[14353], 00:09:39.288 | 70.00th=[17171], 80.00th=[18744], 90.00th=[21890], 95.00th=[26870], 00:09:39.288 | 99.00th=[31851], 99.50th=[33162], 99.90th=[34866], 
99.95th=[34866], 00:09:39.288 | 99.99th=[35390] 00:09:39.288 write: IOPS=3363, BW=13.1MiB/s (13.8MB/s)(13.3MiB/1016msec); 0 zone resets 00:09:39.288 slat (nsec): min=1715, max=20495k, avg=179315.03, stdev=1010113.75 00:09:39.288 clat (usec): min=618, max=110206, avg=24909.85, stdev=23477.09 00:09:39.288 lat (usec): min=640, max=110215, avg=25089.17, stdev=23637.24 00:09:39.288 clat percentiles (usec): 00:09:39.288 | 1.00th=[ 1020], 5.00th=[ 1614], 10.00th=[ 6194], 20.00th=[ 9110], 00:09:39.288 | 30.00th=[ 9896], 40.00th=[ 10159], 50.00th=[ 15926], 60.00th=[ 22414], 00:09:39.288 | 70.00th=[ 31851], 80.00th=[ 39584], 90.00th=[ 51643], 95.00th=[ 83362], 00:09:39.288 | 99.00th=[108528], 99.50th=[108528], 99.90th=[110625], 99.95th=[110625], 00:09:39.288 | 99.99th=[110625] 00:09:39.288 bw ( KiB/s): min=12960, max=13360, per=17.02%, avg=13160.00, stdev=282.84, samples=2 00:09:39.288 iops : min= 3240, max= 3340, avg=3290.00, stdev=70.71, samples=2 00:09:39.288 lat (usec) : 750=0.09%, 1000=0.18% 00:09:39.288 lat (msec) : 2=3.67%, 4=5.09%, 10=17.35%, 20=43.37%, 50=24.55% 00:09:39.288 lat (msec) : 100=4.05%, 250=1.65% 00:09:39.288 cpu : usr=2.36%, sys=3.94%, ctx=300, majf=0, minf=2 00:09:39.288 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:09:39.288 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.288 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:39.288 issued rwts: total=3072,3417,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:39.288 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:39.288 job3: (groupid=0, jobs=1): err= 0: pid=3250393: Mon Oct 14 14:23:19 2024 00:09:39.288 read: IOPS=1513, BW=6053KiB/s (6198kB/s)(6144KiB/1015msec) 00:09:39.288 slat (usec): min=2, max=20090, avg=205.71, stdev=1274.79 00:09:39.288 clat (usec): min=11054, max=82028, avg=21395.55, stdev=12205.40 00:09:39.288 lat (usec): min=11059, max=82034, avg=21601.26, stdev=12367.95 00:09:39.288 clat 
percentiles (usec): 00:09:39.288 | 1.00th=[11469], 5.00th=[14615], 10.00th=[15401], 20.00th=[15664], 00:09:39.288 | 30.00th=[15795], 40.00th=[15926], 50.00th=[15926], 60.00th=[16909], 00:09:39.288 | 70.00th=[18482], 80.00th=[21365], 90.00th=[36963], 95.00th=[47449], 00:09:39.288 | 99.00th=[71828], 99.50th=[77071], 99.90th=[82314], 99.95th=[82314], 00:09:39.288 | 99.99th=[82314] 00:09:39.288 write: IOPS=1859, BW=7436KiB/s (7615kB/s)(7548KiB/1015msec); 0 zone resets 00:09:39.288 slat (nsec): min=1559, max=32926k, avg=357891.33, stdev=1764391.27 00:09:39.288 clat (msec): min=13, max=114, avg=50.40, stdev=25.05 00:09:39.288 lat (msec): min=15, max=114, avg=50.76, stdev=25.24 00:09:39.288 clat percentiles (msec): 00:09:39.288 | 1.00th=[ 16], 5.00th=[ 21], 10.00th=[ 23], 20.00th=[ 28], 00:09:39.288 | 30.00th=[ 34], 40.00th=[ 41], 50.00th=[ 43], 60.00th=[ 51], 00:09:39.288 | 70.00th=[ 58], 80.00th=[ 78], 90.00th=[ 91], 95.00th=[ 97], 00:09:39.288 | 99.00th=[ 113], 99.50th=[ 113], 99.90th=[ 114], 99.95th=[ 114], 00:09:39.288 | 99.99th=[ 114] 00:09:39.288 bw ( KiB/s): min= 6592, max= 7480, per=9.10%, avg=7036.00, stdev=627.91, samples=2 00:09:39.288 iops : min= 1648, max= 1870, avg=1759.00, stdev=156.98, samples=2 00:09:39.288 lat (msec) : 20=36.90%, 50=39.79%, 100=21.53%, 250=1.78% 00:09:39.288 cpu : usr=1.28%, sys=1.97%, ctx=222, majf=0, minf=1 00:09:39.288 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:09:39.288 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.288 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:39.288 issued rwts: total=1536,1887,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:39.288 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:39.288 00:09:39.288 Run status group 0 (all jobs): 00:09:39.288 READ: bw=71.2MiB/s (74.6MB/s), 6053KiB/s-43.8MiB/s (6198kB/s-45.9MB/s), io=72.3MiB (75.8MB), run=1004-1016msec 00:09:39.288 WRITE: bw=75.5MiB/s (79.2MB/s), 
7436KiB/s-43.8MiB/s (7615kB/s-46.0MB/s), io=76.7MiB (80.4MB), run=1004-1016msec 00:09:39.288 00:09:39.288 Disk stats (read/write): 00:09:39.288 nvme0n1: ios=2349/2560, merge=0/0, ticks=43616/58526, in_queue=102142, util=87.78% 00:09:39.288 nvme0n2: ios=9254/9560, merge=0/0, ticks=54403/47862, in_queue=102265, util=87.87% 00:09:39.288 nvme0n3: ios=2615/2663, merge=0/0, ticks=36098/62286, in_queue=98384, util=96.83% 00:09:39.288 nvme0n4: ios=1024/1495, merge=0/0, ticks=12820/38440, in_queue=51260, util=89.52% 00:09:39.288 14:23:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:39.288 14:23:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3250725 00:09:39.288 14:23:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:39.288 14:23:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:39.288 [global] 00:09:39.288 thread=1 00:09:39.288 invalidate=1 00:09:39.288 rw=read 00:09:39.288 time_based=1 00:09:39.288 runtime=10 00:09:39.288 ioengine=libaio 00:09:39.288 direct=1 00:09:39.288 bs=4096 00:09:39.288 iodepth=1 00:09:39.288 norandommap=1 00:09:39.288 numjobs=1 00:09:39.288 00:09:39.288 [job0] 00:09:39.288 filename=/dev/nvme0n1 00:09:39.288 [job1] 00:09:39.288 filename=/dev/nvme0n2 00:09:39.288 [job2] 00:09:39.288 filename=/dev/nvme0n3 00:09:39.288 [job3] 00:09:39.288 filename=/dev/nvme0n4 00:09:39.288 Could not set queue depth (nvme0n1) 00:09:39.288 Could not set queue depth (nvme0n2) 00:09:39.288 Could not set queue depth (nvme0n3) 00:09:39.288 Could not set queue depth (nvme0n4) 00:09:39.551 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:39.551 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:39.551 job2: (g=0): rw=read, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:39.551 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:39.551 fio-3.35 00:09:39.551 Starting 4 threads 00:09:42.098 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:42.359 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=9138176, buflen=4096 00:09:42.359 fio: pid=3250918, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:42.359 14:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:42.359 14:23:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:42.359 14:23:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:42.359 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=2060288, buflen=4096 00:09:42.359 fio: pid=3250917, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:42.620 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=6672384, buflen=4096 00:09:42.621 fio: pid=3250915, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:42.621 14:23:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:42.621 14:23:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:42.881 fio: io_u error on file /dev/nvme0n2: Operation not supported: 
read offset=13037568, buflen=4096 00:09:42.881 fio: pid=3250916, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:42.881 14:23:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:42.881 14:23:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:42.881 00:09:42.881 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3250915: Mon Oct 14 14:23:23 2024 00:09:42.881 read: IOPS=548, BW=2193KiB/s (2246kB/s)(6516KiB/2971msec) 00:09:42.881 slat (usec): min=6, max=11039, avg=35.53, stdev=304.78 00:09:42.881 clat (usec): min=569, max=42074, avg=1768.35, stdev=5563.80 00:09:42.881 lat (usec): min=596, max=42099, avg=1803.89, stdev=5570.78 00:09:42.881 clat percentiles (usec): 00:09:42.881 | 1.00th=[ 725], 5.00th=[ 824], 10.00th=[ 865], 20.00th=[ 922], 00:09:42.881 | 30.00th=[ 963], 40.00th=[ 988], 50.00th=[ 1004], 60.00th=[ 1020], 00:09:42.881 | 70.00th=[ 1045], 80.00th=[ 1074], 90.00th=[ 1106], 95.00th=[ 1156], 00:09:42.881 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:09:42.881 | 99.99th=[42206] 00:09:42.881 bw ( KiB/s): min= 656, max= 3912, per=23.03%, avg=2201.60, stdev=1615.78, samples=5 00:09:42.881 iops : min= 164, max= 978, avg=550.40, stdev=403.94, samples=5 00:09:42.881 lat (usec) : 750=1.41%, 1000=46.20% 00:09:42.881 lat (msec) : 2=50.43%, 50=1.90% 00:09:42.881 cpu : usr=0.74%, sys=1.55%, ctx=1632, majf=0, minf=1 00:09:42.881 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:42.881 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.881 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.881 issued rwts: total=1630,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:42.881 
latency : target=0, window=0, percentile=100.00%, depth=1 00:09:42.881 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3250916: Mon Oct 14 14:23:23 2024 00:09:42.881 read: IOPS=1008, BW=4032KiB/s (4128kB/s)(12.4MiB/3158msec) 00:09:42.881 slat (usec): min=6, max=9367, avg=36.03, stdev=271.70 00:09:42.881 clat (usec): min=303, max=2725, avg=942.94, stdev=155.39 00:09:42.881 lat (usec): min=330, max=10404, avg=978.97, stdev=316.58 00:09:42.881 clat percentiles (usec): 00:09:42.881 | 1.00th=[ 570], 5.00th=[ 693], 10.00th=[ 742], 20.00th=[ 807], 00:09:42.881 | 30.00th=[ 857], 40.00th=[ 914], 50.00th=[ 955], 60.00th=[ 996], 00:09:42.881 | 70.00th=[ 1037], 80.00th=[ 1074], 90.00th=[ 1123], 95.00th=[ 1172], 00:09:42.881 | 99.00th=[ 1237], 99.50th=[ 1270], 99.90th=[ 1516], 99.95th=[ 1893], 00:09:42.881 | 99.99th=[ 2737] 00:09:42.881 bw ( KiB/s): min= 3835, max= 4240, per=42.84%, avg=4095.17, stdev=155.88, samples=6 00:09:42.882 iops : min= 958, max= 1060, avg=1023.67, stdev=39.22, samples=6 00:09:42.882 lat (usec) : 500=0.16%, 750=11.18%, 1000=50.06% 00:09:42.882 lat (msec) : 2=38.54%, 4=0.03% 00:09:42.882 cpu : usr=1.58%, sys=4.21%, ctx=3188, majf=0, minf=2 00:09:42.882 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:42.882 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.882 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.882 issued rwts: total=3184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:42.882 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:42.882 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3250917: Mon Oct 14 14:23:23 2024 00:09:42.882 read: IOPS=179, BW=715KiB/s (732kB/s)(2012KiB/2815msec) 00:09:42.882 slat (usec): min=7, max=9526, avg=44.59, stdev=423.28 00:09:42.882 clat (usec): min=586, max=42192, avg=5501.23, stdev=12714.36 
00:09:42.882 lat (usec): min=614, max=42223, avg=5545.86, stdev=12714.82 00:09:42.882 clat percentiles (usec): 00:09:42.882 | 1.00th=[ 660], 5.00th=[ 791], 10.00th=[ 848], 20.00th=[ 906], 00:09:42.882 | 30.00th=[ 955], 40.00th=[ 988], 50.00th=[ 1020], 60.00th=[ 1057], 00:09:42.882 | 70.00th=[ 1123], 80.00th=[ 1188], 90.00th=[41157], 95.00th=[41157], 00:09:42.882 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:42.882 | 99.99th=[42206] 00:09:42.882 bw ( KiB/s): min= 112, max= 1064, per=6.68%, avg=638.40, stdev=341.17, samples=5 00:09:42.882 iops : min= 28, max= 266, avg=159.60, stdev=85.29, samples=5 00:09:42.882 lat (usec) : 750=2.38%, 1000=39.29% 00:09:42.882 lat (msec) : 2=47.02%, 50=11.11% 00:09:42.882 cpu : usr=0.25%, sys=0.64%, ctx=507, majf=0, minf=2 00:09:42.882 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:42.882 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.882 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.882 issued rwts: total=504,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:42.882 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:42.882 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3250918: Mon Oct 14 14:23:23 2024 00:09:42.882 read: IOPS=858, BW=3431KiB/s (3513kB/s)(8924KiB/2601msec) 00:09:42.882 slat (nsec): min=7117, max=64262, avg=26528.91, stdev=3365.44 00:09:42.882 clat (usec): min=559, max=1459, avg=1122.99, stdev=105.37 00:09:42.882 lat (usec): min=586, max=1485, avg=1149.52, stdev=105.36 00:09:42.882 clat percentiles (usec): 00:09:42.882 | 1.00th=[ 816], 5.00th=[ 938], 10.00th=[ 988], 20.00th=[ 1045], 00:09:42.882 | 30.00th=[ 1090], 40.00th=[ 1106], 50.00th=[ 1139], 60.00th=[ 1156], 00:09:42.882 | 70.00th=[ 1188], 80.00th=[ 1205], 90.00th=[ 1237], 95.00th=[ 1270], 00:09:42.882 | 99.00th=[ 1336], 99.50th=[ 1352], 99.90th=[ 1418], 99.95th=[ 1434], 
00:09:42.882 | 99.99th=[ 1467] 00:09:42.882 bw ( KiB/s): min= 3400, max= 3528, per=36.27%, avg=3467.20, stdev=55.02, samples=5 00:09:42.882 iops : min= 850, max= 882, avg=866.80, stdev=13.75, samples=5 00:09:42.882 lat (usec) : 750=0.49%, 1000=10.89% 00:09:42.882 lat (msec) : 2=88.58% 00:09:42.882 cpu : usr=0.96%, sys=2.58%, ctx=2233, majf=0, minf=2 00:09:42.882 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:42.882 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.882 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.882 issued rwts: total=2232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:42.882 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:42.882 00:09:42.882 Run status group 0 (all jobs): 00:09:42.882 READ: bw=9558KiB/s (9787kB/s), 715KiB/s-4032KiB/s (732kB/s-4128kB/s), io=29.5MiB (30.9MB), run=2601-3158msec 00:09:42.882 00:09:42.882 Disk stats (read/write): 00:09:42.882 nvme0n1: ios=1516/0, merge=0/0, ticks=2724/0, in_queue=2724, util=94.26% 00:09:42.882 nvme0n2: ios=3151/0, merge=0/0, ticks=2771/0, in_queue=2771, util=95.20% 00:09:42.882 nvme0n3: ios=468/0, merge=0/0, ticks=2563/0, in_queue=2563, util=96.03% 00:09:42.882 nvme0n4: ios=2231/0, merge=0/0, ticks=2417/0, in_queue=2417, util=96.46% 00:09:42.882 14:23:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:42.882 14:23:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:43.143 14:23:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:43.143 14:23:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_delete Malloc4 00:09:43.404 14:23:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:43.404 14:23:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:43.665 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:43.665 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:43.665 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:43.665 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 3250725 00:09:43.665 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:43.665 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:43.928 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:43.928 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:43.928 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:09:43.928 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:43.928 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:43.928 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:43.928 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 
00:09:43.928 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:09:43.928 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:43.928 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:43.928 nvmf hotplug test: fio failed as expected 00:09:43.928 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:43.928 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:43.928 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:43.928 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:43.928 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:43.928 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:43.928 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:43.928 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:09:43.928 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:43.928 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:09:43.928 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:43.928 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:43.928 rmmod nvme_tcp 00:09:44.189 rmmod nvme_fabrics 00:09:44.189 rmmod nvme_keyring 00:09:44.189 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:44.189 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:09:44.189 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:09:44.189 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@515 -- # '[' -n 3247163 ']' 00:09:44.189 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # killprocess 3247163 00:09:44.189 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 3247163 ']' 00:09:44.189 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 3247163 00:09:44.189 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:09:44.189 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:44.189 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3247163 00:09:44.189 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:44.189 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:44.189 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3247163' 00:09:44.189 killing process with pid 3247163 00:09:44.189 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 3247163 00:09:44.189 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 3247163 00:09:44.189 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:44.189 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:44.189 14:23:24 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:44.189 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:09:44.189 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-save 00:09:44.189 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:44.189 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-restore 00:09:44.189 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:44.189 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:44.189 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:44.189 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:44.189 14:23:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:46.735 14:23:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:46.735 00:09:46.735 real 0m29.062s 00:09:46.735 user 2m34.776s 00:09:46.735 sys 0m9.349s 00:09:46.735 14:23:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:46.735 14:23:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:46.735 ************************************ 00:09:46.735 END TEST nvmf_fio_target 00:09:46.735 ************************************ 00:09:46.735 14:23:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:46.735 14:23:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- 
# '[' 3 -le 1 ']' 00:09:46.735 14:23:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:46.735 14:23:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:46.735 ************************************ 00:09:46.735 START TEST nvmf_bdevio 00:09:46.735 ************************************ 00:09:46.735 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:46.735 * Looking for test storage... 00:09:46.735 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:46.735 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:46.735 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:09:46.735 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:46.735 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:46.735 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:46.735 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:46.735 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:46.735 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:09:46.735 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:09:46.735 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:09:46.735 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:09:46.735 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:09:46.735 14:23:27 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:09:46.735 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:09:46.735 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:46.735 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:09:46.735 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:09:46.735 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:46.735 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:46.735 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:09:46.735 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:09:46.735 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:46.735 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:09:46.735 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:09:46.736 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:09:46.736 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:09:46.736 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:46.736 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:09:46.736 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:09:46.736 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:46.736 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:46.736 14:23:27 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:09:46.736 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:46.736 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:46.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.736 --rc genhtml_branch_coverage=1 00:09:46.736 --rc genhtml_function_coverage=1 00:09:46.736 --rc genhtml_legend=1 00:09:46.736 --rc geninfo_all_blocks=1 00:09:46.736 --rc geninfo_unexecuted_blocks=1 00:09:46.736 00:09:46.736 ' 00:09:46.736 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:46.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.736 --rc genhtml_branch_coverage=1 00:09:46.736 --rc genhtml_function_coverage=1 00:09:46.736 --rc genhtml_legend=1 00:09:46.736 --rc geninfo_all_blocks=1 00:09:46.736 --rc geninfo_unexecuted_blocks=1 00:09:46.736 00:09:46.736 ' 00:09:46.736 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:46.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.736 --rc genhtml_branch_coverage=1 00:09:46.736 --rc genhtml_function_coverage=1 00:09:46.736 --rc genhtml_legend=1 00:09:46.736 --rc geninfo_all_blocks=1 00:09:46.736 --rc geninfo_unexecuted_blocks=1 00:09:46.736 00:09:46.736 ' 00:09:46.736 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:46.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.736 --rc genhtml_branch_coverage=1 00:09:46.736 --rc genhtml_function_coverage=1 00:09:46.736 --rc genhtml_legend=1 00:09:46.736 --rc geninfo_all_blocks=1 00:09:46.736 --rc geninfo_unexecuted_blocks=1 00:09:46.736 00:09:46.736 ' 00:09:46.736 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:46.736 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:46.736 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:46.736 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:46.736 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:46.736 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:46.736 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:46.736 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:46.736 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:46.736 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:46.736 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:46.736 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:46.736 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:46.736 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:46.736 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:46.736 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:46.736 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:46.736 14:23:27 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:46.736 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:46.736 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:09:46.736 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:46.736 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:46.736 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:46.736 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.736 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.736 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.736 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:46.736 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.736 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:09:46.736 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:46.736 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:46.736 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:46.736 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:46.736 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:46.736 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:46.736 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:46.736 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:46.736 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:46.736 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:46.736 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:46.736 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:09:46.736 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:09:46.736 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:46.736 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:46.736 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:46.736 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:46.736 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:46.736 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:46.736 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:46.736 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:46.736 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:46.736 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:46.736 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:09:46.736 14:23:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:54.881 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:54.881 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:09:54.881 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:54.881 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:54.881 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a 
pci_net_devs 00:09:54.881 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:54.881 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:54.881 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:09:54.881 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:54.881 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:09:54.881 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:09:54.881 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:09:54.881 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:09:54.881 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:09:54.881 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:09:54.881 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:54.881 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:54.881 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:54.881 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:54.881 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:54.881 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:54.881 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:54.881 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:54.881 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:54.881 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:54.882 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:54.882 14:23:34 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:54.882 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:54.882 Found net devices under 0000:31:00.0: cvl_0_0 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:54.882 Found net devices under 0000:31:00.1: cvl_0_1 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # is_hw=yes 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- 
# nvmf_tcp_init 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- 
# ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:54.882 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:54.882 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.619 ms 00:09:54.882 00:09:54.882 --- 10.0.0.2 ping statistics --- 00:09:54.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:54.882 rtt min/avg/max/mdev = 0.619/0.619/0.619/0.000 ms 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:54.882 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:54.882 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.257 ms 00:09:54.882 00:09:54.882 --- 10.0.0.1 ping statistics --- 00:09:54.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:54.882 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # return 0 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # nvmfpid=3256281 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # waitforlisten 3256281 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 3256281 ']' 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:54.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:54.882 14:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:54.882 [2024-10-14 14:23:34.840276] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:09:54.882 [2024-10-14 14:23:34.840341] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:54.882 [2024-10-14 14:23:34.933749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:54.882 [2024-10-14 14:23:34.984502] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:54.882 [2024-10-14 14:23:34.984549] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:54.882 [2024-10-14 14:23:34.984558] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:54.883 [2024-10-14 14:23:34.984565] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:54.883 [2024-10-14 14:23:34.984571] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:54.883 [2024-10-14 14:23:34.986623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:54.883 [2024-10-14 14:23:34.986781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:54.883 [2024-10-14 14:23:34.986943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:54.883 [2024-10-14 14:23:34.986944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:55.144 14:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:55.144 14:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:09:55.144 14:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:55.144 14:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:55.144 14:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:55.144 14:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:55.144 14:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:55.144 14:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.144 14:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:55.144 [2024-10-14 14:23:35.721438] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init 
*** 00:09:55.144 14:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.144 14:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:55.144 14:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.144 14:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:55.144 Malloc0 00:09:55.144 14:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.144 14:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:55.144 14:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.144 14:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:55.144 14:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.144 14:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:55.144 14:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.144 14:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:55.144 14:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.144 14:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:55.144 14:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.144 14:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:55.144 [2024-10-14 
14:23:35.805756] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:55.144 14:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.144 14:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:09:55.144 14:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:55.144 14:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # config=() 00:09:55.144 14:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # local subsystem config 00:09:55.144 14:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:09:55.144 14:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:09:55.144 { 00:09:55.144 "params": { 00:09:55.144 "name": "Nvme$subsystem", 00:09:55.144 "trtype": "$TEST_TRANSPORT", 00:09:55.144 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:55.144 "adrfam": "ipv4", 00:09:55.144 "trsvcid": "$NVMF_PORT", 00:09:55.144 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:55.144 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:55.144 "hdgst": ${hdgst:-false}, 00:09:55.144 "ddgst": ${ddgst:-false} 00:09:55.144 }, 00:09:55.144 "method": "bdev_nvme_attach_controller" 00:09:55.144 } 00:09:55.144 EOF 00:09:55.144 )") 00:09:55.144 14:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # cat 00:09:55.144 14:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # jq . 
00:09:55.144 14:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@583 -- # IFS=, 00:09:55.144 14:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:09:55.144 "params": { 00:09:55.144 "name": "Nvme1", 00:09:55.144 "trtype": "tcp", 00:09:55.144 "traddr": "10.0.0.2", 00:09:55.144 "adrfam": "ipv4", 00:09:55.144 "trsvcid": "4420", 00:09:55.144 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:55.144 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:55.144 "hdgst": false, 00:09:55.144 "ddgst": false 00:09:55.144 }, 00:09:55.144 "method": "bdev_nvme_attach_controller" 00:09:55.144 }' 00:09:55.144 [2024-10-14 14:23:35.872722] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:09:55.144 [2024-10-14 14:23:35.872798] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3256376 ] 00:09:55.405 [2024-10-14 14:23:35.942449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:55.405 [2024-10-14 14:23:35.988122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:55.405 [2024-10-14 14:23:35.988401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:55.406 [2024-10-14 14:23:35.988405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.666 I/O targets: 00:09:55.666 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:55.666 00:09:55.666 00:09:55.666 CUnit - A unit testing framework for C - Version 2.1-3 00:09:55.666 http://cunit.sourceforge.net/ 00:09:55.666 00:09:55.666 00:09:55.666 Suite: bdevio tests on: Nvme1n1 00:09:55.666 Test: blockdev write read block ...passed 00:09:55.666 Test: blockdev write zeroes read block ...passed 00:09:55.666 Test: blockdev write zeroes read no split ...passed 00:09:55.928 Test: blockdev write zeroes read split 
...passed 00:09:55.928 Test: blockdev write zeroes read split partial ...passed 00:09:55.928 Test: blockdev reset ...[2024-10-14 14:23:36.456340] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:09:55.928 [2024-10-14 14:23:36.456402] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e78000 (9): Bad file descriptor 00:09:55.928 [2024-10-14 14:23:36.605994] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:09:55.928 passed 00:09:55.928 Test: blockdev write read 8 blocks ...passed 00:09:56.188 Test: blockdev write read size > 128k ...passed 00:09:56.188 Test: blockdev write read invalid size ...passed 00:09:56.188 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:56.188 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:56.188 Test: blockdev write read max offset ...passed 00:09:56.188 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:56.188 Test: blockdev writev readv 8 blocks ...passed 00:09:56.188 Test: blockdev writev readv 30 x 1block ...passed 00:09:56.449 Test: blockdev writev readv block ...passed 00:09:56.449 Test: blockdev writev readv size > 128k ...passed 00:09:56.449 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:56.449 Test: blockdev comparev and writev ...[2024-10-14 14:23:36.988139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:56.449 [2024-10-14 14:23:36.988164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:56.449 [2024-10-14 14:23:36.988175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:56.449 [2024-10-14 14:23:36.988181] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:56.449 [2024-10-14 14:23:36.988404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:56.449 [2024-10-14 14:23:36.988412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:56.449 [2024-10-14 14:23:36.988426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:56.449 [2024-10-14 14:23:36.988432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:56.449 [2024-10-14 14:23:36.988648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:56.449 [2024-10-14 14:23:36.988656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:56.449 [2024-10-14 14:23:36.988666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:56.449 [2024-10-14 14:23:36.988671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:56.449 [2024-10-14 14:23:36.988907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:56.449 [2024-10-14 14:23:36.988914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:56.449 [2024-10-14 14:23:36.988924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 
00:09:56.449 [2024-10-14 14:23:36.988930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:56.449 passed 00:09:56.449 Test: blockdev nvme passthru rw ...passed 00:09:56.449 Test: blockdev nvme passthru vendor specific ...[2024-10-14 14:23:37.073414] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:56.449 [2024-10-14 14:23:37.073452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:56.449 [2024-10-14 14:23:37.073545] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:56.449 [2024-10-14 14:23:37.073552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:56.449 [2024-10-14 14:23:37.073644] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:56.449 [2024-10-14 14:23:37.073651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:56.449 [2024-10-14 14:23:37.073746] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:56.449 [2024-10-14 14:23:37.073753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:56.449 passed 00:09:56.449 Test: blockdev nvme admin passthru ...passed 00:09:56.449 Test: blockdev copy ...passed 00:09:56.449 00:09:56.449 Run Summary: Type Total Ran Passed Failed Inactive 00:09:56.449 suites 1 1 n/a 0 0 00:09:56.449 tests 23 23 23 0 0 00:09:56.449 asserts 152 152 152 0 n/a 00:09:56.449 00:09:56.449 Elapsed time = 1.669 seconds 00:09:56.710 14:23:37 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:56.710 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.710 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:56.710 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.710 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:56.710 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:56.710 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:56.710 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:09:56.710 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:56.710 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:09:56.710 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:56.710 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:56.710 rmmod nvme_tcp 00:09:56.710 rmmod nvme_fabrics 00:09:56.710 rmmod nvme_keyring 00:09:56.710 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:56.710 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:09:56.710 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:09:56.710 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@515 -- # '[' -n 3256281 ']' 00:09:56.710 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # killprocess 3256281 00:09:56.710 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 3256281 ']' 
00:09:56.710 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 3256281 00:09:56.710 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:09:56.710 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:56.710 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3256281 00:09:56.710 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:09:56.710 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:09:56.710 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3256281' 00:09:56.710 killing process with pid 3256281 00:09:56.710 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 3256281 00:09:56.710 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 3256281 00:09:56.972 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:56.972 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:56.972 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:56.972 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:09:56.972 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-save 00:09:56.972 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:56.972 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-restore 00:09:56.972 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:56.972 
14:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:56.972 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:56.972 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:56.972 14:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:58.886 14:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:58.886 00:09:58.886 real 0m12.508s 00:09:58.886 user 0m15.094s 00:09:58.886 sys 0m6.206s 00:09:58.886 14:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:58.886 14:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:58.886 ************************************ 00:09:58.886 END TEST nvmf_bdevio 00:09:58.886 ************************************ 00:09:58.886 14:23:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:58.886 00:09:58.886 real 5m1.193s 00:09:58.886 user 11m39.846s 00:09:58.886 sys 1m47.789s 00:09:58.886 14:23:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:58.886 14:23:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:58.886 ************************************ 00:09:58.886 END TEST nvmf_target_core 00:09:58.886 ************************************ 00:09:59.147 14:23:39 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:59.147 14:23:39 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:59.147 14:23:39 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:59.147 14:23:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:59.147 
************************************ 00:09:59.147 START TEST nvmf_target_extra 00:09:59.147 ************************************ 00:09:59.147 14:23:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:59.147 * Looking for test storage... 00:09:59.147 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:09:59.147 14:23:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:59.147 14:23:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lcov --version 00:09:59.147 14:23:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:59.147 14:23:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:59.147 14:23:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:59.147 14:23:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:59.147 14:23:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:59.147 14:23:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:09:59.147 14:23:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:09:59.147 14:23:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:09:59.147 14:23:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:09:59.147 14:23:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:09:59.147 14:23:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:09:59.147 14:23:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:09:59.147 14:23:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:59.147 14:23:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:09:59.147 
14:23:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:09:59.408 14:23:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:59.408 14:23:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:59.408 14:23:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:09:59.408 14:23:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:09:59.408 14:23:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:59.408 14:23:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:09:59.408 14:23:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:09:59.408 14:23:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:09:59.408 14:23:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:09:59.408 14:23:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:59.408 14:23:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:09:59.408 14:23:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:09:59.408 14:23:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:59.408 14:23:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:59.408 14:23:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:09:59.408 14:23:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:59.408 14:23:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:59.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.408 --rc genhtml_branch_coverage=1 00:09:59.408 --rc genhtml_function_coverage=1 00:09:59.408 --rc genhtml_legend=1 00:09:59.408 --rc geninfo_all_blocks=1 00:09:59.408 
--rc geninfo_unexecuted_blocks=1 00:09:59.408 00:09:59.408 ' 00:09:59.409 14:23:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:59.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.409 --rc genhtml_branch_coverage=1 00:09:59.409 --rc genhtml_function_coverage=1 00:09:59.409 --rc genhtml_legend=1 00:09:59.409 --rc geninfo_all_blocks=1 00:09:59.409 --rc geninfo_unexecuted_blocks=1 00:09:59.409 00:09:59.409 ' 00:09:59.409 14:23:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:59.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.409 --rc genhtml_branch_coverage=1 00:09:59.409 --rc genhtml_function_coverage=1 00:09:59.409 --rc genhtml_legend=1 00:09:59.409 --rc geninfo_all_blocks=1 00:09:59.409 --rc geninfo_unexecuted_blocks=1 00:09:59.409 00:09:59.409 ' 00:09:59.409 14:23:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:59.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.409 --rc genhtml_branch_coverage=1 00:09:59.409 --rc genhtml_function_coverage=1 00:09:59.409 --rc genhtml_legend=1 00:09:59.409 --rc geninfo_all_blocks=1 00:09:59.409 --rc geninfo_unexecuted_blocks=1 00:09:59.409 00:09:59.409 ' 00:09:59.409 14:23:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:59.409 14:23:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:09:59.409 14:23:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:59.409 14:23:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:59.409 14:23:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:59.409 14:23:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:59.409 14:23:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:09:59.409 14:23:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:59.409 14:23:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:59.409 14:23:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:59.409 14:23:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:59.409 14:23:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:59.409 14:23:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:59.409 14:23:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:59.409 14:23:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:59.409 14:23:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:59.409 14:23:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:59.409 14:23:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:59.409 14:23:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:59.409 14:23:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:09:59.409 14:23:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:59.409 14:23:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:59.409 14:23:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:59.409 14:23:39 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.409 14:23:39 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.409 14:23:39 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.409 14:23:39 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:09:59.409 14:23:39 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.409 14:23:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:09:59.409 14:23:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:59.409 14:23:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:59.409 14:23:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:59.409 14:23:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:59.409 14:23:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:59.409 14:23:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:59.409 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:59.409 14:23:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:59.409 14:23:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:59.409 14:23:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:59.409 14:23:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:59.409 14:23:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:09:59.409 14:23:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:09:59.409 14:23:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:59.409 14:23:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:59.409 14:23:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:59.409 14:23:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:59.409 ************************************ 00:09:59.409 START TEST nvmf_example 00:09:59.409 ************************************ 00:09:59.409 14:23:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:59.409 * Looking for test storage... 00:09:59.409 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:59.409 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:59.409 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lcov --version 00:09:59.409 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:59.409 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:59.409 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:59.409 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:59.409 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:59.409 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:09:59.409 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:09:59.409 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:09:59.409 
14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:09:59.409 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:09:59.409 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:09:59.409 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:09:59.409 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:59.409 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:09:59.409 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:09:59.409 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:59.409 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:59.409 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:09:59.409 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:09:59.409 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:59.409 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:09:59.409 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:09:59.409 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:09:59.409 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:09:59.409 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:59.409 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:09:59.409 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:09:59.671 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:59.671 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:59.671 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:09:59.671 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:59.671 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:59.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.671 --rc genhtml_branch_coverage=1 00:09:59.671 --rc genhtml_function_coverage=1 00:09:59.671 --rc genhtml_legend=1 00:09:59.671 --rc geninfo_all_blocks=1 00:09:59.671 --rc geninfo_unexecuted_blocks=1 00:09:59.671 00:09:59.671 ' 00:09:59.671 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:59.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.671 --rc genhtml_branch_coverage=1 00:09:59.671 --rc genhtml_function_coverage=1 00:09:59.671 --rc genhtml_legend=1 00:09:59.671 --rc geninfo_all_blocks=1 00:09:59.671 --rc geninfo_unexecuted_blocks=1 00:09:59.671 00:09:59.671 ' 00:09:59.671 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:59.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.671 --rc genhtml_branch_coverage=1 00:09:59.671 --rc genhtml_function_coverage=1 00:09:59.671 --rc genhtml_legend=1 00:09:59.671 --rc geninfo_all_blocks=1 00:09:59.671 --rc geninfo_unexecuted_blocks=1 00:09:59.671 00:09:59.671 ' 00:09:59.671 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:59.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.671 --rc 
genhtml_branch_coverage=1 00:09:59.671 --rc genhtml_function_coverage=1 00:09:59.671 --rc genhtml_legend=1 00:09:59.671 --rc geninfo_all_blocks=1 00:09:59.671 --rc geninfo_unexecuted_blocks=1 00:09:59.671 00:09:59.671 ' 00:09:59.671 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:59.671 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:09:59.671 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:59.671 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:59.671 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:59.671 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:59.671 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:59.671 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:59.671 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:59.671 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:59.671 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:59.671 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:59.671 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:59.671 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:59.671 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:59.671 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:59.671 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:59.671 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:59.671 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:59.671 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:09:59.671 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:59.671 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:59.671 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:59.671 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.671 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.671 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.671 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:09:59.671 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.671 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:09:59.671 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:59.671 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:59.671 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:59.671 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:59.671 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:59.671 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:59.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:59.671 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:59.671 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:59.671 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:59.671 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:09:59.671 14:23:40 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:09:59.671 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:09:59.671 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:09:59.671 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:09:59.671 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:09:59.671 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:09:59.671 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:09:59.671 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:59.671 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:59.671 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:09:59.671 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:59.671 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:59.671 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:59.672 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:59.672 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:59.672 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:59.672 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:59.672 
14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:59.672 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:59.672 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:59.672 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:09:59.672 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:07.820 14:23:47 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:07.820 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:07.820 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:07.820 Found net devices under 0000:31:00.0: cvl_0_0 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:07.820 14:23:47 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:07.820 Found net devices under 0000:31:00.1: cvl_0_1 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # is_hw=yes 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:07.820 
14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@788 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:07.820 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:07.820 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.611 ms 00:10:07.820 00:10:07.820 --- 10.0.0.2 ping statistics --- 00:10:07.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:07.820 rtt min/avg/max/mdev = 0.611/0.611/0.611/0.000 ms 00:10:07.820 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:07.820 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:07.821 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.343 ms 00:10:07.821 00:10:07.821 --- 10.0.0.1 ping statistics --- 00:10:07.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:07.821 rtt min/avg/max/mdev = 0.343/0.343/0.343/0.000 ms 00:10:07.821 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:07.821 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # return 0 00:10:07.821 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:07.821 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:07.821 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:07.821 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:07.821 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:07.821 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:07.821 14:23:47 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:07.821 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:07.821 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:07.821 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:07.821 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:07.821 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:07.821 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:07.821 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3261159 00:10:07.821 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:07.821 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:07.821 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3261159 00:10:07.821 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 3261159 ']' 00:10:07.821 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:07.821 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:07.821 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:10:07.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:07.821 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:07.821 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:07.821 14:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:07.821 14:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:10:07.821 14:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:07.821 14:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:07.821 14:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:07.821 14:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:07.821 14:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.821 14:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:07.821 14:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.821 14:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:07.821 14:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.821 14:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:07.821 14:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.821 14:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:07.821 
14:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:07.821 14:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.821 14:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:07.821 14:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.821 14:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:07.821 14:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:07.821 14:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.821 14:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:07.821 14:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.821 14:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:07.821 14:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.821 14:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:07.821 14:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.821 14:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:07.821 14:23:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 
4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:20.055 Initializing NVMe Controllers 00:10:20.055 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:20.055 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:20.055 Initialization complete. Launching workers. 00:10:20.055 ======================================================== 00:10:20.055 Latency(us) 00:10:20.055 Device Information : IOPS MiB/s Average min max 00:10:20.055 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18050.02 70.51 3545.17 637.11 16021.45 00:10:20.055 ======================================================== 00:10:20.055 Total : 18050.02 70.51 3545.17 637.11 16021.45 00:10:20.055 00:10:20.055 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:20.055 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:20.055 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:20.055 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:10:20.055 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:20.055 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:10:20.055 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:20.055 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:20.055 rmmod nvme_tcp 00:10:20.055 rmmod nvme_fabrics 00:10:20.055 rmmod nvme_keyring 00:10:20.055 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:20.055 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 
00:10:20.055 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:10:20.055 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@515 -- # '[' -n 3261159 ']' 00:10:20.055 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # killprocess 3261159 00:10:20.055 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 3261159 ']' 00:10:20.055 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 3261159 00:10:20.055 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:10:20.055 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:20.055 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3261159 00:10:20.055 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:10:20.055 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:10:20.055 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3261159' 00:10:20.055 killing process with pid 3261159 00:10:20.055 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 3261159 00:10:20.055 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 3261159 00:10:20.055 nvmf threads initialize successfully 00:10:20.055 bdev subsystem init successfully 00:10:20.055 created a nvmf target service 00:10:20.055 create targets's poll groups done 00:10:20.055 all subsystems of target started 00:10:20.055 nvmf target is running 00:10:20.055 all subsystems of target stopped 00:10:20.055 destroy targets's poll groups done 00:10:20.055 destroyed the nvmf target service 00:10:20.055 bdev subsystem 
finish successfully 00:10:20.055 nvmf threads destroy successfully 00:10:20.055 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:20.055 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:20.055 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:20.055 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:10:20.055 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # iptables-save 00:10:20.055 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:20.055 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # iptables-restore 00:10:20.055 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:20.055 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:20.055 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:20.055 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:20.055 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:20.317 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:20.317 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:20.317 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:20.317 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:20.578 00:10:20.578 real 0m21.099s 00:10:20.578 user 0m46.359s 00:10:20.578 sys 0m6.679s 00:10:20.578 
14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:20.578 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:20.578 ************************************ 00:10:20.578 END TEST nvmf_example 00:10:20.578 ************************************ 00:10:20.578 14:24:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:20.578 14:24:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:20.578 14:24:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:20.579 14:24:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:20.579 ************************************ 00:10:20.579 START TEST nvmf_filesystem 00:10:20.579 ************************************ 00:10:20.579 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:20.579 * Looking for test storage... 
00:10:20.579 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:20.579 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:20.579 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:10:20.579 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:20.842 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:20.842 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:20.842 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:20.842 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:20.842 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:20.842 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:20.842 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:20.842 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:20.842 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:20.842 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:20.842 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:20.842 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:20.842 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:20.842 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:20.842 
14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:20.842 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:20.842 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:20.842 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:20.843 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:10:20.843 --rc genhtml_branch_coverage=1 00:10:20.843 --rc genhtml_function_coverage=1 00:10:20.843 --rc genhtml_legend=1 00:10:20.843 --rc geninfo_all_blocks=1 00:10:20.843 --rc geninfo_unexecuted_blocks=1 00:10:20.843 00:10:20.843 ' 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:20.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.843 --rc genhtml_branch_coverage=1 00:10:20.843 --rc genhtml_function_coverage=1 00:10:20.843 --rc genhtml_legend=1 00:10:20.843 --rc geninfo_all_blocks=1 00:10:20.843 --rc geninfo_unexecuted_blocks=1 00:10:20.843 00:10:20.843 ' 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:20.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.843 --rc genhtml_branch_coverage=1 00:10:20.843 --rc genhtml_function_coverage=1 00:10:20.843 --rc genhtml_legend=1 00:10:20.843 --rc geninfo_all_blocks=1 00:10:20.843 --rc geninfo_unexecuted_blocks=1 00:10:20.843 00:10:20.843 ' 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:20.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.843 --rc genhtml_branch_coverage=1 00:10:20.843 --rc genhtml_function_coverage=1 00:10:20.843 --rc genhtml_legend=1 00:10:20.843 --rc geninfo_all_blocks=1 00:10:20.843 --rc geninfo_unexecuted_blocks=1 00:10:20.843 00:10:20.843 ' 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:20.843 14:24:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:20.843 14:24:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:10:20.843 14:24:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB= 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER=n 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_PGO_USE=n 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_DAOS=n 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR= 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_RDMA=y 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=y 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_HAVE_EVP_MAC=y 00:10:20.843 14:24:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=n 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR= 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:10:20.843 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:10:20.844 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_SHARED=y 00:10:20.844 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:10:20.844 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:10:20.844 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:20.844 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_FC=n 
00:10:20.844 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:10:20.844 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:10:20.844 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:10:20.844 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:10:20.844 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_TESTS=y 00:10:20.844 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:10:20.844 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:10:20.844 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:10:20.844 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_PGO_DIR= 00:10:20.844 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:10:20.844 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:20.844 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:10:20.844 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:10:20.844 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_URING=n 00:10:20.844 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:20.844 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:20.844 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:20.844 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:20.844 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:20.844 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:20.844 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:20.844 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:20.844 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:20.844 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:20.844 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:20.844 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:20.844 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:20.844 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:20.844 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # 
[[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:10:20.844 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:20.844 #define SPDK_CONFIG_H 00:10:20.844 #define SPDK_CONFIG_AIO_FSDEV 1 00:10:20.844 #define SPDK_CONFIG_APPS 1 00:10:20.844 #define SPDK_CONFIG_ARCH native 00:10:20.844 #undef SPDK_CONFIG_ASAN 00:10:20.844 #undef SPDK_CONFIG_AVAHI 00:10:20.844 #undef SPDK_CONFIG_CET 00:10:20.844 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:10:20.844 #define SPDK_CONFIG_COVERAGE 1 00:10:20.844 #define SPDK_CONFIG_CROSS_PREFIX 00:10:20.844 #undef SPDK_CONFIG_CRYPTO 00:10:20.844 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:20.844 #undef SPDK_CONFIG_CUSTOMOCF 00:10:20.844 #undef SPDK_CONFIG_DAOS 00:10:20.844 #define SPDK_CONFIG_DAOS_DIR 00:10:20.844 #define SPDK_CONFIG_DEBUG 1 00:10:20.844 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:20.844 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:20.844 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:20.844 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:20.844 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:20.844 #undef SPDK_CONFIG_DPDK_UADK 00:10:20.844 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:20.844 #define SPDK_CONFIG_EXAMPLES 1 00:10:20.844 #undef SPDK_CONFIG_FC 00:10:20.844 #define SPDK_CONFIG_FC_PATH 00:10:20.844 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:20.844 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:20.844 #define SPDK_CONFIG_FSDEV 1 00:10:20.844 #undef SPDK_CONFIG_FUSE 00:10:20.844 #undef SPDK_CONFIG_FUZZER 00:10:20.844 #define SPDK_CONFIG_FUZZER_LIB 00:10:20.844 #undef SPDK_CONFIG_GOLANG 00:10:20.844 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:20.844 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:20.844 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:20.844 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:20.844 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:20.844 #undef 
SPDK_CONFIG_HAVE_LIBBSD 00:10:20.844 #undef SPDK_CONFIG_HAVE_LZ4 00:10:20.844 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:10:20.844 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:10:20.844 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:20.844 #define SPDK_CONFIG_IDXD 1 00:10:20.844 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:20.844 #undef SPDK_CONFIG_IPSEC_MB 00:10:20.844 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:20.844 #define SPDK_CONFIG_ISAL 1 00:10:20.844 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:20.844 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:20.844 #define SPDK_CONFIG_LIBDIR 00:10:20.844 #undef SPDK_CONFIG_LTO 00:10:20.844 #define SPDK_CONFIG_MAX_LCORES 128 00:10:20.844 #define SPDK_CONFIG_NVME_CUSE 1 00:10:20.844 #undef SPDK_CONFIG_OCF 00:10:20.844 #define SPDK_CONFIG_OCF_PATH 00:10:20.844 #define SPDK_CONFIG_OPENSSL_PATH 00:10:20.844 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:20.844 #define SPDK_CONFIG_PGO_DIR 00:10:20.844 #undef SPDK_CONFIG_PGO_USE 00:10:20.844 #define SPDK_CONFIG_PREFIX /usr/local 00:10:20.844 #undef SPDK_CONFIG_RAID5F 00:10:20.844 #undef SPDK_CONFIG_RBD 00:10:20.844 #define SPDK_CONFIG_RDMA 1 00:10:20.844 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:20.844 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:20.844 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:20.844 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:20.844 #define SPDK_CONFIG_SHARED 1 00:10:20.844 #undef SPDK_CONFIG_SMA 00:10:20.844 #define SPDK_CONFIG_TESTS 1 00:10:20.844 #undef SPDK_CONFIG_TSAN 00:10:20.844 #define SPDK_CONFIG_UBLK 1 00:10:20.844 #define SPDK_CONFIG_UBSAN 1 00:10:20.844 #undef SPDK_CONFIG_UNIT_TESTS 00:10:20.844 #undef SPDK_CONFIG_URING 00:10:20.844 #define SPDK_CONFIG_URING_PATH 00:10:20.844 #undef SPDK_CONFIG_URING_ZNS 00:10:20.844 #undef SPDK_CONFIG_USDT 00:10:20.844 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:20.844 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:20.844 #define SPDK_CONFIG_VFIO_USER 1 00:10:20.844 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:20.844 
#define SPDK_CONFIG_VHOST 1 00:10:20.844 #define SPDK_CONFIG_VIRTIO 1 00:10:20.844 #undef SPDK_CONFIG_VTUNE 00:10:20.844 #define SPDK_CONFIG_VTUNE_DIR 00:10:20.844 #define SPDK_CONFIG_WERROR 1 00:10:20.844 #define SPDK_CONFIG_WPDK_DIR 00:10:20.844 #undef SPDK_CONFIG_XNVME 00:10:20.844 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:20.844 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:20.844 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:20.844 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:20.844 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:20.844 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:20.844 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:20.844 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.844 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.844 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.844 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:20.844 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # 
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:20.845 14:24:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:20.845 
14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:20.845 14:24:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:10:20.845 
14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:10:20.845 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:20.846 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:10:20.846 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:20.846 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:10:20.846 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:20.846 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:10:20.846 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:10:20.846 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:10:20.846 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:10:20.846 14:24:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:10:20.846 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:10:20.846 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:10:20.846 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:20.846 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:20.846 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:20.846 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:20.846 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:20.846 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:20.846 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:20.846 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:20.846 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:20.846 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:20.846 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:10:20.846 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:20.846 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:20.846 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:10:20.846 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:20.846 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:20.846 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:20.846 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:20.846 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:20.846 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:20.846 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:20.846 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:20.846 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:20.846 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:20.846 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:10:20.846 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:20.846 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:20.846 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:20.846 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:10:20.846 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:20.846 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:10:20.846 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:10:20.846 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:20.846 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:20.846 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:20.846 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:20.846 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:20.846 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:20.846 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:20.846 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:20.846 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:20.846 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:20.846 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:20.846 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:20.846 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:20.846 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:10:20.846 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:20.846 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:20.846 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:20.846 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:20.846 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:20.846 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:10:20.846 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:10:20.846 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:10:20.846 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:20.846 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:20.846 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:20.846 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:20.846 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:10:20.846 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:10:20.846 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:20.846 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:20.846 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:20.846 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:20.846 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:20.846 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@288 -- # MAKEFLAGS=-j144 00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 3264022 ]] 00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 3264022 00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.l2tO4s 00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.l2tO4s/tests/target /tmp/spdk.l2tO4s 00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # 
sizes["$mount"]=67108864 00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=156295168 00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=5128134656 00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=122585763840 00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=129356537856 00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=6770774016 00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 
00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=64666902528 00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=64678268928 00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=11366400 00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=25847894016 00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=25871310848 00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=23416832 00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=efivarfs 00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=efivarfs 00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=175104 00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=507904 00:10:20.847 14:24:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=328704 00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:20.847 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:20.848 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:20.848 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=64677675008 00:10:20.848 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=64678268928 00:10:20.848 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=593920 00:10:20.848 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:20.848 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:20.848 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:20.848 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=12935639040 00:10:20.848 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=12935651328 00:10:20.848 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:10:20.848 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:20.848 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:10:20.848 * Looking for test storage... 
00:10:20.848 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:10:20.848 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:10:20.848 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:20.848 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:20.848 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:10:20.848 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=122585763840 00:10:20.848 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:10:20.848 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:10:20.848 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:10:20.848 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:10:20.848 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:10:20.848 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=8985366528 00:10:20.848 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:20.848 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:20.848 14:24:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:20.848 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:20.848 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:20.848 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:10:20.848 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:10:20.848 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:10:20.848 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:20.848 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:20.848 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:10:20.848 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:10:20.848 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:20.848 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:20.848 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:20.848 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:20.848 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:20.848 14:24:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:10:20.848 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:20.848 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:20.848 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:20.848 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:10:20.848 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:21.110 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:21.110 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:21.110 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:21.110 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:21.110 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:21.110 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:21.110 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:21.110 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:21.110 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:21.110 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:21.110 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:21.110 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:10:21.110 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:21.110 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:21.110 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:21.110 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:21.110 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:21.110 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:21.110 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:21.110 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:21.110 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:21.110 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:21.110 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:21.110 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:21.110 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:21.110 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:21.110 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:21.110 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:21.110 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:21.110 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:21.110 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:21.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.110 --rc genhtml_branch_coverage=1 00:10:21.110 --rc genhtml_function_coverage=1 00:10:21.110 --rc genhtml_legend=1 00:10:21.110 --rc geninfo_all_blocks=1 00:10:21.110 --rc geninfo_unexecuted_blocks=1 00:10:21.110 00:10:21.110 ' 00:10:21.110 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:21.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.110 --rc genhtml_branch_coverage=1 00:10:21.110 --rc genhtml_function_coverage=1 00:10:21.110 --rc genhtml_legend=1 00:10:21.110 --rc geninfo_all_blocks=1 00:10:21.110 --rc geninfo_unexecuted_blocks=1 00:10:21.110 00:10:21.110 ' 00:10:21.110 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:21.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.111 --rc genhtml_branch_coverage=1 00:10:21.111 --rc genhtml_function_coverage=1 00:10:21.111 --rc genhtml_legend=1 00:10:21.111 --rc geninfo_all_blocks=1 00:10:21.111 --rc geninfo_unexecuted_blocks=1 00:10:21.111 00:10:21.111 ' 00:10:21.111 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:21.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.111 --rc genhtml_branch_coverage=1 00:10:21.111 --rc genhtml_function_coverage=1 00:10:21.111 --rc genhtml_legend=1 00:10:21.111 --rc geninfo_all_blocks=1 00:10:21.111 --rc geninfo_unexecuted_blocks=1 00:10:21.111 00:10:21.111 ' 00:10:21.111 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:21.111 14:24:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:10:21.111 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:21.111 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:21.111 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:21.111 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:21.111 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:21.111 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:21.111 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:21.111 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:21.111 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:21.111 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:21.111 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:21.111 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:21.111 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:21.111 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:21.111 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:21.111 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:21.111 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:21.111 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:21.111 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:21.111 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:21.111 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:21.111 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.111 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.111 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.111 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:21.111 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.111 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:10:21.111 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:21.111 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:21.111 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:21.111 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:21.111 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:21.111 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:21.111 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:21.111 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:21.111 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:21.111 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:21.111 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:10:21.111 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:21.111 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:21.111 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:21.111 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:21.111 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:21.111 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:21.111 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:21.111 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:21.111 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:21.111 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:21.111 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:21.111 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:21.111 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:10:21.111 14:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:29.258 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:29.258 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:10:29.258 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:10:29.258 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:29.258 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:29.258 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:29.258 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:29.258 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:10:29.258 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:29.258 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:10:29.258 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:10:29.258 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:10:29.258 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:10:29.258 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:10:29.258 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:10:29.258 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:29.258 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:29.258 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:29.258 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:29.258 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:29.258 14:24:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:29.258 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:29.258 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:29.258 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:29.258 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:29.258 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:29.258 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:29.258 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:29.258 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:29.258 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:29.258 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:29.258 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:29.258 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:29.258 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:29.258 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:29.258 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:29.258 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:10:29.258 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:29.258 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:29.258 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:29.258 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:29.258 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:29.258 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:29.258 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:29.258 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:29.258 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:29.258 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:29.258 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:29.258 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:29.258 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:29.258 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:29.258 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:29.258 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:29.258 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:29.259 14:24:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:29.259 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:29.259 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:29.259 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:29.259 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:29.259 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:29.259 Found net devices under 0000:31:00.0: cvl_0_0 00:10:29.259 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:29.259 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:29.259 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:29.259 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:29.259 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:29.259 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:29.259 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:29.259 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:29.259 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:29.259 Found net devices under 0000:31:00.1: cvl_0_1 00:10:29.259 14:24:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:29.259 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:29.259 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # is_hw=yes 00:10:29.259 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:29.259 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:29.259 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:29.259 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:29.259 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:29.259 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:29.259 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:29.259 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:29.259 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:29.259 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:29.259 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:29.259 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:29.259 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:29.259 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:10:29.259 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:29.259 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:29.259 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:29.259 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:29.259 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:29.259 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:29.259 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:29.259 14:24:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:29.259 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:29.259 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:29.259 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:29.259 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:29.259 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:29.259 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.550 ms 00:10:29.259 00:10:29.259 --- 10.0.0.2 ping statistics --- 00:10:29.259 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:29.259 rtt min/avg/max/mdev = 0.550/0.550/0.550/0.000 ms 00:10:29.259 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:29.259 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:29.259 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:10:29.259 00:10:29.259 --- 10.0.0.1 ping statistics --- 00:10:29.259 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:29.259 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:10:29.259 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:29.259 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # return 0 00:10:29.259 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:29.259 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:29.259 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:29.259 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:29.259 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:29.259 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:29.259 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:29.259 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:29.259 14:24:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:29.259 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:29.259 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:29.259 ************************************ 00:10:29.259 START TEST nvmf_filesystem_no_in_capsule 00:10:29.259 ************************************ 00:10:29.259 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:10:29.259 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:29.259 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:29.259 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:29.259 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:29.259 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:29.259 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # nvmfpid=3268211 00:10:29.259 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # waitforlisten 3268211 00:10:29.259 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:29.259 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@831 -- # '[' -z 3268211 ']' 00:10:29.259 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:29.259 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:29.259 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:29.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:29.259 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:29.259 14:24:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:29.259 [2024-10-14 14:24:09.235272] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:10:29.259 [2024-10-14 14:24:09.235320] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:29.259 [2024-10-14 14:24:09.303622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:29.259 [2024-10-14 14:24:09.340227] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:29.259 [2024-10-14 14:24:09.340260] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:29.259 [2024-10-14 14:24:09.340268] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:29.259 [2024-10-14 14:24:09.340275] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:29.259 [2024-10-14 14:24:09.340281] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:29.259 [2024-10-14 14:24:09.342036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:29.259 [2024-10-14 14:24:09.342178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:29.259 [2024-10-14 14:24:09.342229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.259 [2024-10-14 14:24:09.342230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:29.520 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:29.520 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:10:29.520 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:29.520 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:29.520 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:29.520 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:29.520 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:29.520 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:29.520 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.521 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:29.521 [2024-10-14 14:24:10.080068] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:29.521 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.521 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:29.521 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.521 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:29.521 Malloc1 00:10:29.521 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.521 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:29.521 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.521 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:29.521 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.521 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:29.521 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.521 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:29.521 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.521 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:29.521 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.521 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:29.521 [2024-10-14 14:24:10.217826] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:29.521 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.521 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:29.521 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:10:29.521 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:10:29.521 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:10:29.521 14:24:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:10:29.521 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:29.521 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.521 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:29.521 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.521 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:10:29.521 { 00:10:29.521 "name": "Malloc1", 00:10:29.521 "aliases": [ 00:10:29.521 "b856bd56-a849-4bf4-84f2-b1eed40d14d0" 00:10:29.521 ], 00:10:29.521 "product_name": "Malloc disk", 00:10:29.521 "block_size": 512, 00:10:29.521 "num_blocks": 1048576, 00:10:29.521 "uuid": "b856bd56-a849-4bf4-84f2-b1eed40d14d0", 00:10:29.521 "assigned_rate_limits": { 00:10:29.521 "rw_ios_per_sec": 0, 00:10:29.521 "rw_mbytes_per_sec": 0, 00:10:29.521 "r_mbytes_per_sec": 0, 00:10:29.521 "w_mbytes_per_sec": 0 00:10:29.521 }, 00:10:29.521 "claimed": true, 00:10:29.521 "claim_type": "exclusive_write", 00:10:29.521 "zoned": false, 00:10:29.521 "supported_io_types": { 00:10:29.521 "read": true, 00:10:29.521 "write": true, 00:10:29.521 "unmap": true, 00:10:29.521 "flush": true, 00:10:29.521 "reset": true, 00:10:29.521 "nvme_admin": false, 00:10:29.521 "nvme_io": false, 00:10:29.521 "nvme_io_md": false, 00:10:29.521 "write_zeroes": true, 00:10:29.521 "zcopy": true, 00:10:29.521 "get_zone_info": false, 00:10:29.521 "zone_management": false, 00:10:29.521 "zone_append": false, 00:10:29.521 "compare": false, 00:10:29.521 "compare_and_write": 
false, 00:10:29.521 "abort": true, 00:10:29.521 "seek_hole": false, 00:10:29.521 "seek_data": false, 00:10:29.521 "copy": true, 00:10:29.521 "nvme_iov_md": false 00:10:29.521 }, 00:10:29.521 "memory_domains": [ 00:10:29.521 { 00:10:29.521 "dma_device_id": "system", 00:10:29.521 "dma_device_type": 1 00:10:29.521 }, 00:10:29.521 { 00:10:29.521 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.521 "dma_device_type": 2 00:10:29.521 } 00:10:29.521 ], 00:10:29.521 "driver_specific": {} 00:10:29.521 } 00:10:29.521 ]' 00:10:29.521 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:10:29.782 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:10:29.782 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:10:29.782 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:10:29.782 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:10:29.782 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:10:29.782 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:29.782 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:31.167 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:10:31.167 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:10:31.167 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:31.167 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:31.167 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:10:33.712 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:33.712 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:33.712 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:33.712 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:33.712 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:33.712 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:10:33.712 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:33.712 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:33.712 14:24:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:33.712 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:33.712 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:33.712 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:33.712 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:33.712 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:33.712 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:33.712 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:33.712 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:33.712 14:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:33.712 14:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:34.654 14:24:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:34.654 14:24:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:34.654 14:24:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:34.654 14:24:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:34.654 14:24:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:34.915 ************************************ 00:10:34.915 START TEST filesystem_ext4 00:10:34.915 ************************************ 00:10:34.915 14:24:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:34.915 14:24:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:34.915 14:24:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:34.915 14:24:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:34.915 14:24:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:10:34.915 14:24:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:34.915 14:24:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:10:34.915 14:24:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:10:34.915 14:24:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:10:34.915 14:24:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:10:34.915 14:24:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:34.915 mke2fs 1.47.0 (5-Feb-2023) 00:10:34.915 Discarding device blocks: 0/522240 done 00:10:34.915 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:34.915 Filesystem UUID: 8dd0e74b-f400-44ed-b884-673de0074c80 00:10:34.915 Superblock backups stored on blocks: 00:10:34.915 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:34.915 00:10:34.915 Allocating group tables: 0/64 done 00:10:34.915 Writing inode tables: 0/64 done 00:10:36.828 Creating journal (8192 blocks): done 00:10:37.400 Writing superblocks and filesystem accounting information: 0/64 done 00:10:37.400 00:10:37.400 14:24:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:10:37.400 14:24:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:43.985 14:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:43.985 14:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:10:43.985 14:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:43.985 14:24:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:10:43.985 14:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:43.985 14:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:43.985 14:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3268211 00:10:43.985 14:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:43.985 14:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:43.985 14:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:43.985 14:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:43.985 00:10:43.985 real 0m8.567s 00:10:43.985 user 0m0.026s 00:10:43.985 sys 0m0.081s 00:10:43.985 14:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:43.985 14:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:43.985 ************************************ 00:10:43.985 END TEST filesystem_ext4 00:10:43.985 ************************************ 00:10:43.985 14:24:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:43.985 
14:24:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:43.985 14:24:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:43.985 14:24:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:43.985 ************************************ 00:10:43.985 START TEST filesystem_btrfs 00:10:43.985 ************************************ 00:10:43.985 14:24:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:43.985 14:24:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:43.985 14:24:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:43.985 14:24:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:43.985 14:24:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:10:43.985 14:24:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:43.985 14:24:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:10:43.985 14:24:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:10:43.985 14:24:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:10:43.986 14:24:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:10:43.986 14:24:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:43.986 btrfs-progs v6.8.1 00:10:43.986 See https://btrfs.readthedocs.io for more information. 00:10:43.986 00:10:43.986 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:43.986 NOTE: several default settings have changed in version 5.15, please make sure 00:10:43.986 this does not affect your deployments: 00:10:43.986 - DUP for metadata (-m dup) 00:10:43.986 - enabled no-holes (-O no-holes) 00:10:43.986 - enabled free-space-tree (-R free-space-tree) 00:10:43.986 00:10:43.986 Label: (null) 00:10:43.986 UUID: 5fd9e3ca-95b4-4837-af07-4df1e2a0e535 00:10:43.986 Node size: 16384 00:10:43.986 Sector size: 4096 (CPU page size: 4096) 00:10:43.986 Filesystem size: 510.00MiB 00:10:43.986 Block group profiles: 00:10:43.986 Data: single 8.00MiB 00:10:43.986 Metadata: DUP 32.00MiB 00:10:43.986 System: DUP 8.00MiB 00:10:43.986 SSD detected: yes 00:10:43.986 Zoned device: no 00:10:43.986 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:43.986 Checksum: crc32c 00:10:43.986 Number of devices: 1 00:10:43.986 Devices: 00:10:43.986 ID SIZE PATH 00:10:43.986 1 510.00MiB /dev/nvme0n1p1 00:10:43.986 00:10:43.986 14:24:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:10:43.986 14:24:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:44.247 14:24:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:44.247 14:24:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:10:44.247 14:24:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:44.247 14:24:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:10:44.247 14:24:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:44.247 14:24:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:44.247 14:24:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3268211 00:10:44.247 14:24:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:44.247 14:24:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:44.247 14:24:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:44.247 14:24:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:44.247 00:10:44.247 real 0m0.890s 00:10:44.247 user 0m0.021s 00:10:44.247 sys 0m0.127s 00:10:44.247 14:24:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:44.247 
14:24:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:44.247 ************************************ 00:10:44.247 END TEST filesystem_btrfs 00:10:44.247 ************************************ 00:10:44.508 14:24:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:44.508 14:24:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:44.508 14:24:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:44.508 14:24:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:44.508 ************************************ 00:10:44.508 START TEST filesystem_xfs 00:10:44.508 ************************************ 00:10:44.508 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:10:44.508 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:44.508 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:44.508 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:44.508 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:10:44.508 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:44.508 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:10:44.508 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:10:44.508 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:10:44.508 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:10:44.508 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:44.508 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:44.508 = sectsz=512 attr=2, projid32bit=1 00:10:44.508 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:44.508 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:44.508 data = bsize=4096 blocks=130560, imaxpct=25 00:10:44.508 = sunit=0 swidth=0 blks 00:10:44.508 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:44.508 log =internal log bsize=4096 blocks=16384, version=2 00:10:44.508 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:44.508 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:45.449 Discarding blocks...Done. 
00:10:45.449 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:10:45.449 14:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:47.992 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:47.992 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:10:47.992 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:47.992 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:10:47.992 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:10:47.992 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:47.992 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3268211 00:10:47.992 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:47.992 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:47.992 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:47.992 14:24:28 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:48.253 00:10:48.253 real 0m3.709s 00:10:48.253 user 0m0.029s 00:10:48.253 sys 0m0.077s 00:10:48.253 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:48.253 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:48.253 ************************************ 00:10:48.253 END TEST filesystem_xfs 00:10:48.253 ************************************ 00:10:48.253 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:48.253 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:48.253 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:48.253 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:48.253 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:48.253 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:10:48.253 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:48.253 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:48.253 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:48.253 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:48.253 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:10:48.253 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:48.253 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.253 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:48.253 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.253 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:48.253 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3268211 00:10:48.253 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 3268211 ']' 00:10:48.253 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 3268211 00:10:48.253 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:10:48.514 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:48.514 14:24:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3268211 00:10:48.514 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:48.514 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:48.514 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3268211' 00:10:48.514 killing process with pid 3268211 00:10:48.514 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 3268211 00:10:48.514 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@974 -- # wait 3268211 00:10:48.775 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:48.775 00:10:48.775 real 0m20.091s 00:10:48.775 user 1m19.471s 00:10:48.775 sys 0m1.474s 00:10:48.775 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:48.775 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:48.775 ************************************ 00:10:48.775 END TEST nvmf_filesystem_no_in_capsule 00:10:48.775 ************************************ 00:10:48.775 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:48.775 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:48.775 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:48.775 14:24:29 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:48.775 ************************************ 00:10:48.775 START TEST nvmf_filesystem_in_capsule 00:10:48.775 ************************************ 00:10:48.775 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:10:48.775 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:48.775 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:48.775 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:48.775 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:48.775 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:48.775 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:48.775 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # nvmfpid=3272481 00:10:48.775 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # waitforlisten 3272481 00:10:48.775 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 3272481 ']' 00:10:48.775 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:48.775 14:24:29 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:48.775 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:48.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:48.775 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:48.775 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:48.775 [2024-10-14 14:24:29.367397] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:10:48.775 [2024-10-14 14:24:29.367430] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:48.775 [2024-10-14 14:24:29.424837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:48.775 [2024-10-14 14:24:29.460851] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:48.775 [2024-10-14 14:24:29.460882] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:48.775 [2024-10-14 14:24:29.460889] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:48.775 [2024-10-14 14:24:29.460896] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:48.775 [2024-10-14 14:24:29.460902] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:48.775 [2024-10-14 14:24:29.462469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:48.775 [2024-10-14 14:24:29.462585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:48.775 [2024-10-14 14:24:29.462740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.775 [2024-10-14 14:24:29.462741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:49.037 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:49.037 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:10:49.037 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:49.037 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:49.037 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:49.037 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:49.037 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:49.037 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:10:49.037 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.037 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:49.037 [2024-10-14 14:24:29.590616] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:49.037 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.037 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:49.037 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.037 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:49.037 Malloc1 00:10:49.037 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.037 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:49.037 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.037 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:49.037 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.037 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:49.037 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.037 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:49.037 14:24:29 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.037 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:49.037 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.037 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:49.037 [2024-10-14 14:24:29.720897] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:49.037 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.037 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:49.037 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:10:49.037 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:10:49.037 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:10:49.038 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:10:49.038 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:49.038 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.038 14:24:29 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:49.038 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.038 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:10:49.038 { 00:10:49.038 "name": "Malloc1", 00:10:49.038 "aliases": [ 00:10:49.038 "69b86581-7b81-4ab2-af29-99187285bbf1" 00:10:49.038 ], 00:10:49.038 "product_name": "Malloc disk", 00:10:49.038 "block_size": 512, 00:10:49.038 "num_blocks": 1048576, 00:10:49.038 "uuid": "69b86581-7b81-4ab2-af29-99187285bbf1", 00:10:49.038 "assigned_rate_limits": { 00:10:49.038 "rw_ios_per_sec": 0, 00:10:49.038 "rw_mbytes_per_sec": 0, 00:10:49.038 "r_mbytes_per_sec": 0, 00:10:49.038 "w_mbytes_per_sec": 0 00:10:49.038 }, 00:10:49.038 "claimed": true, 00:10:49.038 "claim_type": "exclusive_write", 00:10:49.038 "zoned": false, 00:10:49.038 "supported_io_types": { 00:10:49.038 "read": true, 00:10:49.038 "write": true, 00:10:49.038 "unmap": true, 00:10:49.038 "flush": true, 00:10:49.038 "reset": true, 00:10:49.038 "nvme_admin": false, 00:10:49.038 "nvme_io": false, 00:10:49.038 "nvme_io_md": false, 00:10:49.038 "write_zeroes": true, 00:10:49.038 "zcopy": true, 00:10:49.038 "get_zone_info": false, 00:10:49.038 "zone_management": false, 00:10:49.038 "zone_append": false, 00:10:49.038 "compare": false, 00:10:49.038 "compare_and_write": false, 00:10:49.038 "abort": true, 00:10:49.038 "seek_hole": false, 00:10:49.038 "seek_data": false, 00:10:49.038 "copy": true, 00:10:49.038 "nvme_iov_md": false 00:10:49.038 }, 00:10:49.038 "memory_domains": [ 00:10:49.038 { 00:10:49.038 "dma_device_id": "system", 00:10:49.038 "dma_device_type": 1 00:10:49.038 }, 00:10:49.038 { 00:10:49.038 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.038 "dma_device_type": 2 00:10:49.038 } 00:10:49.038 ], 00:10:49.038 
"driver_specific": {} 00:10:49.038 } 00:10:49.038 ]' 00:10:49.038 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:10:49.298 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:10:49.298 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:10:49.298 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:10:49.298 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:10:49.298 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:10:49.298 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:49.298 14:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:50.683 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:50.943 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:10:50.943 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:50.943 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n 
'' ]] 00:10:50.943 14:24:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:10:52.856 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:52.856 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:52.856 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:52.856 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:52.856 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:52.856 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:10:52.856 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:52.856 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:52.856 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:52.856 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:52.856 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:52.856 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:52.856 14:24:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:52.856 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:52.856 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:52.856 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:52.856 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:53.117 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:53.377 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:54.761 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:10:54.761 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:54.761 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:54.761 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:54.761 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:54.761 ************************************ 00:10:54.761 START TEST filesystem_in_capsule_ext4 00:10:54.761 ************************************ 00:10:54.761 14:24:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:54.761 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:54.761 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:54.761 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:54.761 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:10:54.761 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:54.761 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:10:54.761 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:10:54.761 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:10:54.761 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:10:54.761 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:54.761 mke2fs 1.47.0 (5-Feb-2023) 00:10:54.761 Discarding device blocks: 
0/522240 done 00:10:54.761 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:54.761 Filesystem UUID: 1fd6491f-83b7-4fd8-b44a-81b3a64e5345 00:10:54.761 Superblock backups stored on blocks: 00:10:54.761 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:54.761 00:10:54.761 Allocating group tables: 0/64 done 00:10:54.761 Writing inode tables: 0/64 done 00:10:57.304 Creating journal (8192 blocks): done 00:10:57.304 Writing superblocks and filesystem accounting information: 0/64 done 00:10:57.304 00:10:57.563 14:24:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:10:57.564 14:24:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:04.149 14:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:04.149 14:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:04.149 14:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:04.149 14:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:04.149 14:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:04.149 14:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:04.149 14:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 3272481 00:11:04.149 14:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:04.149 14:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:04.149 14:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:04.149 14:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:04.149 00:11:04.149 real 0m8.589s 00:11:04.149 user 0m0.028s 00:11:04.149 sys 0m0.078s 00:11:04.149 14:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:04.149 14:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:04.149 ************************************ 00:11:04.149 END TEST filesystem_in_capsule_ext4 00:11:04.149 ************************************ 00:11:04.149 14:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:04.149 14:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:04.149 14:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:04.149 14:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:04.149 ************************************ 00:11:04.149 START 
TEST filesystem_in_capsule_btrfs 00:11:04.149 ************************************ 00:11:04.149 14:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:04.149 14:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:04.149 14:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:04.149 14:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:04.149 14:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:11:04.149 14:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:04.149 14:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:11:04.149 14:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:11:04.149 14:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:11:04.149 14:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:11:04.149 14:24:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:04.149 btrfs-progs v6.8.1 00:11:04.149 See https://btrfs.readthedocs.io for more information. 00:11:04.149 00:11:04.149 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:04.149 NOTE: several default settings have changed in version 5.15, please make sure 00:11:04.149 this does not affect your deployments: 00:11:04.149 - DUP for metadata (-m dup) 00:11:04.149 - enabled no-holes (-O no-holes) 00:11:04.149 - enabled free-space-tree (-R free-space-tree) 00:11:04.149 00:11:04.149 Label: (null) 00:11:04.149 UUID: e78ee4c6-e6b8-41e6-b7d4-6bf168dcfd09 00:11:04.149 Node size: 16384 00:11:04.149 Sector size: 4096 (CPU page size: 4096) 00:11:04.149 Filesystem size: 510.00MiB 00:11:04.149 Block group profiles: 00:11:04.149 Data: single 8.00MiB 00:11:04.149 Metadata: DUP 32.00MiB 00:11:04.149 System: DUP 8.00MiB 00:11:04.149 SSD detected: yes 00:11:04.149 Zoned device: no 00:11:04.149 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:04.149 Checksum: crc32c 00:11:04.149 Number of devices: 1 00:11:04.149 Devices: 00:11:04.149 ID SIZE PATH 00:11:04.149 1 510.00MiB /dev/nvme0n1p1 00:11:04.149 00:11:04.149 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:11:04.149 14:24:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:04.413 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:04.413 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:04.413 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:04.413 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:04.413 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:04.413 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:04.413 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3272481 00:11:04.413 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:04.413 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:04.413 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:04.413 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:04.413 00:11:04.413 real 0m1.316s 00:11:04.413 user 0m0.026s 00:11:04.413 sys 0m0.122s 00:11:04.413 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:04.413 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:04.413 ************************************ 00:11:04.413 END TEST filesystem_in_capsule_btrfs 00:11:04.413 ************************************ 00:11:04.413 14:24:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:04.413 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:04.413 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:04.413 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:04.674 ************************************ 00:11:04.674 START TEST filesystem_in_capsule_xfs 00:11:04.674 ************************************ 00:11:04.674 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:11:04.674 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:04.674 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:04.674 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:04.674 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:11:04.674 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:04.674 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:11:04.674 
14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:11:04.674 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:11:04.674 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:11:04.674 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:04.674 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:04.674 = sectsz=512 attr=2, projid32bit=1 00:11:04.674 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:04.674 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:04.674 data = bsize=4096 blocks=130560, imaxpct=25 00:11:04.674 = sunit=0 swidth=0 blks 00:11:04.674 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:04.674 log =internal log bsize=4096 blocks=16384, version=2 00:11:04.674 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:04.674 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:05.616 Discarding blocks...Done. 
00:11:05.616 14:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:11:05.616 14:24:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:08.159 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:08.159 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:08.159 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:08.159 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:08.159 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:08.159 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:08.159 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3272481 00:11:08.159 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:08.159 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:08.159 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:11:08.159 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:08.159 00:11:08.159 real 0m3.302s 00:11:08.159 user 0m0.035s 00:11:08.159 sys 0m0.073s 00:11:08.159 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:08.159 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:08.159 ************************************ 00:11:08.159 END TEST filesystem_in_capsule_xfs 00:11:08.159 ************************************ 00:11:08.159 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:08.159 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:08.159 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:08.159 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:08.159 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:08.159 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:11:08.159 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:08.159 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:08.160 14:24:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:08.160 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:08.160 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:11:08.160 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:08.160 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.160 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:08.160 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.160 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:08.160 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3272481 00:11:08.160 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 3272481 ']' 00:11:08.160 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 3272481 00:11:08.160 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:11:08.160 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:08.160 14:24:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3272481 00:11:08.160 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:08.160 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:08.160 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3272481' 00:11:08.160 killing process with pid 3272481 00:11:08.160 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 3272481 00:11:08.160 14:24:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 3272481 00:11:08.421 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:08.421 00:11:08.421 real 0m19.723s 00:11:08.421 user 1m18.039s 00:11:08.421 sys 0m1.378s 00:11:08.421 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:08.421 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:08.421 ************************************ 00:11:08.421 END TEST nvmf_filesystem_in_capsule 00:11:08.421 ************************************ 00:11:08.421 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:08.421 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:08.421 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:08.421 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:08.421 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:08.421 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:08.421 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:08.421 rmmod nvme_tcp 00:11:08.421 rmmod nvme_fabrics 00:11:08.421 rmmod nvme_keyring 00:11:08.421 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:08.681 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:08.681 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:08.682 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:11:08.682 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:08.682 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:08.682 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:08.682 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:11:08.682 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # iptables-save 00:11:08.682 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:08.682 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # iptables-restore 00:11:08.682 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:08.682 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:08.682 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@654 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:11:08.682 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:08.682 14:24:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:10.813 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:10.813 00:11:10.813 real 0m50.095s 00:11:10.813 user 2m39.962s 00:11:10.813 sys 0m8.634s 00:11:10.813 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:10.813 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:10.813 ************************************ 00:11:10.813 END TEST nvmf_filesystem 00:11:10.813 ************************************ 00:11:10.813 14:24:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:10.813 14:24:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:10.813 14:24:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:10.813 14:24:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:10.813 ************************************ 00:11:10.813 START TEST nvmf_target_discovery 00:11:10.813 ************************************ 00:11:10.813 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:10.813 * Looking for test storage... 
00:11:10.813 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:10.813 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:10.813 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:11:10.813 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:10.813 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:10.813 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:10.813 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:10.813 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:10.813 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:10.813 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:10.813 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:10.813 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:10.813 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:10.813 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:10.813 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:10.813 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:10.813 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:10.813 
14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:10.813 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:10.813 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:10.813 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:10.813 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:10.813 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:10.813 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:10.813 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:10.813 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:10.813 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:10.813 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:10.813 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:10.813 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:10.813 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:10.813 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:10.813 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:10.814 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:11:10.814 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:10.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.814 --rc genhtml_branch_coverage=1 00:11:10.814 --rc genhtml_function_coverage=1 00:11:10.814 --rc genhtml_legend=1 00:11:10.814 --rc geninfo_all_blocks=1 00:11:10.814 --rc geninfo_unexecuted_blocks=1 00:11:10.814 00:11:10.814 ' 00:11:10.814 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:10.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.814 --rc genhtml_branch_coverage=1 00:11:10.814 --rc genhtml_function_coverage=1 00:11:10.814 --rc genhtml_legend=1 00:11:10.814 --rc geninfo_all_blocks=1 00:11:10.814 --rc geninfo_unexecuted_blocks=1 00:11:10.814 00:11:10.814 ' 00:11:10.814 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:10.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.814 --rc genhtml_branch_coverage=1 00:11:10.814 --rc genhtml_function_coverage=1 00:11:10.814 --rc genhtml_legend=1 00:11:10.814 --rc geninfo_all_blocks=1 00:11:10.814 --rc geninfo_unexecuted_blocks=1 00:11:10.814 00:11:10.814 ' 00:11:10.814 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:10.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.814 --rc genhtml_branch_coverage=1 00:11:10.814 --rc genhtml_function_coverage=1 00:11:10.814 --rc genhtml_legend=1 00:11:10.814 --rc geninfo_all_blocks=1 00:11:10.814 --rc geninfo_unexecuted_blocks=1 00:11:10.814 00:11:10.814 ' 00:11:10.814 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:10.814 14:24:51 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:10.814 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:10.814 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:10.814 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:10.814 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:10.814 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:10.814 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:10.814 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:10.814 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:10.814 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:10.814 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:10.814 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:10.814 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:10.814 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:10.814 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:10.814 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:11:10.814 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:10.814 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:10.814 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:10.814 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:10.814 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:10.814 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:10.814 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.814 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.814 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.814 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:10.814 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.814 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:11:10.814 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:11.076 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:11.076 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:11.076 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:11.076 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:11.076 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:11.076 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:11.076 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:11.076 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:11.076 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:11.076 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:11:11.076 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:11.076 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:11.076 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:11.076 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:11.076 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:11.076 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:11.076 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:11.076 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:11.076 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:11.076 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:11.076 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:11.076 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:11.076 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:11.076 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:11.076 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:11:11.076 14:24:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.220 14:24:58 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:19.220 14:24:58 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:19.220 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:19.220 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:19.220 14:24:58 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:19.220 Found net devices under 0000:31:00.0: cvl_0_0 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:19.220 14:24:58 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:19.220 Found net devices under 0000:31:00.1: cvl_0_1 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # is_hw=yes 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:19.220 14:24:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:19.220 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:19.220 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.654 ms 00:11:19.220 00:11:19.220 --- 10.0.0.2 ping statistics --- 00:11:19.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:19.220 rtt min/avg/max/mdev = 0.654/0.654/0.654/0.000 ms 00:11:19.220 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:19.220 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:19.220 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.264 ms 00:11:19.220 00:11:19.220 --- 10.0.0.1 ping statistics --- 00:11:19.221 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:19.221 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:11:19.221 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:19.221 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # return 0 00:11:19.221 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:19.221 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:19.221 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:19.221 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:19.221 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:19.221 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:19.221 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:19.221 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:19.221 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:19.221 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:19.221 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.221 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # nvmfpid=3280939 00:11:19.221 14:24:59 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # waitforlisten 3280939 00:11:19.221 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:19.221 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 3280939 ']' 00:11:19.221 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:19.221 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:19.221 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:19.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:19.221 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:19.221 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.221 [2024-10-14 14:24:59.126658] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:11:19.221 [2024-10-14 14:24:59.126730] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:19.221 [2024-10-14 14:24:59.202543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:19.221 [2024-10-14 14:24:59.246051] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:11:19.221 [2024-10-14 14:24:59.246096] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:19.221 [2024-10-14 14:24:59.246104] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:19.221 [2024-10-14 14:24:59.246111] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:19.221 [2024-10-14 14:24:59.246116] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:19.221 [2024-10-14 14:24:59.248041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:19.221 [2024-10-14 14:24:59.248163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:19.221 [2024-10-14 14:24:59.248208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:19.221 [2024-10-14 14:24:59.248208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:19.221 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:19.221 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:11:19.221 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:19.221 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:19.221 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.482 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:19.482 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:19.482 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.482 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.482 [2024-10-14 14:24:59.983585] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:19.482 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.482 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:19.482 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:19.482 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:19.482 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.482 14:24:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.482 Null1 00:11:19.482 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.482 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:19.482 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.482 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.482 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.482 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:19.482 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.482 
14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.482 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.482 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:19.482 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.482 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.482 [2024-10-14 14:25:00.043927] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:19.482 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.482 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:19.482 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:19.482 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.482 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.482 Null2 00:11:19.482 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.482 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:19.482 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.482 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.482 
14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.482 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:19.482 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.482 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.482 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.482 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:19.482 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.482 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.482 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.482 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:19.482 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:19.482 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.482 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.482 Null3 00:11:19.482 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.482 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:11:19.482 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.482 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.482 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.482 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:19.482 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.482 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.482 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.482 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:19.482 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.482 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.482 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.482 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:19.482 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:19.482 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.482 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.482 Null4 00:11:19.482 
14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.482 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:19.482 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.482 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.482 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.482 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:19.482 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.482 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.482 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.482 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:19.482 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.482 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.483 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.483 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:19.483 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.483 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.483 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.483 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:11:19.483 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.483 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.744 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.744 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:11:19.744 00:11:19.744 Discovery Log Number of Records 6, Generation counter 6 00:11:19.744 =====Discovery Log Entry 0====== 00:11:19.744 trtype: tcp 00:11:19.744 adrfam: ipv4 00:11:19.744 subtype: current discovery subsystem 00:11:19.744 treq: not required 00:11:19.744 portid: 0 00:11:19.744 trsvcid: 4420 00:11:19.744 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:19.744 traddr: 10.0.0.2 00:11:19.744 eflags: explicit discovery connections, duplicate discovery information 00:11:19.744 sectype: none 00:11:19.744 =====Discovery Log Entry 1====== 00:11:19.744 trtype: tcp 00:11:19.744 adrfam: ipv4 00:11:19.744 subtype: nvme subsystem 00:11:19.744 treq: not required 00:11:19.744 portid: 0 00:11:19.744 trsvcid: 4420 00:11:19.744 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:19.744 traddr: 10.0.0.2 00:11:19.744 eflags: none 00:11:19.744 sectype: none 00:11:19.744 =====Discovery Log Entry 2====== 00:11:19.744 
trtype: tcp 00:11:19.744 adrfam: ipv4 00:11:19.744 subtype: nvme subsystem 00:11:19.744 treq: not required 00:11:19.744 portid: 0 00:11:19.744 trsvcid: 4420 00:11:19.744 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:19.744 traddr: 10.0.0.2 00:11:19.744 eflags: none 00:11:19.744 sectype: none 00:11:19.744 =====Discovery Log Entry 3====== 00:11:19.744 trtype: tcp 00:11:19.744 adrfam: ipv4 00:11:19.744 subtype: nvme subsystem 00:11:19.744 treq: not required 00:11:19.744 portid: 0 00:11:19.744 trsvcid: 4420 00:11:19.744 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:19.744 traddr: 10.0.0.2 00:11:19.744 eflags: none 00:11:19.744 sectype: none 00:11:19.744 =====Discovery Log Entry 4====== 00:11:19.744 trtype: tcp 00:11:19.744 adrfam: ipv4 00:11:19.744 subtype: nvme subsystem 00:11:19.744 treq: not required 00:11:19.744 portid: 0 00:11:19.744 trsvcid: 4420 00:11:19.744 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:19.744 traddr: 10.0.0.2 00:11:19.744 eflags: none 00:11:19.744 sectype: none 00:11:19.744 =====Discovery Log Entry 5====== 00:11:19.744 trtype: tcp 00:11:19.744 adrfam: ipv4 00:11:19.744 subtype: discovery subsystem referral 00:11:19.744 treq: not required 00:11:19.744 portid: 0 00:11:19.744 trsvcid: 4430 00:11:19.744 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:19.744 traddr: 10.0.0.2 00:11:19.744 eflags: none 00:11:19.744 sectype: none 00:11:19.744 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:19.744 Perform nvmf subsystem discovery via RPC 00:11:19.744 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:19.744 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.744 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:19.744 [ 00:11:19.744 { 00:11:19.744 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:11:19.744 "subtype": "Discovery", 00:11:19.744 "listen_addresses": [ 00:11:19.744 { 00:11:19.744 "trtype": "TCP", 00:11:19.744 "adrfam": "IPv4", 00:11:19.744 "traddr": "10.0.0.2", 00:11:19.744 "trsvcid": "4420" 00:11:19.744 } 00:11:19.744 ], 00:11:19.744 "allow_any_host": true, 00:11:19.744 "hosts": [] 00:11:19.744 }, 00:11:19.744 { 00:11:19.744 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:19.744 "subtype": "NVMe", 00:11:19.744 "listen_addresses": [ 00:11:19.744 { 00:11:19.744 "trtype": "TCP", 00:11:19.744 "adrfam": "IPv4", 00:11:19.744 "traddr": "10.0.0.2", 00:11:19.744 "trsvcid": "4420" 00:11:19.744 } 00:11:19.744 ], 00:11:19.744 "allow_any_host": true, 00:11:19.744 "hosts": [], 00:11:19.744 "serial_number": "SPDK00000000000001", 00:11:19.744 "model_number": "SPDK bdev Controller", 00:11:19.744 "max_namespaces": 32, 00:11:19.744 "min_cntlid": 1, 00:11:19.744 "max_cntlid": 65519, 00:11:19.744 "namespaces": [ 00:11:19.744 { 00:11:19.744 "nsid": 1, 00:11:19.744 "bdev_name": "Null1", 00:11:19.744 "name": "Null1", 00:11:19.744 "nguid": "830E93DA03AF4F0183E24E883FFC38EE", 00:11:19.744 "uuid": "830e93da-03af-4f01-83e2-4e883ffc38ee" 00:11:19.744 } 00:11:19.744 ] 00:11:19.744 }, 00:11:19.744 { 00:11:19.744 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:19.744 "subtype": "NVMe", 00:11:19.744 "listen_addresses": [ 00:11:19.744 { 00:11:19.744 "trtype": "TCP", 00:11:19.744 "adrfam": "IPv4", 00:11:19.744 "traddr": "10.0.0.2", 00:11:19.744 "trsvcid": "4420" 00:11:19.744 } 00:11:19.744 ], 00:11:19.744 "allow_any_host": true, 00:11:19.744 "hosts": [], 00:11:19.744 "serial_number": "SPDK00000000000002", 00:11:19.744 "model_number": "SPDK bdev Controller", 00:11:19.744 "max_namespaces": 32, 00:11:19.744 "min_cntlid": 1, 00:11:19.744 "max_cntlid": 65519, 00:11:19.744 "namespaces": [ 00:11:19.744 { 00:11:19.744 "nsid": 1, 00:11:19.744 "bdev_name": "Null2", 00:11:19.744 "name": "Null2", 00:11:19.745 "nguid": "2AAE915024F0477BBC4AA92C474A8473", 
00:11:19.745 "uuid": "2aae9150-24f0-477b-bc4a-a92c474a8473" 00:11:19.745 } 00:11:19.745 ] 00:11:19.745 }, 00:11:19.745 { 00:11:19.745 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:19.745 "subtype": "NVMe", 00:11:19.745 "listen_addresses": [ 00:11:19.745 { 00:11:19.745 "trtype": "TCP", 00:11:19.745 "adrfam": "IPv4", 00:11:19.745 "traddr": "10.0.0.2", 00:11:19.745 "trsvcid": "4420" 00:11:19.745 } 00:11:19.745 ], 00:11:19.745 "allow_any_host": true, 00:11:19.745 "hosts": [], 00:11:19.745 "serial_number": "SPDK00000000000003", 00:11:19.745 "model_number": "SPDK bdev Controller", 00:11:19.745 "max_namespaces": 32, 00:11:19.745 "min_cntlid": 1, 00:11:19.745 "max_cntlid": 65519, 00:11:19.745 "namespaces": [ 00:11:19.745 { 00:11:19.745 "nsid": 1, 00:11:19.745 "bdev_name": "Null3", 00:11:19.745 "name": "Null3", 00:11:19.745 "nguid": "BCD600FEE9AD4AFDB1F19A6EE0E875E7", 00:11:19.745 "uuid": "bcd600fe-e9ad-4afd-b1f1-9a6ee0e875e7" 00:11:19.745 } 00:11:19.745 ] 00:11:19.745 }, 00:11:19.745 { 00:11:19.745 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:19.745 "subtype": "NVMe", 00:11:19.745 "listen_addresses": [ 00:11:19.745 { 00:11:19.745 "trtype": "TCP", 00:11:19.745 "adrfam": "IPv4", 00:11:19.745 "traddr": "10.0.0.2", 00:11:19.745 "trsvcid": "4420" 00:11:19.745 } 00:11:19.745 ], 00:11:19.745 "allow_any_host": true, 00:11:19.745 "hosts": [], 00:11:19.745 "serial_number": "SPDK00000000000004", 00:11:19.745 "model_number": "SPDK bdev Controller", 00:11:19.745 "max_namespaces": 32, 00:11:19.745 "min_cntlid": 1, 00:11:19.745 "max_cntlid": 65519, 00:11:19.745 "namespaces": [ 00:11:19.745 { 00:11:19.745 "nsid": 1, 00:11:20.006 "bdev_name": "Null4", 00:11:20.006 "name": "Null4", 00:11:20.006 "nguid": "C54B8F84180B40C39D48E64D9909D32F", 00:11:20.006 "uuid": "c54b8f84-180b-40c3-9d48-e64d9909d32f" 00:11:20.006 } 00:11:20.006 ] 00:11:20.006 } 00:11:20.006 ] 00:11:20.006 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.006 
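In the `nvmf_get_subsystems` output above, each namespace reports the same identifier twice: `nguid` as a bare 32-hex-digit string and `uuid` as its dashed 8-4-4-4-12 form. A small sketch checking that correspondence, with the identifier pair copied verbatim from the log:

```python
import uuid

# "nguid" is 32 hex digits; "uuid" is the same 16 bytes in RFC 4122
# dashed form. Python's uuid module can convert one to the other.
def nguid_to_uuid(nguid: str) -> str:
    return str(uuid.UUID(hex=nguid))

# Pair taken from Null1's namespace entry in the rpc output above.
print(nguid_to_uuid("830E93DA03AF4F0183E24E883FFC38EE"))
```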
14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:20.006 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:20.006 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:20.006 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.006 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:20.006 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.006 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:20.006 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.006 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:20.006 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.006 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:20.006 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:20.006 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.006 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:20.006 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.006 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:11:20.006 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.006 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:20.006 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.006 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:20.006 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:20.006 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.006 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:20.006 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.006 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:20.006 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.006 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:20.006 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.006 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:20.006 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:20.006 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.006 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:11:20.006 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.006 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:20.006 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.006 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:20.006 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.006 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:20.006 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.006 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:20.006 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.006 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:20.006 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:20.006 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.006 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:20.006 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.006 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:20.006 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@50 -- # '[' -n '' ']' 00:11:20.006 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:20.006 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:20.006 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:20.006 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:11:20.006 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:20.006 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:11:20.006 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:20.006 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:20.006 rmmod nvme_tcp 00:11:20.006 rmmod nvme_fabrics 00:11:20.006 rmmod nvme_keyring 00:11:20.006 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:20.006 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:11:20.006 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:11:20.006 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@515 -- # '[' -n 3280939 ']' 00:11:20.006 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # killprocess 3280939 00:11:20.006 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 3280939 ']' 00:11:20.006 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 3280939 00:11:20.006 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 
00:11:20.006 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:20.006 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3280939 00:11:20.267 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:20.267 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:20.267 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3280939' 00:11:20.267 killing process with pid 3280939 00:11:20.267 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 3280939 00:11:20.267 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 3280939 00:11:20.267 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:20.268 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:20.268 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:20.268 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:11:20.268 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # iptables-save 00:11:20.268 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:20.268 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # iptables-restore 00:11:20.268 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:20.268 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:11:20.268 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:20.268 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:20.268 14:25:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:22.815 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:22.815 00:11:22.815 real 0m11.651s 00:11:22.815 user 0m8.999s 00:11:22.815 sys 0m6.008s 00:11:22.815 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:22.815 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:22.815 ************************************ 00:11:22.815 END TEST nvmf_target_discovery 00:11:22.815 ************************************ 00:11:22.815 14:25:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:22.815 14:25:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:22.815 14:25:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:22.815 14:25:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:22.815 ************************************ 00:11:22.815 START TEST nvmf_referrals 00:11:22.815 ************************************ 00:11:22.815 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:22.815 * Looking for test storage... 
00:11:22.815 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:22.815 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:22.815 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lcov --version 00:11:22.815 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:22.815 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:22.815 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:22.815 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:22.815 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:22.815 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:11:22.815 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:11:22.815 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:11:22.815 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:11:22.815 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:11:22.815 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:11:22.815 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:11:22.815 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:22.815 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:11:22.815 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:11:22.815 14:25:03 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:22.815 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:22.815 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:11:22.815 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:11:22.815 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:22.815 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:11:22.815 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:11:22.815 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:11:22.815 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:11:22.815 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:22.815 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:11:22.815 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:11:22.815 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:22.815 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:22.815 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:11:22.815 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:22.815 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:22.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.815 
--rc genhtml_branch_coverage=1 00:11:22.815 --rc genhtml_function_coverage=1 00:11:22.815 --rc genhtml_legend=1 00:11:22.815 --rc geninfo_all_blocks=1 00:11:22.815 --rc geninfo_unexecuted_blocks=1 00:11:22.815 00:11:22.815 ' 00:11:22.815 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:22.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.815 --rc genhtml_branch_coverage=1 00:11:22.815 --rc genhtml_function_coverage=1 00:11:22.815 --rc genhtml_legend=1 00:11:22.815 --rc geninfo_all_blocks=1 00:11:22.815 --rc geninfo_unexecuted_blocks=1 00:11:22.815 00:11:22.815 ' 00:11:22.815 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:22.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.815 --rc genhtml_branch_coverage=1 00:11:22.815 --rc genhtml_function_coverage=1 00:11:22.815 --rc genhtml_legend=1 00:11:22.815 --rc geninfo_all_blocks=1 00:11:22.815 --rc geninfo_unexecuted_blocks=1 00:11:22.815 00:11:22.816 ' 00:11:22.816 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:22.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.816 --rc genhtml_branch_coverage=1 00:11:22.816 --rc genhtml_function_coverage=1 00:11:22.816 --rc genhtml_legend=1 00:11:22.816 --rc geninfo_all_blocks=1 00:11:22.816 --rc geninfo_unexecuted_blocks=1 00:11:22.816 00:11:22.816 ' 00:11:22.816 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:22.816 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:11:22.816 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:22.816 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:22.816 
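The `cmp_versions`/`lt 1.15 2` trace in this test's preamble compares the detected lcov version component-wise. A rough Python re-expression of that shell logic (split on `.`, `-`, `:`; compare numeric components left to right, treating missing components as 0) — an interpretation of the traced behavior, not the scripts/common.sh implementation itself:

```python
import re

# Sketch of the cmp_versions "lt" check traced above: version strings are
# split on ".-:" and numeric components compared pairwise; the first
# unequal pair decides, and absent components count as 0.
def lt(ver1: str, ver2: str) -> bool:
    a = [int(x) for x in re.split(r"[.:-]", ver1) if x.isdigit()]
    b = [int(x) for x in re.split(r"[.:-]", ver2) if x.isdigit()]
    for x, y in zip(a + [0] * len(b), b + [0] * len(a)):
        if x != y:
            return x < y
    return False
```

Under this reading, `lt 1.15 2` succeeds (1 < 2 on the first component), which is why the trace proceeds to set the lcov coverage options.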
14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:22.816 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:22.816 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:22.816 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:22.816 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:22.816 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:22.816 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:22.816 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:22.816 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:22.816 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:22.816 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:22.816 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:22.816 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:22.816 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:22.816 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:22.816 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:11:22.816 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:22.816 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:22.816 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:22.816 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.816 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.816 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.816 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:22.816 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.816 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:11:22.816 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:22.816 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:22.816 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:22.816 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:22.816 14:25:03 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:22.816 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:22.816 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:22.816 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:22.816 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:22.816 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:22.816 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:22.816 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:11:22.816 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:22.816 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:22.816 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:22.816 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:22.816 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:22.816 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:22.816 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:22.816 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:22.816 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:22.816 14:25:03 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:22.816 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:22.816 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:22.816 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:22.816 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:22.816 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:22.816 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:11:22.816 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:30.964 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:30.964 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:11:30.964 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:30.964 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:30.964 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:30.964 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:30.964 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:30.964 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:30.965 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:30.965 Found 
0000:31:00.1 (0x8086 - 0x159b) 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:30.965 Found net devices under 0000:31:00.0: cvl_0_0 00:11:30.965 14:25:10 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:30.965 Found net devices under 0000:31:00.1: cvl_0_1 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # is_hw=yes 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:30.965 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:30.965 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.564 ms 00:11:30.965 00:11:30.965 --- 10.0.0.2 ping statistics --- 00:11:30.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:30.965 rtt min/avg/max/mdev = 0.564/0.564/0.564/0.000 ms 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:30.965 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:30.965 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.310 ms 00:11:30.965 00:11:30.965 --- 10.0.0.1 ping statistics --- 00:11:30.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:30.965 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # return 0 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:30.965 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:30.966 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:30.966 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:30.966 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:30.966 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:30.966 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:30.966 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:30.966 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:30.966 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # nvmfpid=3285534 00:11:30.966 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # waitforlisten 3285534 00:11:30.966 
14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:30.966 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 3285534 ']' 00:11:30.966 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:30.966 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:30.966 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:30.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:30.966 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:30.966 14:25:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:30.966 [2024-10-14 14:25:10.872666] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:11:30.966 [2024-10-14 14:25:10.872727] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:30.966 [2024-10-14 14:25:10.945439] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:30.966 [2024-10-14 14:25:10.989631] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:30.966 [2024-10-14 14:25:10.989669] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:30.966 [2024-10-14 14:25:10.989677] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:30.966 [2024-10-14 14:25:10.989688] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:30.966 [2024-10-14 14:25:10.989694] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:30.966 [2024-10-14 14:25:10.991412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:30.966 [2024-10-14 14:25:10.991534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:30.966 [2024-10-14 14:25:10.991693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.966 [2024-10-14 14:25:10.991693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:30.966 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:30.966 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:11:30.966 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:30.966 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:30.966 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:31.227 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:31.227 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:31.227 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.227 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:31.227 [2024-10-14 14:25:11.715657] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:31.227 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.227 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:11:31.227 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.227 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:31.227 [2024-10-14 14:25:11.731836] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:31.227 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.227 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:31.227 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.227 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:31.227 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.227 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:31.227 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.227 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:31.227 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.227 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:31.227 14:25:11 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.227 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:31.227 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.227 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:31.227 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:31.227 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.227 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:31.227 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.227 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:31.227 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:31.227 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:31.227 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:31.227 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:31.227 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.227 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:31.227 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:31.227 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.227 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:31.227 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:31.227 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:31.227 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:31.227 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:31.227 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:31.227 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:31.227 14:25:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:31.488 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:31.488 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:31.488 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:31.488 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.488 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:31.488 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.488 14:25:12 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:31.488 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.488 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:31.488 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.488 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:31.488 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.488 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:31.488 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.488 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:31.488 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:31.488 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.488 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:31.488 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.488 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:31.488 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:31.488 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:31.488 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:11:31.488 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:31.488 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:31.489 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:31.750 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:31.750 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:31.750 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:31.750 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.750 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:31.750 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.750 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:31.750 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.750 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:31.750 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.750 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:31.750 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:31.750 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:31.750 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:31.750 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.750 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:31.750 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:31.750 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.750 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:31.750 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:31.750 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:31.750 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:31.750 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:31.750 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:31.750 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:31.750 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:32.010 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:32.010 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:32.010 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:32.010 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:32.010 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:32.010 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:32.010 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:32.271 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:32.271 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:32.271 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:32.271 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:32.271 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:32.271 14:25:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:11:32.531 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:32.531 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:32.531 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.531 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:32.532 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.532 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:32.532 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:32.532 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:32.532 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:32.532 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.532 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:32.532 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:32.532 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.532 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:32.532 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:32.532 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:32.532 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:32.532 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:32.532 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:32.532 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:32.532 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:32.532 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:32.532 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:32.532 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:32.532 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:32.532 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:32.532 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:32.532 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:32.792 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:32.792 14:25:13 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:32.792 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:32.792 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:32.792 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:32.792 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:33.054 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:33.054 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:33.054 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.054 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:33.055 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.055 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:33.055 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:33.055 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.055 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:11:33.055 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.055 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:33.055 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:33.055 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:33.055 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:33.055 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:33.055 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:33.055 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:33.316 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:33.316 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:33.316 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:33.316 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:33.316 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:33.316 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:11:33.316 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:33.316 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:11:33.316 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:33.316 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:33.316 rmmod nvme_tcp 00:11:33.316 rmmod nvme_fabrics 00:11:33.316 rmmod nvme_keyring 00:11:33.316 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:33.316 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:11:33.316 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:11:33.316 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@515 -- # '[' -n 3285534 ']' 00:11:33.316 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # killprocess 3285534 00:11:33.316 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 3285534 ']' 00:11:33.316 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 3285534 00:11:33.316 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:11:33.316 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:33.316 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3285534 00:11:33.316 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:33.316 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:33.316 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3285534' 00:11:33.316 killing process with pid 3285534 00:11:33.316 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@969 -- # kill 3285534 00:11:33.316 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 3285534 00:11:33.577 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:33.577 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:33.577 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:33.577 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:11:33.577 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # iptables-save 00:11:33.577 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # iptables-restore 00:11:33.577 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:33.577 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:33.577 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:33.578 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:33.578 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:33.578 14:25:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:35.491 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:35.491 00:11:35.491 real 0m13.173s 00:11:35.491 user 0m15.768s 00:11:35.491 sys 0m6.389s 00:11:35.491 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:35.753 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:35.753 
************************************ 00:11:35.754 END TEST nvmf_referrals 00:11:35.754 ************************************ 00:11:35.754 14:25:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:35.754 14:25:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:35.754 14:25:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:35.754 14:25:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:35.754 ************************************ 00:11:35.754 START TEST nvmf_connect_disconnect 00:11:35.754 ************************************ 00:11:35.754 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:35.754 * Looking for test storage... 
00:11:35.754 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:35.754 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:35.754 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:11:35.754 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:36.016 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:36.016 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:36.016 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:36.016 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:36.016 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:11:36.016 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:11:36.016 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:11:36.016 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:11:36.016 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:11:36.016 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:11:36.016 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:11:36.016 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:36.016 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:11:36.016 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:11:36.016 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:36.016 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:36.016 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:11:36.016 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:11:36.016 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:36.016 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:11:36.016 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:11:36.016 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:11:36.016 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:11:36.016 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:36.016 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:11:36.016 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:11:36.016 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:36.016 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:36.016 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:11:36.016 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:36.016 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:36.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:36.016 --rc genhtml_branch_coverage=1 00:11:36.016 --rc genhtml_function_coverage=1 00:11:36.016 --rc genhtml_legend=1 00:11:36.016 --rc geninfo_all_blocks=1 00:11:36.016 --rc geninfo_unexecuted_blocks=1 00:11:36.016 00:11:36.016 ' 00:11:36.016 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:36.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:36.016 --rc genhtml_branch_coverage=1 00:11:36.016 --rc genhtml_function_coverage=1 00:11:36.016 --rc genhtml_legend=1 00:11:36.016 --rc geninfo_all_blocks=1 00:11:36.016 --rc geninfo_unexecuted_blocks=1 00:11:36.016 00:11:36.016 ' 00:11:36.016 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:36.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:36.016 --rc genhtml_branch_coverage=1 00:11:36.016 --rc genhtml_function_coverage=1 00:11:36.016 --rc genhtml_legend=1 00:11:36.016 --rc geninfo_all_blocks=1 00:11:36.016 --rc geninfo_unexecuted_blocks=1 00:11:36.016 00:11:36.016 ' 00:11:36.016 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:36.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:36.016 --rc genhtml_branch_coverage=1 00:11:36.016 --rc genhtml_function_coverage=1 00:11:36.016 --rc genhtml_legend=1 00:11:36.016 --rc geninfo_all_blocks=1 00:11:36.016 --rc geninfo_unexecuted_blocks=1 00:11:36.016 00:11:36.016 ' 00:11:36.016 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:36.016 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:36.016 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:36.016 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:36.016 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:36.016 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:36.016 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:36.016 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:36.016 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:36.016 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:36.016 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:36.016 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:36.016 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:36.016 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:36.016 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:36.016 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:11:36.016 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:36.016 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:36.017 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:36.017 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:11:36.017 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:36.017 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:36.017 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:36.017 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.017 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.017 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.017 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:36.017 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:36.017 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:11:36.017 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:36.017 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:36.017 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:36.017 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:36.017 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:36.017 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:36.017 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:36.017 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:36.017 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:36.017 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:36.017 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:36.017 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:36.017 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:36.017 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:36.017 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:36.017 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:36.017 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:36.017 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:36.017 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:36.017 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:36.017 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:36.017 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:36.017 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:36.017 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:11:36.017 14:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:44.159 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:44.159 14:25:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:11:44.159 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:44.159 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:44.159 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:44.159 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:44.159 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:44.159 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:11:44.159 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:44.159 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:11:44.159 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:11:44.159 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:11:44.159 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:11:44.159 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:11:44.159 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:11:44.159 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:44.159 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:44.159 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:44.159 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:44.159 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:44.159 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:44.159 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:44.159 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:44.159 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:44.159 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:44.159 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:44.159 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:44.159 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:44.159 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:44.159 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:44.159 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:44.160 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:44.160 14:25:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:44.160 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:44.160 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:44.160 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:44.160 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:44.160 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:44.160 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:44.160 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:44.160 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:44.160 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:44.160 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:44.160 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:44.160 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:44.160 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:44.160 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:44.160 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:44.160 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:44.160 14:25:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:44.160 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:44.160 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:44.160 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:44.160 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:44.160 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:44.160 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:44.160 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:44.160 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:44.160 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:44.160 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:44.160 Found net devices under 0000:31:00.0: cvl_0_0 00:11:44.160 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:44.160 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:44.160 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:44.160 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:44.160 14:25:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:44.160 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:44.160 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:44.160 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:44.160 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:44.160 Found net devices under 0000:31:00.1: cvl_0_1 00:11:44.160 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:44.160 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:44.160 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # is_hw=yes 00:11:44.160 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:44.160 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:44.160 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:44.160 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:44.160 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:44.160 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:44.160 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:44.160 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:44.160 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:44.160 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:44.160 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:44.160 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:44.160 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:44.160 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:44.160 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:44.160 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:44.160 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:44.160 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:44.160 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:44.160 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:44.160 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:44.160 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:44.160 14:25:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:44.160 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:44.160 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:44.160 14:25:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:44.160 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:44.160 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.594 ms 00:11:44.160 00:11:44.160 --- 10.0.0.2 ping statistics --- 00:11:44.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:44.160 rtt min/avg/max/mdev = 0.594/0.594/0.594/0.000 ms 00:11:44.160 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:44.160 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:44.160 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.295 ms 00:11:44.160 00:11:44.160 --- 10.0.0.1 ping statistics --- 00:11:44.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:44.160 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:11:44.160 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:44.160 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # return 0 00:11:44.160 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:44.160 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:44.160 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:44.160 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:44.160 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:44.160 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:44.160 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:44.160 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:44.160 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:44.160 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:44.160 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:44.160 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # 
nvmfpid=3290691 00:11:44.160 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # waitforlisten 3290691 00:11:44.160 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:44.160 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 3290691 ']' 00:11:44.160 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:44.160 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:44.160 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:44.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:44.160 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:44.160 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:44.160 [2024-10-14 14:25:24.112532] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:11:44.160 [2024-10-14 14:25:24.112601] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:44.160 [2024-10-14 14:25:24.187694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:44.160 [2024-10-14 14:25:24.230823] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:11:44.160 [2024-10-14 14:25:24.230861] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:44.160 [2024-10-14 14:25:24.230868] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:44.160 [2024-10-14 14:25:24.230875] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:44.160 [2024-10-14 14:25:24.230881] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:44.160 [2024-10-14 14:25:24.232589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:44.161 [2024-10-14 14:25:24.232730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:44.161 [2024-10-14 14:25:24.232896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.161 [2024-10-14 14:25:24.232896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:44.422 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:44.422 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:11:44.422 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:44.422 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:44.422 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:44.422 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:44.422 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:44.422 14:25:24 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.422 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:44.422 [2024-10-14 14:25:24.968503] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:44.422 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.422 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:44.422 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.422 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:44.422 14:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.422 14:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:44.422 14:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:44.422 14:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.422 14:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:44.422 14:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.422 14:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:44.422 14:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.422 14:25:25 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:44.422 14:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.422 14:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:44.422 14:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.422 14:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:44.422 [2024-10-14 14:25:25.035360] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:44.422 14:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.422 14:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:11:44.422 14:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:11:44.422 14:25:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:11:48.628 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.928 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:56.132 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:59.431 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.730 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.730 14:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:02.730 14:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:02.730 14:25:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:02.730 14:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:12:02.730 14:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:02.730 14:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:12:02.730 14:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:02.730 14:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:02.730 rmmod nvme_tcp 00:12:02.730 rmmod nvme_fabrics 00:12:02.730 rmmod nvme_keyring 00:12:02.730 14:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:02.730 14:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:12:02.730 14:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:12:02.730 14:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@515 -- # '[' -n 3290691 ']' 00:12:02.730 14:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # killprocess 3290691 00:12:02.730 14:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 3290691 ']' 00:12:02.730 14:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 3290691 00:12:02.730 14:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 00:12:02.730 14:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:02.730 14:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3290691 
00:12:02.990 14:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:02.990 14:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:02.990 14:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3290691' 00:12:02.990 killing process with pid 3290691 00:12:02.990 14:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 3290691 00:12:02.990 14:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 3290691 00:12:02.990 14:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:02.990 14:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:02.990 14:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:02.991 14:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:12:02.991 14:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # iptables-save 00:12:02.991 14:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:02.991 14:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # iptables-restore 00:12:02.991 14:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:02.991 14:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:02.991 14:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:02.991 14:25:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:02.991 14:25:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:05.537 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:05.537 00:12:05.537 real 0m29.433s 00:12:05.537 user 1m19.479s 00:12:05.537 sys 0m7.093s 00:12:05.537 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:05.537 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:05.537 ************************************ 00:12:05.537 END TEST nvmf_connect_disconnect 00:12:05.537 ************************************ 00:12:05.538 14:25:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:05.538 14:25:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:05.538 14:25:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:05.538 14:25:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:05.538 ************************************ 00:12:05.538 START TEST nvmf_multitarget 00:12:05.538 ************************************ 00:12:05.538 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:05.538 * Looking for test storage... 
00:12:05.538 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:05.538 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:05.538 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:05.538 14:25:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lcov --version 00:12:05.538 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:05.538 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:05.538 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:05.538 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:05.538 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:12:05.538 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:12:05.538 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:12:05.538 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:12:05.538 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:12:05.538 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:12:05.538 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:12:05.538 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:05.538 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:12:05.538 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:12:05.538 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:05.538 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:05.538 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:12:05.538 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:12:05.538 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:05.538 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:12:05.538 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:12:05.538 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:12:05.538 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:12:05.538 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:05.538 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:12:05.538 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:12:05.538 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:05.538 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:05.538 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:12:05.538 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:05.538 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:05.538 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.538 --rc genhtml_branch_coverage=1 00:12:05.538 --rc genhtml_function_coverage=1 00:12:05.538 --rc genhtml_legend=1 00:12:05.538 --rc geninfo_all_blocks=1 00:12:05.538 --rc geninfo_unexecuted_blocks=1 00:12:05.538 00:12:05.538 ' 00:12:05.538 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:05.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.538 --rc genhtml_branch_coverage=1 00:12:05.538 --rc genhtml_function_coverage=1 00:12:05.538 --rc genhtml_legend=1 00:12:05.538 --rc geninfo_all_blocks=1 00:12:05.538 --rc geninfo_unexecuted_blocks=1 00:12:05.538 00:12:05.538 ' 00:12:05.538 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:05.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.538 --rc genhtml_branch_coverage=1 00:12:05.538 --rc genhtml_function_coverage=1 00:12:05.538 --rc genhtml_legend=1 00:12:05.538 --rc geninfo_all_blocks=1 00:12:05.538 --rc geninfo_unexecuted_blocks=1 00:12:05.538 00:12:05.538 ' 00:12:05.538 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:05.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.538 --rc genhtml_branch_coverage=1 00:12:05.538 --rc genhtml_function_coverage=1 00:12:05.538 --rc genhtml_legend=1 00:12:05.538 --rc geninfo_all_blocks=1 00:12:05.538 --rc geninfo_unexecuted_blocks=1 00:12:05.538 00:12:05.538 ' 00:12:05.538 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:05.538 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:05.538 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:05.538 14:25:46 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:05.538 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:05.538 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:05.538 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:05.538 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:05.538 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:05.538 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:05.538 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:05.538 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:05.538 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:05.538 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:05.538 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:05.538 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:05.538 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:05.538 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:05.538 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:05.538 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:12:05.538 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:05.538 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:05.538 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:05.538 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.538 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.538 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.538 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:05.538 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.538 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:12:05.538 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:05.538 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:05.538 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:05.538 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:12:05.538 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:05.538 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:05.538 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:05.539 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:05.539 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:05.539 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:05.539 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:05.539 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:05.539 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:05.539 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:05.539 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:05.539 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:05.539 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:05.539 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:05.539 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:05.539 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:05.539 14:25:46 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:05.539 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:05.539 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:12:05.539 14:25:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:13.688 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:13.688 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:12:13.688 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:13.688 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:13.688 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:13.688 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:13.688 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:13.688 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:12:13.688 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:13.688 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:12:13.688 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:12:13.688 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:12:13.688 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:12:13.688 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:12:13.688 14:25:53 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:12:13.688 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:13.688 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:13.688 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:13.688 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:13.688 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:13.688 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:13.688 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:13.688 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:13.688 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:13.688 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:13.688 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:13.688 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:13.688 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:13.688 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:13.688 14:25:53 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:13.688 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:13.688 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:13.688 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:13.688 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:13.688 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:13.688 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:13.688 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:13.688 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:13.688 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:13.688 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:13.688 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:13.688 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:13.688 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:13.688 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:13.688 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:13.688 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:13.688 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:13.688 14:25:53 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:13.688 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:13.688 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:13.688 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:13.688 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:13.688 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:13.688 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:13.688 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:13.688 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:13.688 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:13.688 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:13.688 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:13.688 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:13.688 Found net devices under 0000:31:00.0: cvl_0_0 00:12:13.688 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:13.688 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:13.688 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:13.688 
14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:13.688 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:13.688 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:13.688 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:13.688 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:13.688 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:13.688 Found net devices under 0000:31:00.1: cvl_0_1 00:12:13.688 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:13.688 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:13.688 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # is_hw=yes 00:12:13.688 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:13.688 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:13.688 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:13.688 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:13.688 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:13.688 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:13.688 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:13.688 14:25:53 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:13.688 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:13.688 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:13.688 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:13.688 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:13.688 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:13.688 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:13.688 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:13.688 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:13.688 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:13.688 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:13.689 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:13.689 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:13.689 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:13.689 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:13.689 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:12:13.689 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:13.689 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:13.689 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:13.689 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:13.689 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.410 ms 00:12:13.689 00:12:13.689 --- 10.0.0.2 ping statistics --- 00:12:13.689 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:13.689 rtt min/avg/max/mdev = 0.410/0.410/0.410/0.000 ms 00:12:13.689 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:13.689 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:13.689 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:12:13.689 00:12:13.689 --- 10.0.0.1 ping statistics --- 00:12:13.689 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:13.689 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:12:13.689 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:13.689 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # return 0 00:12:13.689 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:13.689 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:13.689 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:13.689 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:13.689 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:13.689 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:13.689 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:13.689 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:13.689 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:13.689 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:13.689 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:13.689 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # nvmfpid=3298873 00:12:13.689 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # 
waitforlisten 3298873 00:12:13.689 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:13.689 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 3298873 ']' 00:12:13.689 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:13.689 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:13.689 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:13.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:13.689 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:13.689 14:25:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:13.689 [2024-10-14 14:25:53.547634] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:12:13.689 [2024-10-14 14:25:53.547699] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:13.689 [2024-10-14 14:25:53.621147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:13.689 [2024-10-14 14:25:53.664215] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:13.689 [2024-10-14 14:25:53.664250] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:13.689 [2024-10-14 14:25:53.664259] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:13.689 [2024-10-14 14:25:53.664266] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:13.689 [2024-10-14 14:25:53.664272] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:13.689 [2024-10-14 14:25:53.665907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:13.689 [2024-10-14 14:25:53.666044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:13.689 [2024-10-14 14:25:53.666214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:13.689 [2024-10-14 14:25:53.666310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:13.689 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:13.689 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:12:13.689 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:13.689 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:13.689 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:13.689 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:13.689 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:13.689 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:13.689 14:25:54 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:13.950 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:13.950 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:13.950 "nvmf_tgt_1" 00:12:13.950 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:13.950 "nvmf_tgt_2" 00:12:14.210 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:14.210 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:14.210 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:14.210 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:14.210 true 00:12:14.210 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:14.471 true 00:12:14.471 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:14.471 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:14.471 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:14.471 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:14.471 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:14.471 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:14.471 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:12:14.471 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:14.471 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:12:14.471 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:14.471 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:14.471 rmmod nvme_tcp 00:12:14.471 rmmod nvme_fabrics 00:12:14.471 rmmod nvme_keyring 00:12:14.471 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:14.471 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:12:14.471 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:12:14.471 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@515 -- # '[' -n 3298873 ']' 00:12:14.471 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # killprocess 3298873 00:12:14.471 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 3298873 ']' 00:12:14.471 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 3298873 00:12:14.471 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:12:14.471 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:14.471 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3298873 00:12:14.732 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:14.732 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:14.732 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3298873' 00:12:14.732 killing process with pid 3298873 00:12:14.732 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 3298873 00:12:14.732 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 3298873 00:12:14.732 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:14.732 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:14.732 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:14.732 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:12:14.732 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:14.732 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # iptables-save 00:12:14.732 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # iptables-restore 00:12:14.732 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:14.733 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:14.733 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@654 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:12:14.733 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:14.733 14:25:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:17.278 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:17.278 00:12:17.278 real 0m11.639s 00:12:17.278 user 0m9.786s 00:12:17.278 sys 0m6.010s 00:12:17.278 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:17.278 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:17.278 ************************************ 00:12:17.278 END TEST nvmf_multitarget 00:12:17.278 ************************************ 00:12:17.278 14:25:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:17.278 14:25:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:17.278 14:25:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:17.278 14:25:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:17.278 ************************************ 00:12:17.278 START TEST nvmf_rpc 00:12:17.278 ************************************ 00:12:17.278 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:17.278 * Looking for test storage... 
00:12:17.278 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:17.278 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:17.278 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:12:17.278 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:17.278 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:17.278 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:17.278 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:17.278 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:17.278 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:12:17.278 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:12:17.278 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:12:17.278 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:12:17.278 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:12:17.278 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:12:17.278 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:12:17.278 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:17.278 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:12:17.278 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:12:17.278 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:17.278 14:25:57 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:17.278 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:12:17.278 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:12:17.278 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:17.278 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:12:17.278 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:12:17.278 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:12:17.278 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:12:17.278 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:17.278 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:12:17.278 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:12:17.278 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:17.278 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:17.278 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:12:17.278 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:17.278 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:17.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.278 --rc genhtml_branch_coverage=1 00:12:17.278 --rc genhtml_function_coverage=1 00:12:17.278 --rc genhtml_legend=1 00:12:17.278 --rc geninfo_all_blocks=1 00:12:17.278 --rc geninfo_unexecuted_blocks=1 
00:12:17.278 00:12:17.278 ' 00:12:17.278 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:17.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.278 --rc genhtml_branch_coverage=1 00:12:17.278 --rc genhtml_function_coverage=1 00:12:17.278 --rc genhtml_legend=1 00:12:17.278 --rc geninfo_all_blocks=1 00:12:17.278 --rc geninfo_unexecuted_blocks=1 00:12:17.278 00:12:17.278 ' 00:12:17.278 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:17.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.278 --rc genhtml_branch_coverage=1 00:12:17.278 --rc genhtml_function_coverage=1 00:12:17.278 --rc genhtml_legend=1 00:12:17.278 --rc geninfo_all_blocks=1 00:12:17.278 --rc geninfo_unexecuted_blocks=1 00:12:17.278 00:12:17.278 ' 00:12:17.278 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:17.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.278 --rc genhtml_branch_coverage=1 00:12:17.278 --rc genhtml_function_coverage=1 00:12:17.278 --rc genhtml_legend=1 00:12:17.278 --rc geninfo_all_blocks=1 00:12:17.278 --rc geninfo_unexecuted_blocks=1 00:12:17.278 00:12:17.278 ' 00:12:17.278 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:17.278 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:17.278 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:17.278 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:17.278 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:17.278 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:17.278 14:25:57 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:17.278 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:17.278 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:17.278 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:17.278 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:17.278 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:17.278 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:17.278 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:17.278 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:17.278 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:17.279 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:17.279 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:17.279 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:17.279 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:12:17.279 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:17.279 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:17.279 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:17.279 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.279 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.279 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.279 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:17.279 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.279 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:12:17.279 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:17.279 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:17.279 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:17.279 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:17.279 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:17.279 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:17.279 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:17.279 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:17.279 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:17.279 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:17.279 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:17.279 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:17.279 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:17.279 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:17.279 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:17.279 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:17.279 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:17.279 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:17.279 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:17.279 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:17.279 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:17.279 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:17.279 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:12:17.279 14:25:57 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.425 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:25.425 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:12:25.425 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:25.425 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:25.425 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:25.425 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:25.425 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:25.425 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:12:25.425 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:25.425 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:12:25.425 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:12:25.425 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:12:25.425 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:12:25.425 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:12:25.425 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:12:25.425 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:25.425 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:25.425 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:25.425 
14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:25.425 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:25.425 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:25.425 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:25.425 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:25.425 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:25.425 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:25.425 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:25.425 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:25.425 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:25.425 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:25.425 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:25.425 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:25.425 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:25.425 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:25.425 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:25.425 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 
(0x8086 - 0x159b)' 00:12:25.425 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:25.425 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:25.425 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:25.425 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:25.425 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:25.425 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:25.425 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:25.425 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:25.425 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:25.425 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:25.425 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:25.425 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:25.425 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:25.425 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:25.425 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:25.425 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:25.425 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:25.425 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:25.425 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:12:25.425 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:25.425 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:25.425 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:25.425 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:25.425 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:25.425 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:25.425 Found net devices under 0000:31:00.0: cvl_0_0 00:12:25.425 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:25.425 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:25.425 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:25.425 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:25.425 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:25.425 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:25.425 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:25.425 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:25.425 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:25.425 Found net devices under 0000:31:00.1: cvl_0_1 00:12:25.425 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:25.425 14:26:04 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:25.425 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # is_hw=yes 00:12:25.425 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:25.425 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:25.425 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:25.425 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:25.425 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:25.425 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:25.425 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:25.425 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:25.425 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:25.425 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:25.425 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:25.425 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:25.425 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:25.425 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:25.425 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:25.426 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:25.426 
14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:25.426 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:25.426 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:25.426 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:25.426 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:25.426 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:25.426 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:25.426 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:25.426 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:25.426 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:25.426 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:25.426 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.655 ms 00:12:25.426 00:12:25.426 --- 10.0.0.2 ping statistics --- 00:12:25.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:25.426 rtt min/avg/max/mdev = 0.655/0.655/0.655/0.000 ms 00:12:25.426 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:25.426 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:25.426 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.303 ms 00:12:25.426 00:12:25.426 --- 10.0.0.1 ping statistics --- 00:12:25.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:25.426 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:12:25.426 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:25.426 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # return 0 00:12:25.426 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:25.426 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:25.426 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:25.426 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:25.426 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:25.426 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:25.426 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:25.426 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:25.426 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:25.426 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:25.426 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.426 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # nvmfpid=3303390 00:12:25.426 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # waitforlisten 3303390 00:12:25.426 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@506 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:25.426 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 3303390 ']' 00:12:25.426 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:25.426 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:25.426 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:25.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:25.426 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:25.426 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.426 [2024-10-14 14:26:05.314219] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:12:25.426 [2024-10-14 14:26:05.314317] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:25.426 [2024-10-14 14:26:05.390829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:25.426 [2024-10-14 14:26:05.434790] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:25.426 [2024-10-14 14:26:05.434829] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:25.426 [2024-10-14 14:26:05.434837] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:12:25.426 [2024-10-14 14:26:05.434844] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:12:25.426 [2024-10-14 14:26:05.434850] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:12:25.426 [2024-10-14 14:26:05.436796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:12:25.426 [2024-10-14 14:26:05.436914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:12:25.426 [2024-10-14 14:26:05.437089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:12:25.426 [2024-10-14 14:26:05.437092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:25.426 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:12:25.426 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0
00:12:25.426 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:12:25.426 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable
00:12:25.426 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:25.687 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:12:25.687 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats
00:12:25.687 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:25.687 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:25.687 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:25.687 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{
00:12:25.687 "tick_rate": 2400000000,
00:12:25.687 "poll_groups": [
00:12:25.687 {
00:12:25.687 "name": "nvmf_tgt_poll_group_000",
00:12:25.687 "admin_qpairs": 0,
00:12:25.687 "io_qpairs": 0,
00:12:25.687 "current_admin_qpairs": 0,
00:12:25.687 "current_io_qpairs": 0,
00:12:25.687 "pending_bdev_io": 0,
00:12:25.687 "completed_nvme_io": 0,
00:12:25.687 "transports": []
00:12:25.687 },
00:12:25.687 {
00:12:25.687 "name": "nvmf_tgt_poll_group_001",
00:12:25.687 "admin_qpairs": 0,
00:12:25.687 "io_qpairs": 0,
00:12:25.687 "current_admin_qpairs": 0,
00:12:25.687 "current_io_qpairs": 0,
00:12:25.687 "pending_bdev_io": 0,
00:12:25.687 "completed_nvme_io": 0,
00:12:25.687 "transports": []
00:12:25.687 },
00:12:25.687 {
00:12:25.687 "name": "nvmf_tgt_poll_group_002",
00:12:25.687 "admin_qpairs": 0,
00:12:25.687 "io_qpairs": 0,
00:12:25.687 "current_admin_qpairs": 0,
00:12:25.687 "current_io_qpairs": 0,
00:12:25.687 "pending_bdev_io": 0,
00:12:25.687 "completed_nvme_io": 0,
00:12:25.687 "transports": []
00:12:25.687 },
00:12:25.687 {
00:12:25.687 "name": "nvmf_tgt_poll_group_003",
00:12:25.687 "admin_qpairs": 0,
00:12:25.687 "io_qpairs": 0,
00:12:25.687 "current_admin_qpairs": 0,
00:12:25.687 "current_io_qpairs": 0,
00:12:25.687 "pending_bdev_io": 0,
00:12:25.687 "completed_nvme_io": 0,
00:12:25.687 "transports": []
00:12:25.687 }
00:12:25.687 ]
00:12:25.687 }'
00:12:25.687 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name'
00:12:25.687 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name'
00:12:25.687 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name'
00:12:25.687 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l
00:12:25.687 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 ))
00:12:25.687 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]'
00:12:25.687 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]]
00:12:25.687 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:12:25.687 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:25.687 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:25.687 [2024-10-14 14:26:06.280782] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:12:25.687 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:25.687 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats
00:12:25.687 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:25.687 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:25.687 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:25.687 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{
00:12:25.687 "tick_rate": 2400000000,
00:12:25.687 "poll_groups": [
00:12:25.687 {
00:12:25.687 "name": "nvmf_tgt_poll_group_000",
00:12:25.687 "admin_qpairs": 0,
00:12:25.687 "io_qpairs": 0,
00:12:25.687 "current_admin_qpairs": 0,
00:12:25.687 "current_io_qpairs": 0,
00:12:25.687 "pending_bdev_io": 0,
00:12:25.687 "completed_nvme_io": 0,
00:12:25.687 "transports": [
00:12:25.687 {
00:12:25.687 "trtype": "TCP"
00:12:25.687 }
00:12:25.687 ]
00:12:25.687 },
00:12:25.687 {
00:12:25.687 "name": "nvmf_tgt_poll_group_001",
00:12:25.687 "admin_qpairs": 0,
00:12:25.687 "io_qpairs": 0,
00:12:25.687 "current_admin_qpairs": 0,
00:12:25.687 "current_io_qpairs": 0,
00:12:25.687 "pending_bdev_io": 0,
00:12:25.687 "completed_nvme_io": 0,
00:12:25.687 "transports": [
00:12:25.687 {
00:12:25.687 "trtype": "TCP"
00:12:25.687 }
00:12:25.687 ]
00:12:25.687 },
00:12:25.687 {
00:12:25.687 "name": "nvmf_tgt_poll_group_002",
00:12:25.687 "admin_qpairs": 0,
00:12:25.687 "io_qpairs": 0,
00:12:25.687 "current_admin_qpairs": 0,
00:12:25.687 "current_io_qpairs": 0,
00:12:25.687 "pending_bdev_io": 0,
00:12:25.687 "completed_nvme_io": 0,
00:12:25.687 "transports": [
00:12:25.687 {
00:12:25.687 "trtype": "TCP"
00:12:25.687 }
00:12:25.687 ]
00:12:25.687 },
00:12:25.687 {
00:12:25.687 "name": "nvmf_tgt_poll_group_003",
00:12:25.687 "admin_qpairs": 0,
00:12:25.687 "io_qpairs": 0,
00:12:25.687 "current_admin_qpairs": 0,
00:12:25.687 "current_io_qpairs": 0,
00:12:25.687 "pending_bdev_io": 0,
00:12:25.687 "completed_nvme_io": 0,
00:12:25.687 "transports": [
00:12:25.687 {
00:12:25.687 "trtype": "TCP"
00:12:25.687 }
00:12:25.687 ]
00:12:25.687 }
00:12:25.687 ]
00:12:25.687 }'
00:12:25.687 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs'
00:12:25.687 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs'
00:12:25.687 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs'
00:12:25.687 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:12:25.687 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 ))
00:12:25.687 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs'
00:12:25.687 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs'
00:12:25.687 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs'
00:12:25.687 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:12:25.687 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 ))
00:12:25.687 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']'
00:12:25.687 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64
00:12:25.687 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512
00:12:25.687 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:12:25.687 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:25.688 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:25.949 Malloc1
00:12:25.949 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:25.949 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:12:25.949 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:25.949 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:25.949 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:25.949 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:12:25.949 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:25.949 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:25.949 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:25.949 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1
00:12:25.949 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:25.949 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:25.949 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:25.949 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:12:25.949 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:25.949 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:25.949 [2024-10-14 14:26:06.479359] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:25.949 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:25.949 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420
00:12:25.949 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0
00:12:25.949 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420
00:12:25.949 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme
00:12:25.949 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:12:25.949 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme
00:12:25.949 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:12:25.949 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme
00:12:25.949 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:12:25.949 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme
00:12:25.949 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]]
00:12:25.949 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420
00:12:25.949 [2024-10-14 14:26:06.516161] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396'
00:12:25.949 Failed to write to /dev/nvme-fabrics: Input/output error
00:12:25.949 could not add new controller: failed to write to nvme-fabrics device
00:12:25.949 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1
00:12:25.949 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:12:25.949 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:12:25.949 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:12:25.949 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:12:25.949 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:25.949 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:25.949 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:25.949 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:12:27.879 14:26:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME
00:12:27.879 14:26:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0
00:12:27.879 14:26:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:12:27.879 14:26:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:12:27.879 14:26:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2
00:12:29.794 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:12:29.794 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:12:29.794 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:12:29.794 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:12:29.794 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:12:29.794 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0
00:12:29.794 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:12:29.794 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:29.794 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:12:29.794 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0
00:12:29.794 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:12:29.794 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:29.794 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:12:29.794 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:29.794 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0
00:12:29.794 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:12:29.794 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:29.794 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:29.794 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:29.794 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:12:29.794 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0
00:12:29.794 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:12:29.794 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme
00:12:29.794 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:12:29.794 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme
00:12:29.794 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:12:29.794 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme
00:12:29.794 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:12:29.794 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme
00:12:29.794 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]]
00:12:29.794 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:12:29.794 [2024-10-14 14:26:10.293032] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396'
00:12:29.794 Failed to write to /dev/nvme-fabrics: Input/output error
00:12:29.794 could not add new controller: failed to write to nvme-fabrics device
00:12:29.794 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1
00:12:29.794 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:12:29.794 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:12:29.794 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:12:29.794 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1
00:12:29.794 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:29.794 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:29.794 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:29.794 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:12:31.179 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME
00:12:31.179 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0
00:12:31.179 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:12:31.179 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:12:31.179 14:26:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2
00:12:33.722 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:12:33.722 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:12:33.722 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:12:33.722 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:12:33.722 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:12:33.722 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0
00:12:33.722 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:12:33.722 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:33.722 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:12:33.722 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0
00:12:33.722 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:12:33.722 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:33.722 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:12:33.722 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:33.722 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0
00:12:33.722 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:33.722 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:33.722 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:33.722 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:33.722 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5
00:12:33.722 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:12:33.723 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:12:33.723 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:33.723 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:33.723 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:33.723 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:12:33.723 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:33.723 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:33.723 [2024-10-14 14:26:14.109030] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:33.723 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:33.723 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:12:33.723 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:33.723 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:33.723 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:33.723 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:12:33.723 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:33.723 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:33.723 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:33.723 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:12:35.108 14:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:12:35.108 14:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0
00:12:35.108 14:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:12:35.108 14:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:12:35.108 14:26:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2
00:12:37.021 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:12:37.021 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:12:37.021 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:12:37.021 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:12:37.021 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:12:37.021 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0
00:12:37.021 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:12:37.282 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:37.282 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:12:37.282 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0
00:12:37.282 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:12:37.282 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:37.282 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:12:37.282 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:37.282 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0
00:12:37.282 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:12:37.282 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:37.282 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:37.282 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:37.282 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:37.282 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:37.282 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:37.282 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:37.282 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:12:37.282 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:12:37.282 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:37.282 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:37.282 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:37.282 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:12:37.283 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:37.283 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:37.283 [2024-10-14 14:26:17.834587] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:37.283 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:37.283 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:12:37.283 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:37.283 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:37.283 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:37.283 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:12:37.283 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:37.283 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:37.283 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:37.283 14:26:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:12:38.666 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:12:38.666 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0
00:12:38.666 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:12:38.666 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:12:38.666 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2
00:12:40.632 14:26:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:12:40.632 14:26:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:12:40.632 14:26:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:12:40.893 14:26:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:12:40.893 14:26:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:12:40.893 14:26:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0
00:12:40.893 14:26:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:12:40.893 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:40.893 14:26:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:12:40.893 14:26:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0
00:12:40.893 14:26:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:12:40.893 14:26:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:40.894 14:26:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:12:40.894 14:26:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:40.894 14:26:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0
00:12:40.894 14:26:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:12:40.894 14:26:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:40.894 14:26:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:40.894 14:26:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:40.894 14:26:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:40.894 14:26:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:40.894 14:26:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:40.894 14:26:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:40.894 14:26:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:12:40.894 14:26:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:12:40.894 14:26:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:40.894 14:26:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:40.894 14:26:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:40.894 14:26:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:12:40.894 14:26:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:40.894 14:26:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:40.894 [2024-10-14 14:26:21.560255] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:40.894 14:26:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:40.894 14:26:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:12:40.894 14:26:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:40.894 14:26:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:40.894 14:26:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:40.894 14:26:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:12:40.894 14:26:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:40.894 14:26:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:40.894 14:26:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:40.894 14:26:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:12:42.808 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:42.808 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:42.808 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:42.808 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:42.808 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:44.721 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:44.721 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:44.721 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:44.721 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:44.721 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:44.721 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:44.721 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:44.721 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:44.721 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:44.721 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:44.721 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:44.721 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:44.721 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:44.721 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:44.721 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:44.721 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:44.721 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.721 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.721 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.721 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:44.721 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.721 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.721 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.721 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:44.721 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:44.721 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.721 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.721 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.721 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:12:44.722 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.722 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.722 [2024-10-14 14:26:25.280375] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:44.722 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.722 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:44.722 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.722 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.722 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.722 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:44.722 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.722 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.722 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.722 14:26:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:46.638 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:46.638 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:46.638 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 
-- # local nvme_device_counter=1 nvme_devices=0 00:12:46.638 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:46.638 14:26:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:48.553 14:26:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:48.553 14:26:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:48.553 14:26:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:48.553 14:26:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:48.553 14:26:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:48.553 14:26:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:48.553 14:26:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:48.553 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:48.553 14:26:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:48.553 14:26:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:48.553 14:26:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:48.553 14:26:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:48.553 14:26:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:48.553 14:26:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:48.553 14:26:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # return 0 00:12:48.553 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:48.553 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.553 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.553 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.553 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:48.553 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.553 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.553 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.553 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:48.553 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:48.553 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.553 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.553 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.553 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:48.553 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.553 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.553 [2024-10-14 14:26:29.044659] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:48.553 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.553 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:48.553 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.553 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.553 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.553 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:48.553 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.553 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.553 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.553 14:26:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:49.939 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:49.939 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:49.940 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:49.940 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:49.940 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # 
sleep 2 00:12:51.853 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:51.853 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:51.853 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:51.853 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:51.853 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:51.853 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:51.853 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:52.120 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:52.120 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:52.120 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:52.120 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:52.120 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:52.120 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:52.120 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:52.120 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:52.120 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:52.120 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.120 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.120 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.120 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:52.120 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.120 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.120 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.120 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:52.120 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:52.120 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:52.120 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.120 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.120 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.120 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:52.120 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.120 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.120 [2024-10-14 14:26:32.779379] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:52.120 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.120 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:52.120 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.120 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.120 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.120 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:52.120 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.120 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.120 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.120 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:52.120 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.120 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.120 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.120 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:52.120 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.120 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.120 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.120 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 
-- # for i in $(seq 1 $loops) 00:12:52.120 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:52.120 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.120 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.120 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.120 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:52.120 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.120 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.121 [2024-10-14 14:26:32.847487] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:52.443 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.443 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:52.443 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.443 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.443 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.443 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:52.443 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.443 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.443 
14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.443 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:52.443 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.443 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.443 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.443 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:52.443 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.443 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.443 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.443 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:52.443 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:52.443 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.443 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.443 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.443 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:52.443 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.443 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- 
# set +x 00:12:52.443 [2024-10-14 14:26:32.915673] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:52.443 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.443 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:52.443 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.443 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.443 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.443 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:52.443 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.443 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.443 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.443 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:52.443 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.443 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.443 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.443 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:52.443 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.443 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:12:52.443 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.443 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:52.443 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:52.443 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.443 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.443 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.443 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:52.443 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.443 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.443 [2024-10-14 14:26:32.983888] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:52.443 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.443 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:52.443 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.443 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.443 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.443 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 
00:12:52.443 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.443 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.443 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.443 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:52.443 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.443 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.443 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.443 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:52.443 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.443 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.444 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.444 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:52.444 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:52.444 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.444 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.444 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.444 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4420 00:12:52.444 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.444 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.444 [2024-10-14 14:26:33.048110] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:52.444 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.444 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:52.444 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.444 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.444 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.444 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:52.444 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.444 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.444 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.444 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:52.444 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.444 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.444 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.444 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:52.444 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.444 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.444 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.444 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:52.444 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.444 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.444 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.444 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:12:52.444 "tick_rate": 2400000000, 00:12:52.444 "poll_groups": [ 00:12:52.444 { 00:12:52.444 "name": "nvmf_tgt_poll_group_000", 00:12:52.444 "admin_qpairs": 0, 00:12:52.444 "io_qpairs": 224, 00:12:52.444 "current_admin_qpairs": 0, 00:12:52.444 "current_io_qpairs": 0, 00:12:52.444 "pending_bdev_io": 0, 00:12:52.444 "completed_nvme_io": 224, 00:12:52.444 "transports": [ 00:12:52.444 { 00:12:52.444 "trtype": "TCP" 00:12:52.444 } 00:12:52.444 ] 00:12:52.444 }, 00:12:52.444 { 00:12:52.444 "name": "nvmf_tgt_poll_group_001", 00:12:52.444 "admin_qpairs": 1, 00:12:52.444 "io_qpairs": 223, 00:12:52.444 "current_admin_qpairs": 0, 00:12:52.444 "current_io_qpairs": 0, 00:12:52.444 "pending_bdev_io": 0, 00:12:52.444 "completed_nvme_io": 275, 00:12:52.444 "transports": [ 00:12:52.444 { 00:12:52.444 "trtype": "TCP" 00:12:52.444 } 00:12:52.444 ] 00:12:52.444 }, 00:12:52.444 { 00:12:52.444 "name": "nvmf_tgt_poll_group_002", 00:12:52.444 "admin_qpairs": 6, 00:12:52.444 "io_qpairs": 218, 00:12:52.444 "current_admin_qpairs": 0, 00:12:52.444 "current_io_qpairs": 0, 00:12:52.444 "pending_bdev_io": 0, 
00:12:52.444 "completed_nvme_io": 311, 00:12:52.444 "transports": [ 00:12:52.444 { 00:12:52.444 "trtype": "TCP" 00:12:52.444 } 00:12:52.444 ] 00:12:52.444 }, 00:12:52.444 { 00:12:52.444 "name": "nvmf_tgt_poll_group_003", 00:12:52.444 "admin_qpairs": 0, 00:12:52.444 "io_qpairs": 224, 00:12:52.444 "current_admin_qpairs": 0, 00:12:52.444 "current_io_qpairs": 0, 00:12:52.444 "pending_bdev_io": 0, 00:12:52.444 "completed_nvme_io": 429, 00:12:52.444 "transports": [ 00:12:52.444 { 00:12:52.444 "trtype": "TCP" 00:12:52.444 } 00:12:52.444 ] 00:12:52.444 } 00:12:52.444 ] 00:12:52.444 }' 00:12:52.444 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:52.444 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:52.444 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:52.444 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:52.444 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:52.444 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:52.444 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:52.792 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:52.792 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:52.792 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:12:52.792 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:52.792 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:52.792 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@123 -- # nvmftestfini 00:12:52.792 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:52.792 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:12:52.792 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:52.792 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:12:52.792 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:52.792 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:52.792 rmmod nvme_tcp 00:12:52.792 rmmod nvme_fabrics 00:12:52.792 rmmod nvme_keyring 00:12:52.792 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:52.792 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:12:52.792 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:12:52.792 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@515 -- # '[' -n 3303390 ']' 00:12:52.792 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # killprocess 3303390 00:12:52.792 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 3303390 ']' 00:12:52.792 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 3303390 00:12:52.792 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:12:52.792 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:52.792 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3303390 00:12:52.792 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:52.792 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:52.792 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3303390' 00:12:52.792 killing process with pid 3303390 00:12:52.792 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 3303390 00:12:52.792 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 3303390 00:12:52.792 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:52.792 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:52.792 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:52.792 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:12:52.792 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # iptables-restore 00:12:52.792 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # iptables-save 00:12:52.792 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:52.792 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:52.792 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:52.792 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:52.792 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:52.792 14:26:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:55.418 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:55.418 00:12:55.418 real 0m38.024s 00:12:55.418 user 1m53.964s 00:12:55.418 sys 0m7.954s 00:12:55.418 14:26:35 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:55.418 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.418 ************************************ 00:12:55.418 END TEST nvmf_rpc 00:12:55.418 ************************************ 00:12:55.418 14:26:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:55.418 14:26:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:55.418 14:26:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:55.418 14:26:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:55.418 ************************************ 00:12:55.418 START TEST nvmf_invalid 00:12:55.418 ************************************ 00:12:55.418 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:55.418 * Looking for test storage... 
00:12:55.418 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:55.418 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:55.418 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lcov --version 00:12:55.418 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:55.418 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:55.418 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:55.418 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:55.418 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:55.418 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:12:55.418 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:12:55.418 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:12:55.418 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:12:55.418 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:12:55.418 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:12:55.418 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:12:55.418 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:55.418 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:12:55.418 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:12:55.418 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@364 -- # (( v = 0 )) 00:12:55.418 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:55.418 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:12:55.418 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:12:55.418 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:55.418 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:12:55.418 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:12:55.418 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:12:55.418 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:12:55.418 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:55.418 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:12:55.418 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:12:55.418 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:55.418 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:55.418 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:12:55.418 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:55.419 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:55.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.419 --rc genhtml_branch_coverage=1 00:12:55.419 --rc 
genhtml_function_coverage=1 00:12:55.419 --rc genhtml_legend=1 00:12:55.419 --rc geninfo_all_blocks=1 00:12:55.419 --rc geninfo_unexecuted_blocks=1 00:12:55.419 00:12:55.419 ' 00:12:55.419 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:55.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.419 --rc genhtml_branch_coverage=1 00:12:55.419 --rc genhtml_function_coverage=1 00:12:55.419 --rc genhtml_legend=1 00:12:55.419 --rc geninfo_all_blocks=1 00:12:55.419 --rc geninfo_unexecuted_blocks=1 00:12:55.419 00:12:55.419 ' 00:12:55.419 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:55.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.419 --rc genhtml_branch_coverage=1 00:12:55.419 --rc genhtml_function_coverage=1 00:12:55.419 --rc genhtml_legend=1 00:12:55.419 --rc geninfo_all_blocks=1 00:12:55.419 --rc geninfo_unexecuted_blocks=1 00:12:55.419 00:12:55.419 ' 00:12:55.419 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:55.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.419 --rc genhtml_branch_coverage=1 00:12:55.419 --rc genhtml_function_coverage=1 00:12:55.419 --rc genhtml_legend=1 00:12:55.419 --rc geninfo_all_blocks=1 00:12:55.419 --rc geninfo_unexecuted_blocks=1 00:12:55.419 00:12:55.419 ' 00:12:55.419 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:55.419 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:55.419 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:55.419 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:55.419 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:55.419 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:55.419 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:55.419 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:55.419 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:55.419 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:55.419 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:55.419 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:55.419 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:55.419 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:55.419 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:55.419 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:55.419 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:55.419 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:55.419 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:55.419 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:12:55.419 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ 
-e /bin/wpdk_common.sh ]] 00:12:55.419 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:55.419 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:55.419 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.419 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.419 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.419 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:55.419 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.419 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:12:55.419 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:55.419 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:55.419 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:55.419 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:55.419 14:26:35 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:55.419 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:55.419 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:55.419 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:55.419 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:55.419 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:55.419 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:55.419 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:55.419 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:55.419 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:55.419 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:55.419 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:55.419 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:55.419 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:55.419 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:55.419 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:55.419 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:55.419 14:26:35 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:55.419 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:55.419 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:55.419 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:55.419 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:55.419 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:12:55.419 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:03.567 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:03.567 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:13:03.567 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:03.567 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:03.567 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:03.567 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:03.567 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:03.567 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:13:03.567 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:03.567 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:13:03.567 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:13:03.567 14:26:43 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:13:03.567 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:13:03.567 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:13:03.567 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:13:03.567 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:03.567 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:03.567 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:03.567 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:03.567 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:03.567 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:03.567 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:03.567 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:03.567 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:03.567 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:03.567 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:03.567 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:03.567 14:26:43 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:03.567 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:03.567 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:03.567 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:03.567 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:03.567 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:03.567 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:03.567 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:03.567 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:03.567 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:03.567 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:03.567 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:03.567 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:03.567 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:03.567 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:03.567 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:03.567 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:03.567 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:03.567 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:13:03.567 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:03.567 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:03.567 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:03.567 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:03.567 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:03.567 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:03.567 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:03.567 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:03.567 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:03.567 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:03.567 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:03.567 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:03.568 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:03.568 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:03.568 Found net devices under 0000:31:00.0: cvl_0_0 00:13:03.568 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:03.568 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:03.568 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:03.568 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:03.568 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:03.568 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:03.568 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:03.568 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:03.568 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:03.568 Found net devices under 0000:31:00.1: cvl_0_1 00:13:03.568 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:03.568 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:13:03.568 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # is_hw=yes 00:13:03.568 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:13:03.568 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:13:03.568 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:13:03.568 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:03.568 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:03.568 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:03.568 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:03.568 14:26:43 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:03.568 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:03.568 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:03.568 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:03.568 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:03.568 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:03.568 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:03.568 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:03.568 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:03.568 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:03.568 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:03.568 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:03.568 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:03.568 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:03.568 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:03.568 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:03.568 14:26:43 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:03.568 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:03.568 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:03.568 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:03.568 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.619 ms 00:13:03.568 00:13:03.568 --- 10.0.0.2 ping statistics --- 00:13:03.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:03.568 rtt min/avg/max/mdev = 0.619/0.619/0.619/0.000 ms 00:13:03.568 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:03.568 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:03.568 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:13:03.568 00:13:03.568 --- 10.0.0.1 ping statistics --- 00:13:03.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:03.568 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:13:03.568 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:03.568 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # return 0 00:13:03.568 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:03.568 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:03.568 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:13:03.568 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:13:03.568 14:26:43 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:03.568 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:13:03.568 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:13:03.568 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:03.568 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:03.568 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:03.568 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:03.568 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # nvmfpid=3313283 00:13:03.568 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # waitforlisten 3313283 00:13:03.568 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:03.568 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 3313283 ']' 00:13:03.568 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:03.568 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:03.568 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:03.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
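Annotation: the nvmf_tcp_init trace above (nvmf/common.sh@250-291) reduces to a short piece of ip/iptables plumbing: flush both NICs, move the target-side NIC into a network namespace, address both sides, open TCP port 4420, and ping in both directions. The sketch below is an illustration of the traced steps, not the actual common.sh source; interface names (cvl_0_0/cvl_0_1), addresses, and the namespace name are taken from this run, and the commands require root.

```shell
#!/usr/bin/env bash
# Sketch of the nvmf_tcp_init steps traced above (run as root).
# cvl_0_0 becomes the target side inside a netns; cvl_0_1 stays in the
# host namespace as the initiator side.
NS=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"            # move the target NIC into the namespace

ip addr add 10.0.0.1/24 dev cvl_0_1        # initiator IP (host side)
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP (netns side)

ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

# Allow NVMe/TCP (port 4420) in; the comment tag lets cleanup find the rule later.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

# Sanity-check connectivity both ways, as the log does.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```

After this, every target process is launched through `ip netns exec cvl_0_0_ns_spdk` (the NVMF_TARGET_NS_CMD prefix seen above), so the target and initiator traverse a real TCP path between the two ports.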
00:13:03.568 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable
00:13:03.568 14:26:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:13:03.568 [2024-10-14 14:26:43.487467] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization...
00:13:03.568 [2024-10-14 14:26:43.487538] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:13:03.568 [2024-10-14 14:26:43.562779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:13:03.568 [2024-10-14 14:26:43.606885] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:13:03.568 [2024-10-14 14:26:43.606920] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:13:03.568 [2024-10-14 14:26:43.606929] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:13:03.568 [2024-10-14 14:26:43.606936] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:13:03.568 [2024-10-14 14:26:43.606942] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
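Annotation: the `waitforlisten 3313283` call traced above blocks until the just-launched nvmf_tgt answers on its RPC socket. A minimal stand-in for that polling loop might look like the following; `waitfor_rpc_sock` is a hypothetical, simplified helper, not the real common/autotest_common.sh implementation (which also verifies the PID is still alive and issues an actual RPC before returning).

```shell
# Hypothetical, simplified version of waitforlisten: poll until the
# target's UNIX-domain RPC socket appears, or give up after N tries.
waitfor_rpc_sock() {
  local sock=${1:-/var/tmp/spdk.sock}
  local retries=${2:-100}
  while (( retries-- > 0 )); do
    [ -S "$sock" ] && return 0   # -S: path exists and is a socket
    sleep 0.1
  done
  return 1                       # timed out; caller should fail the test
}
```

The real helper pairs this loop with a `kill -0 $pid` liveness check, so a target that crashes during startup fails the test immediately instead of burning the whole timeout.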
00:13:03.568 [2024-10-14 14:26:43.608856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:13:03.568 [2024-10-14 14:26:43.608974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:13:03.568 [2024-10-14 14:26:43.609174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:03.568 [2024-10-14 14:26:43.609175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:13:03.830 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:13:03.830 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0
00:13:03.830 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:13:03.830 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable
00:13:03.830 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:13:03.830 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:13:03.830 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT
00:13:03.830 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode30316
00:13:03.830 [2024-10-14 14:26:44.493131] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar
00:13:03.830 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request:
00:13:03.830 {
00:13:03.830 "nqn": "nqn.2016-06.io.spdk:cnode30316",
00:13:03.830 "tgt_name": "foobar",
00:13:03.830 "method": "nvmf_create_subsystem",
00:13:03.830 "req_id": 1
00:13:03.830 }
00:13:03.830 Got JSON-RPC error response
00:13:03.830 response:
00:13:03.830 {
00:13:03.830 "code": -32603,
00:13:03.830 "message": "Unable to find target foobar"
00:13:03.830 }'
00:13:03.830 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request:
00:13:03.830 {
00:13:03.830 "nqn": "nqn.2016-06.io.spdk:cnode30316",
00:13:03.830 "tgt_name": "foobar",
00:13:03.830 "method": "nvmf_create_subsystem",
00:13:03.830 "req_id": 1
00:13:03.830 }
00:13:03.830 Got JSON-RPC error response
00:13:03.830 response:
00:13:03.830 {
00:13:03.830 "code": -32603,
00:13:03.830 "message": "Unable to find target foobar"
00:13:03.830 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]]
00:13:03.830 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f'
00:13:03.830 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode18119
00:13:04.091 [2024-10-14 14:26:44.681787] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18119: invalid serial number 'SPDKISFASTANDAWESOME'
00:13:04.091 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request:
00:13:04.091 {
00:13:04.091 "nqn": "nqn.2016-06.io.spdk:cnode18119",
00:13:04.091 "serial_number": "SPDKISFASTANDAWESOME\u001f",
00:13:04.091 "method": "nvmf_create_subsystem",
00:13:04.091 "req_id": 1
00:13:04.091 }
00:13:04.091 Got JSON-RPC error response
00:13:04.091 response:
00:13:04.091 {
00:13:04.091 "code": -32602,
00:13:04.091 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f"
00:13:04.091 }'
00:13:04.091 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request:
00:13:04.091 {
00:13:04.091 "nqn": "nqn.2016-06.io.spdk:cnode18119",
00:13:04.091 "serial_number": "SPDKISFASTANDAWESOME\u001f",
00:13:04.091 "method": "nvmf_create_subsystem",
00:13:04.091 "req_id": 1 00:13:04.091 } 00:13:04.091 Got JSON-RPC error response 00:13:04.091 response: 00:13:04.091 { 00:13:04.091 "code": -32602, 00:13:04.091 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:04.091 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:04.091 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:04.091 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode692 00:13:04.352 [2024-10-14 14:26:44.870348] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode692: invalid model number 'SPDK_Controller' 00:13:04.352 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:13:04.352 { 00:13:04.352 "nqn": "nqn.2016-06.io.spdk:cnode692", 00:13:04.352 "model_number": "SPDK_Controller\u001f", 00:13:04.352 "method": "nvmf_create_subsystem", 00:13:04.352 "req_id": 1 00:13:04.352 } 00:13:04.352 Got JSON-RPC error response 00:13:04.352 response: 00:13:04.352 { 00:13:04.352 "code": -32602, 00:13:04.352 "message": "Invalid MN SPDK_Controller\u001f" 00:13:04.352 }' 00:13:04.352 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:13:04.352 { 00:13:04.352 "nqn": "nqn.2016-06.io.spdk:cnode692", 00:13:04.352 "model_number": "SPDK_Controller\u001f", 00:13:04.352 "method": "nvmf_create_subsystem", 00:13:04.352 "req_id": 1 00:13:04.352 } 00:13:04.352 Got JSON-RPC error response 00:13:04.352 response: 00:13:04.352 { 00:13:04.352 "code": -32602, 00:13:04.352 "message": "Invalid MN SPDK_Controller\u001f" 00:13:04.352 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:04.352 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:04.352 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local 
length=21 ll 00:13:04.352 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:04.352 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:04.352 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:04.352 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:04.352 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.352 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:13:04.352 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:13:04.352 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:13:04.352 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.352 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.352 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:13:04.352 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:13:04.352 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:13:04.352 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.352 14:26:44 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.352 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:13:04.352 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:13:04.352 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:13:04.352 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.352 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.352 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:13:04.352 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:13:04.352 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:13:04.352 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.352 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.352 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:13:04.352 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:13:04.352 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:13:04.352 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.352 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.352 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:13:04.352 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:13:04.352 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:13:04.352 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.352 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.352 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:13:04.352 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:13:04.352 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:13:04.352 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.352 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.353 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:13:04.353 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:13:04.353 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:13:04.353 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.353 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.353 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:13:04.353 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:13:04.353 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:13:04.353 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.353 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.353 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:13:04.353 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:13:04.353 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:13:04.353 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.353 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.353 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:13:04.353 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:13:04.353 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:13:04.353 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.353 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.353 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:13:04.353 14:26:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:13:04.353 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:13:04.353 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.353 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.353 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:13:04.353 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 
00:13:04.353 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:13:04.353 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.353 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.353 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:13:04.353 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:13:04.353 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:13:04.353 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.353 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.353 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:13:04.353 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:13:04.353 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:13:04.353 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.353 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.353 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:13:04.353 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:13:04.353 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:13:04.353 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.353 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.353 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:13:04.353 
14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:13:04.353 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:13:04.353 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.353 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.353 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:13:04.353 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:04.353 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:13:04.353 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.353 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.353 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:13:04.353 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:13:04.353 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:13:04.353 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.353 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.353 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:13:04.353 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:13:04.353 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:13:04.353 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.353 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.353 14:26:45 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:13:04.353 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:13:04.353 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:13:04.353 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.353 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.353 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ g == \- ]] 00:13:04.353 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'g@Nlv?4R!70tFC8te>o=D' 00:13:04.353 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'g@Nlv?4R!70tFC8te>o=D' nqn.2016-06.io.spdk:cnode9625 00:13:04.615 [2024-10-14 14:26:45.227529] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9625: invalid serial number 'g@Nlv?4R!70tFC8te>o=D' 00:13:04.615 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:13:04.615 { 00:13:04.615 "nqn": "nqn.2016-06.io.spdk:cnode9625", 00:13:04.615 "serial_number": "g@Nlv?4R!70tFC8te>o=D", 00:13:04.615 "method": "nvmf_create_subsystem", 00:13:04.615 "req_id": 1 00:13:04.615 } 00:13:04.615 Got JSON-RPC error response 00:13:04.615 response: 00:13:04.615 { 00:13:04.615 "code": -32602, 00:13:04.615 "message": "Invalid SN g@Nlv?4R!70tFC8te>o=D" 00:13:04.615 }' 00:13:04.615 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:13:04.615 { 00:13:04.615 "nqn": "nqn.2016-06.io.spdk:cnode9625", 00:13:04.615 "serial_number": "g@Nlv?4R!70tFC8te>o=D", 00:13:04.615 "method": "nvmf_create_subsystem", 00:13:04.615 "req_id": 1 00:13:04.615 } 00:13:04.615 Got JSON-RPC error response 
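Annotation: the long run of `printf %x` / `echo -e` / `string+=` trace lines above is target/invalid.sh's `gen_random_s` building a random serial number one character at a time from the printable ASCII table. The idea can be sketched compactly as below; this is an illustrative re-sketch, not the exact script (the real one indexes into a pre-built `chars` array covering codes 32-127 and quotes shell metacharacters as it appends them).

```shell
# Illustrative re-sketch of gen_random_s: emit $1 random printable
# ASCII characters (restricted to codes 32..126 here, for simplicity).
gen_random_s() {
  local length=$1 ll code string=
  for (( ll = 0; ll < length; ll++ )); do
    code=$(( 32 + RANDOM % 95 ))                    # pick a printable code point
    string+=$(printf "\\$(printf '%03o' "$code")")  # octal escape -> character
  done
  printf '%s\n' "$string"
}
```

The script calls this with lengths 21 and 41; a 21-character run produces strings like the `g@Nlv?4R!70tFC8te>o=D` above, which is then fed to `nvmf_create_subsystem` with the expectation that the target rejects it with an "Invalid SN" JSON-RPC error.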
00:13:04.615 response: 00:13:04.615 { 00:13:04.615 "code": -32602, 00:13:04.615 "message": "Invalid SN g@Nlv?4R!70tFC8te>o=D" 00:13:04.615 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:04.615 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:13:04.615 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:13:04.615 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:04.615 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:04.615 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:04.615 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:04.615 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.615 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:13:04.615 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:13:04.615 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:13:04.615 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.615 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.615 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- target/invalid.sh@25 -- # printf %x 58 00:13:04.615 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:13:04.615 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:13:04.615 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.615 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.615 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:13:04.615 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:04.615 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:13:04.615 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.615 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.615 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:13:04.615 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:13:04.615 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:13:04.615 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.615 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.615 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:13:04.615 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:13:04.615 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:13:04.615 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.615 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:13:04.615 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:13:04.615 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:04.615 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:13:04.615 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.615 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.615 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:13:04.615 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:13:04.615 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:13:04.615 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.615 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.615 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:13:04.615 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:13:04.615 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:13:04.615 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.615 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.615 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:13:04.615 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:13:04.615 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:13:04.615 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:13:04.615 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.615 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:13:04.615 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:13:04.615 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:13:04.615 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.615 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # string+=x 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x23' 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 105 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.877 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:13:04.878 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:13:04.878 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:13:04.878 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.878 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.878 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:13:04.878 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:13:04.878 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # string+=e 00:13:04.878 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.878 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.878 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:13:04.878 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:13:04.878 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:13:04.878 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.878 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.878 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:13:04.878 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:13:04.878 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:13:04.878 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.878 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.878 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:13:04.878 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:13:04.878 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:13:04.878 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.878 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.878 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:13:04.878 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x37' 00:13:04.878 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:13:04.878 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.878 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.878 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:13:04.878 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:13:04.878 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:13:04.878 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.878 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.878 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:13:04.878 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:13:04.878 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:13:04.878 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.878 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.878 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:13:04.878 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:13:04.878 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:13:04.878 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.878 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.878 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 113 00:13:04.878 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:13:04.878 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:13:04.878 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.878 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.878 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:13:04.878 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:13:04.878 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:13:04.878 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:04.878 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:04.878 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ d == \- ]] 00:13:04.878 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'd:yY3y9hP).;x Wh#((}i'\''g~J2uPzO1ek\7Eh%q&' 00:13:04.878 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'd:yY3y9hP).;x Wh#((}i'\''g~J2uPzO1ek\7Eh%q&' nqn.2016-06.io.spdk:cnode32165 00:13:05.139 [2024-10-14 14:26:45.737160] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32165: invalid model number 'd:yY3y9hP).;x Wh#((}i'g~J2uPzO1ek\7Eh%q&' 00:13:05.139 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:13:05.139 { 00:13:05.139 "nqn": "nqn.2016-06.io.spdk:cnode32165", 00:13:05.139 "model_number": "d:yY3y9hP).;x Wh#((}i'\''g~J2uPzO1ek\\\u007f7Eh%q&", 00:13:05.139 "method": "nvmf_create_subsystem", 
00:13:05.139 "req_id": 1 00:13:05.139 } 00:13:05.139 Got JSON-RPC error response 00:13:05.139 response: 00:13:05.139 { 00:13:05.139 "code": -32602, 00:13:05.139 "message": "Invalid MN d:yY3y9hP).;x Wh#((}i'\''g~J2uPzO1ek\\\u007f7Eh%q&" 00:13:05.139 }' 00:13:05.139 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:13:05.139 { 00:13:05.139 "nqn": "nqn.2016-06.io.spdk:cnode32165", 00:13:05.139 "model_number": "d:yY3y9hP).;x Wh#((}i'g~J2uPzO1ek\\\u007f7Eh%q&", 00:13:05.139 "method": "nvmf_create_subsystem", 00:13:05.139 "req_id": 1 00:13:05.139 } 00:13:05.139 Got JSON-RPC error response 00:13:05.139 response: 00:13:05.139 { 00:13:05.139 "code": -32602, 00:13:05.139 "message": "Invalid MN d:yY3y9hP).;x Wh#((}i'g~J2uPzO1ek\\\u007f7Eh%q&" 00:13:05.139 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:05.139 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:05.400 [2024-10-14 14:26:45.925850] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:05.400 14:26:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:05.661 14:26:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:05.661 14:26:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:13:05.661 14:26:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:13:05.661 14:26:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:13:05.661 14:26:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:05.661 
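The target/invalid.sh@24-25 loop traced above assembles a random model number one character at a time: `printf %x` converts a code point to hex, `echo -e` expands the escape back into a character, and `string+=` accumulates it. A minimal standalone sketch of the same technique follows; the function name and the printable range are illustrative assumptions, since the real script inlines the loop and picks its code points elsewhere.

```shell
#!/usr/bin/env bash
# Sketch of the character-by-character builder seen in the trace:
# convert a code point to a hex escape, expand it, append to the string.
# gen_random_string is an illustrative name; invalid.sh inlines this loop.
gen_random_string() {
    local length=$1 string='' ll code hex
    for (( ll = 0; ll < length; ll++ )); do
        code=$(( RANDOM % 94 + 33 ))    # printable ASCII 33..126 (assumed range)
        hex=$(printf %x "$code")        # e.g. 121 -> 79
        string+=$(echo -e "\x$hex")     # e.g. \x79 -> y
    done
    echo "$string"
}

gen_random_string 41   # one random string, comparable in length to the trace's
```

The generated string is then handed to `rpc.py nvmf_create_subsystem -d`, which must reject it, and the test matches the `Invalid MN` text in the JSON-RPC error, as in the trace above.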
[2024-10-14 14:26:46.311021] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:05.661 14:26:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:13:05.661 { 00:13:05.661 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:05.661 "listen_address": { 00:13:05.661 "trtype": "tcp", 00:13:05.661 "traddr": "", 00:13:05.661 "trsvcid": "4421" 00:13:05.661 }, 00:13:05.661 "method": "nvmf_subsystem_remove_listener", 00:13:05.661 "req_id": 1 00:13:05.661 } 00:13:05.661 Got JSON-RPC error response 00:13:05.661 response: 00:13:05.661 { 00:13:05.661 "code": -32602, 00:13:05.661 "message": "Invalid parameters" 00:13:05.661 }' 00:13:05.661 14:26:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:13:05.661 { 00:13:05.661 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:05.661 "listen_address": { 00:13:05.661 "trtype": "tcp", 00:13:05.661 "traddr": "", 00:13:05.661 "trsvcid": "4421" 00:13:05.661 }, 00:13:05.661 "method": "nvmf_subsystem_remove_listener", 00:13:05.661 "req_id": 1 00:13:05.661 } 00:13:05.661 Got JSON-RPC error response 00:13:05.661 response: 00:13:05.661 { 00:13:05.661 "code": -32602, 00:13:05.661 "message": "Invalid parameters" 00:13:05.661 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:05.661 14:26:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15717 -i 0 00:13:05.921 [2024-10-14 14:26:46.499569] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15717: invalid cntlid range [0-65519] 00:13:05.922 14:26:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:13:05.922 { 00:13:05.922 "nqn": "nqn.2016-06.io.spdk:cnode15717", 00:13:05.922 "min_cntlid": 0, 00:13:05.922 "method": "nvmf_create_subsystem", 00:13:05.922 "req_id": 1 00:13:05.922 } 00:13:05.922 Got 
JSON-RPC error response 00:13:05.922 response: 00:13:05.922 { 00:13:05.922 "code": -32602, 00:13:05.922 "message": "Invalid cntlid range [0-65519]" 00:13:05.922 }' 00:13:05.922 14:26:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:13:05.922 { 00:13:05.922 "nqn": "nqn.2016-06.io.spdk:cnode15717", 00:13:05.922 "min_cntlid": 0, 00:13:05.922 "method": "nvmf_create_subsystem", 00:13:05.922 "req_id": 1 00:13:05.922 } 00:13:05.922 Got JSON-RPC error response 00:13:05.922 response: 00:13:05.922 { 00:13:05.922 "code": -32602, 00:13:05.922 "message": "Invalid cntlid range [0-65519]" 00:13:05.922 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:05.922 14:26:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode709 -i 65520 00:13:06.182 [2024-10-14 14:26:46.688164] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode709: invalid cntlid range [65520-65519] 00:13:06.182 14:26:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:13:06.182 { 00:13:06.182 "nqn": "nqn.2016-06.io.spdk:cnode709", 00:13:06.182 "min_cntlid": 65520, 00:13:06.182 "method": "nvmf_create_subsystem", 00:13:06.182 "req_id": 1 00:13:06.182 } 00:13:06.182 Got JSON-RPC error response 00:13:06.182 response: 00:13:06.182 { 00:13:06.182 "code": -32602, 00:13:06.182 "message": "Invalid cntlid range [65520-65519]" 00:13:06.182 }' 00:13:06.182 14:26:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:13:06.182 { 00:13:06.182 "nqn": "nqn.2016-06.io.spdk:cnode709", 00:13:06.182 "min_cntlid": 65520, 00:13:06.182 "method": "nvmf_create_subsystem", 00:13:06.182 "req_id": 1 00:13:06.182 } 00:13:06.182 Got JSON-RPC error response 00:13:06.182 response: 00:13:06.182 { 00:13:06.182 "code": -32602, 00:13:06.182 "message": "Invalid cntlid range 
[65520-65519]" 00:13:06.182 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:06.182 14:26:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7896 -I 0 00:13:06.182 [2024-10-14 14:26:46.868755] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7896: invalid cntlid range [1-0] 00:13:06.182 14:26:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:13:06.182 { 00:13:06.182 "nqn": "nqn.2016-06.io.spdk:cnode7896", 00:13:06.182 "max_cntlid": 0, 00:13:06.182 "method": "nvmf_create_subsystem", 00:13:06.182 "req_id": 1 00:13:06.182 } 00:13:06.182 Got JSON-RPC error response 00:13:06.182 response: 00:13:06.182 { 00:13:06.182 "code": -32602, 00:13:06.183 "message": "Invalid cntlid range [1-0]" 00:13:06.183 }' 00:13:06.183 14:26:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:13:06.183 { 00:13:06.183 "nqn": "nqn.2016-06.io.spdk:cnode7896", 00:13:06.183 "max_cntlid": 0, 00:13:06.183 "method": "nvmf_create_subsystem", 00:13:06.183 "req_id": 1 00:13:06.183 } 00:13:06.183 Got JSON-RPC error response 00:13:06.183 response: 00:13:06.183 { 00:13:06.183 "code": -32602, 00:13:06.183 "message": "Invalid cntlid range [1-0]" 00:13:06.183 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:06.183 14:26:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7367 -I 65520 00:13:06.443 [2024-10-14 14:26:47.057359] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7367: invalid cntlid range [1-65520] 00:13:06.443 14:26:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:13:06.443 { 00:13:06.443 "nqn": "nqn.2016-06.io.spdk:cnode7367", 
00:13:06.443 "max_cntlid": 65520, 00:13:06.443 "method": "nvmf_create_subsystem", 00:13:06.443 "req_id": 1 00:13:06.443 } 00:13:06.443 Got JSON-RPC error response 00:13:06.443 response: 00:13:06.443 { 00:13:06.443 "code": -32602, 00:13:06.443 "message": "Invalid cntlid range [1-65520]" 00:13:06.443 }' 00:13:06.443 14:26:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:13:06.443 { 00:13:06.443 "nqn": "nqn.2016-06.io.spdk:cnode7367", 00:13:06.443 "max_cntlid": 65520, 00:13:06.443 "method": "nvmf_create_subsystem", 00:13:06.443 "req_id": 1 00:13:06.443 } 00:13:06.443 Got JSON-RPC error response 00:13:06.443 response: 00:13:06.443 { 00:13:06.443 "code": -32602, 00:13:06.443 "message": "Invalid cntlid range [1-65520]" 00:13:06.443 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:06.443 14:26:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode22297 -i 6 -I 5 00:13:06.704 [2024-10-14 14:26:47.245941] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22297: invalid cntlid range [6-5] 00:13:06.704 14:26:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:13:06.704 { 00:13:06.704 "nqn": "nqn.2016-06.io.spdk:cnode22297", 00:13:06.704 "min_cntlid": 6, 00:13:06.704 "max_cntlid": 5, 00:13:06.704 "method": "nvmf_create_subsystem", 00:13:06.704 "req_id": 1 00:13:06.704 } 00:13:06.704 Got JSON-RPC error response 00:13:06.704 response: 00:13:06.704 { 00:13:06.704 "code": -32602, 00:13:06.704 "message": "Invalid cntlid range [6-5]" 00:13:06.704 }' 00:13:06.704 14:26:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:13:06.704 { 00:13:06.704 "nqn": "nqn.2016-06.io.spdk:cnode22297", 00:13:06.704 "min_cntlid": 6, 00:13:06.704 "max_cntlid": 5, 00:13:06.704 "method": "nvmf_create_subsystem", 00:13:06.704 
"req_id": 1 00:13:06.704 } 00:13:06.704 Got JSON-RPC error response 00:13:06.704 response: 00:13:06.704 { 00:13:06.704 "code": -32602, 00:13:06.704 "message": "Invalid cntlid range [6-5]" 00:13:06.704 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:06.704 14:26:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:06.704 14:26:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:13:06.704 { 00:13:06.704 "name": "foobar", 00:13:06.704 "method": "nvmf_delete_target", 00:13:06.704 "req_id": 1 00:13:06.704 } 00:13:06.704 Got JSON-RPC error response 00:13:06.704 response: 00:13:06.704 { 00:13:06.704 "code": -32602, 00:13:06.704 "message": "The specified target doesn'\''t exist, cannot delete it." 00:13:06.704 }' 00:13:06.704 14:26:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:13:06.704 { 00:13:06.704 "name": "foobar", 00:13:06.704 "method": "nvmf_delete_target", 00:13:06.704 "req_id": 1 00:13:06.704 } 00:13:06.704 Got JSON-RPC error response 00:13:06.704 response: 00:13:06.704 { 00:13:06.704 "code": -32602, 00:13:06.704 "message": "The specified target doesn't exist, cannot delete it." 
00:13:06.704 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:06.704 14:26:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:06.704 14:26:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:13:06.704 14:26:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:06.704 14:26:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:13:06.704 14:26:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:06.704 14:26:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:13:06.704 14:26:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:06.704 14:26:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:06.704 rmmod nvme_tcp 00:13:06.704 rmmod nvme_fabrics 00:13:06.704 rmmod nvme_keyring 00:13:06.965 14:26:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:06.965 14:26:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:13:06.965 14:26:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:13:06.965 14:26:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@515 -- # '[' -n 3313283 ']' 00:13:06.965 14:26:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # killprocess 3313283 00:13:06.965 14:26:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 3313283 ']' 00:13:06.965 14:26:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 3313283 00:13:06.965 14:26:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname 00:13:06.965 14:26:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:06.965 14:26:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3313283 00:13:06.965 14:26:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:06.965 14:26:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:06.965 14:26:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3313283' 00:13:06.965 killing process with pid 3313283 00:13:06.965 14:26:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 3313283 00:13:06.965 14:26:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 3313283 00:13:06.965 14:26:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:06.965 14:26:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:13:06.965 14:26:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:13:06.965 14:26:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:13:06.965 14:26:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 -- # iptables-restore 00:13:06.965 14:26:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 -- # iptables-save 00:13:06.965 14:26:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:13:06.965 14:26:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:06.965 14:26:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:06.965 14:26:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:06.965 14:26:47 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:06.965 14:26:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:09.510 14:26:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:09.510 00:13:09.510 real 0m14.095s 00:13:09.510 user 0m20.683s 00:13:09.510 sys 0m6.695s 00:13:09.510 14:26:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:09.510 14:26:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:09.510 ************************************ 00:13:09.510 END TEST nvmf_invalid 00:13:09.510 ************************************ 00:13:09.510 14:26:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:09.510 14:26:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:09.510 14:26:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:09.510 14:26:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:09.510 ************************************ 00:13:09.510 START TEST nvmf_connect_stress 00:13:09.510 ************************************ 00:13:09.510 14:26:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:09.510 * Looking for test storage... 
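The repeated `Invalid cntlid range` failures in the nvmf_invalid trace above all probe one server-side rule, readable off the error strings: both min_cntlid and max_cntlid must lie in [1, 65519], and min must not exceed max. A hedged shell mirror of that rule; the function is illustrative, as the real check lives in SPDK's `rpc_nvmf_create_subsystem` (C, nvmf_rpc.c), not in shell.

```shell
#!/usr/bin/env bash
# Mirror of the cntlid validation implied by the trace errors:
# [0-65519], [65520-65519], [1-0], [1-65520], and [6-5] are all rejected.
valid_cntlid_range() {
    local min=$1 max=$2
    (( min >= 1 && min <= 65519 && max >= 1 && max <= 65519 && min <= max ))
}

if valid_cntlid_range 1 65519; then echo 'range [1-65519] accepted'; fi
if ! valid_cntlid_range 6 5; then echo 'range [6-5] rejected'; fi
```

invalid.sh drives this rule through `rpc.py nvmf_create_subsystem` with `-i`/`-I` and asserts that the JSON-RPC error contains `Invalid cntlid range`, which is exactly the pattern visible in the responses above.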
00:13:09.510 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:09.510 14:26:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:09.510 14:26:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:13:09.510 14:26:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:09.510 14:26:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:09.510 14:26:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:09.510 14:26:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:09.510 14:26:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:09.510 14:26:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:13:09.510 14:26:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:13:09.510 14:26:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:13:09.510 14:26:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:13:09.510 14:26:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:13:09.510 14:26:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:13:09.510 14:26:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:13:09.510 14:26:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:09.510 14:26:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:13:09.510 14:26:49 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:13:09.510 14:26:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:09.511 14:26:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:09.511 14:26:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:13:09.511 14:26:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:13:09.511 14:26:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:09.511 14:26:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:13:09.511 14:26:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:13:09.511 14:26:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:13:09.511 14:26:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:13:09.511 14:26:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:09.511 14:26:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:13:09.511 14:26:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:13:09.511 14:26:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:09.511 14:26:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:09.511 14:26:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:13:09.511 14:26:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:09.511 14:26:49 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:09.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:09.511 --rc genhtml_branch_coverage=1 00:13:09.511 --rc genhtml_function_coverage=1 00:13:09.511 --rc genhtml_legend=1 00:13:09.511 --rc geninfo_all_blocks=1 00:13:09.511 --rc geninfo_unexecuted_blocks=1 00:13:09.511 00:13:09.511 ' 00:13:09.511 14:26:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:09.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:09.511 --rc genhtml_branch_coverage=1 00:13:09.511 --rc genhtml_function_coverage=1 00:13:09.511 --rc genhtml_legend=1 00:13:09.511 --rc geninfo_all_blocks=1 00:13:09.511 --rc geninfo_unexecuted_blocks=1 00:13:09.511 00:13:09.511 ' 00:13:09.511 14:26:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:09.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:09.511 --rc genhtml_branch_coverage=1 00:13:09.511 --rc genhtml_function_coverage=1 00:13:09.511 --rc genhtml_legend=1 00:13:09.511 --rc geninfo_all_blocks=1 00:13:09.511 --rc geninfo_unexecuted_blocks=1 00:13:09.511 00:13:09.511 ' 00:13:09.511 14:26:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:09.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:09.511 --rc genhtml_branch_coverage=1 00:13:09.511 --rc genhtml_function_coverage=1 00:13:09.511 --rc genhtml_legend=1 00:13:09.511 --rc geninfo_all_blocks=1 00:13:09.511 --rc geninfo_unexecuted_blocks=1 00:13:09.511 00:13:09.511 ' 00:13:09.511 14:26:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:09.511 14:26:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 
00:13:09.511 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:09.511 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:09.511 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:09.511 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:09.511 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:09.511 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:09.511 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:09.511 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:09.511 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:09.511 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:09.511 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:09.511 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:09.511 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:09.511 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:09.511 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:09.511 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:09.511 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:09.511 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:13:09.511 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:09.511 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:09.511 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:09.511 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.511 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.511 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.511 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:09.511 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.511 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:13:09.511 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:09.511 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:09.511 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:09.511 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:09.511 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:09.511 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:09.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:09.511 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:09.511 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:09.511 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:09.511 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 
00:13:09.511 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:13:09.511 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:09.511 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:09.511 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:09.511 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:09.511 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:09.511 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:09.511 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:09.511 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:13:09.511 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:13:09.511 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:13:09.511 14:26:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:17.658 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:17.658 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:13:17.658 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:17.658 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:17.658 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:17.658 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:17.658 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:17.658 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:13:17.658 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:17.658 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:13:17.658 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:13:17.658 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:13:17.658 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:13:17.658 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:13:17.658 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:13:17.658 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:17.658 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:17.658 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:17.658 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:17.658 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:17.659 14:26:57 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:17.659 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:17.659 14:26:57 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:17.659 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:17.659 14:26:57 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:17.659 Found net devices under 0000:31:00.0: cvl_0_0 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:17.659 Found net devices under 0000:31:00.1: cvl_0_1 
00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:17.659 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:17.659 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.646 ms 00:13:17.659 00:13:17.659 --- 10.0.0.2 ping statistics --- 00:13:17.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:17.659 rtt min/avg/max/mdev = 0.646/0.646/0.646/0.000 ms 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:17.659 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:17.659 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.296 ms 00:13:17.659 00:13:17.659 --- 10.0.0.1 ping statistics --- 00:13:17.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:17.659 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # return 0 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:17.659 14:26:57 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # nvmfpid=3318534 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # waitforlisten 3318534 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 3318534 ']' 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:17.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:17.659 14:26:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:17.660 [2024-10-14 14:26:57.530606] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 
00:13:17.660 [2024-10-14 14:26:57.530669] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:17.660 [2024-10-14 14:26:57.620429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:17.660 [2024-10-14 14:26:57.671322] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:17.660 [2024-10-14 14:26:57.671376] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:17.660 [2024-10-14 14:26:57.671385] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:17.660 [2024-10-14 14:26:57.671392] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:17.660 [2024-10-14 14:26:57.671398] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:17.660 [2024-10-14 14:26:57.673486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:17.660 [2024-10-14 14:26:57.673649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:17.660 [2024-10-14 14:26:57.673649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:17.660 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:17.660 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:13:17.660 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:17.660 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:17.660 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:17.660 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:17.660 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:17.660 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.660 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:17.921 [2024-10-14 14:26:58.388887] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:17.921 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.921 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:17.921 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 
-- # xtrace_disable 00:13:17.921 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:17.921 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.921 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:17.921 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.921 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:17.921 [2024-10-14 14:26:58.413300] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:17.921 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.921 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:17.921 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.921 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:17.921 NULL1 00:13:17.921 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.921 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3318835 00:13:17.921 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:17.921 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:17.921 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:17.921 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:17.921 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:17.921 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:17.921 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:17.921 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:17.921 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:17.921 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:17.921 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:17.921 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:17.921 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:17.921 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:17.921 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:17.921 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:17.921 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:17.921 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:13:17.921 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:17.921 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:17.921 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:17.921 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:17.921 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:17.921 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:17.921 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:17.921 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:17.921 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:17.921 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:17.921 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:17.921 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:17.921 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:17.921 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:17.921 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:17.921 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:17.922 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:17.922 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:17.922 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:17.922 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:17.922 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:17.922 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:17.922 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:17.922 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:17.922 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:17.922 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:17.922 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3318835 00:13:17.922 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:17.922 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.922 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:18.183 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.183 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3318835 00:13:18.183 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:18.183 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.183 14:26:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:18.443 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.443 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3318835 00:13:18.443 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:18.443 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.443 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:19.013 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.013 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3318835 00:13:19.013 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:19.013 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.013 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:19.275 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.275 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3318835 00:13:19.275 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:19.275 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.275 14:26:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:19.535 14:27:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.535 14:27:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3318835 00:13:19.535 14:27:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:19.535 14:27:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.535 14:27:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:19.795 14:27:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.795 14:27:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3318835 00:13:19.795 14:27:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:19.795 14:27:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.795 14:27:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:20.367 14:27:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.367 14:27:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3318835 00:13:20.367 14:27:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:20.367 14:27:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.367 14:27:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:20.628 14:27:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.628 14:27:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3318835 00:13:20.628 14:27:01 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:20.628 14:27:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.628 14:27:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:20.888 14:27:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.888 14:27:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3318835 00:13:20.888 14:27:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:20.888 14:27:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.888 14:27:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:21.149 14:27:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.149 14:27:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3318835 00:13:21.149 14:27:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:21.149 14:27:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.149 14:27:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:21.409 14:27:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.409 14:27:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3318835 00:13:21.409 14:27:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:21.409 14:27:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.409 
14:27:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:21.981 14:27:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.981 14:27:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3318835 00:13:21.981 14:27:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:21.981 14:27:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.981 14:27:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.241 14:27:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.241 14:27:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3318835 00:13:22.241 14:27:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:22.241 14:27:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.241 14:27:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.501 14:27:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.501 14:27:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3318835 00:13:22.501 14:27:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:22.501 14:27:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.501 14:27:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.762 14:27:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.762 
14:27:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3318835 00:13:22.762 14:27:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:22.762 14:27:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.762 14:27:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:23.022 14:27:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.022 14:27:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3318835 00:13:23.022 14:27:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:23.022 14:27:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.022 14:27:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:23.593 14:27:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.593 14:27:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3318835 00:13:23.593 14:27:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:23.593 14:27:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.593 14:27:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:23.854 14:27:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.854 14:27:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3318835 00:13:23.854 14:27:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 
00:13:23.854 14:27:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.854 14:27:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:24.114 14:27:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.114 14:27:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3318835 00:13:24.114 14:27:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:24.115 14:27:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.115 14:27:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:24.375 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.375 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3318835 00:13:24.375 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:24.375 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.375 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:24.637 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.637 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3318835 00:13:24.637 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:24.637 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.637 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set 
+x 00:13:25.208 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.208 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3318835 00:13:25.208 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:25.208 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.208 14:27:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:25.469 14:27:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.469 14:27:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3318835 00:13:25.469 14:27:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:25.469 14:27:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.469 14:27:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:25.731 14:27:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.731 14:27:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3318835 00:13:25.731 14:27:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:25.731 14:27:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.731 14:27:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:25.992 14:27:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.992 14:27:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill 
-0 3318835 00:13:25.992 14:27:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:25.992 14:27:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.992 14:27:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.565 14:27:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.565 14:27:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3318835 00:13:26.565 14:27:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:26.565 14:27:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.565 14:27:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.825 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.825 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3318835 00:13:26.825 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:26.825 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.825 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:27.085 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.085 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3318835 00:13:27.085 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:27.085 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:27.085 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:27.346 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.346 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3318835 00:13:27.346 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:27.346 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.346 14:27:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:27.607 14:27:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.607 14:27:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3318835 00:13:27.607 14:27:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:27.607 14:27:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.607 14:27:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:28.177 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:28.177 14:27:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.177 14:27:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3318835 00:13:28.177 14:27:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:28.177 14:27:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.177 14:27:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 
00:13:28.437 14:27:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.437 14:27:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3318835 00:13:28.437 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3318835) - No such process 00:13:28.437 14:27:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3318835 00:13:28.437 14:27:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:28.437 14:27:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:28.437 14:27:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:28.437 14:27:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:28.437 14:27:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:13:28.437 14:27:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:28.437 14:27:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:13:28.437 14:27:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:28.437 14:27:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:28.437 rmmod nvme_tcp 00:13:28.437 rmmod nvme_fabrics 00:13:28.437 rmmod nvme_keyring 00:13:28.437 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:28.437 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:13:28.437 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@129 -- # return 0 00:13:28.437 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@515 -- # '[' -n 3318534 ']' 00:13:28.437 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # killprocess 3318534 00:13:28.438 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 3318534 ']' 00:13:28.438 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 3318534 00:13:28.438 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:13:28.438 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:28.438 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3318534 00:13:28.438 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:28.438 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:28.438 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3318534' 00:13:28.438 killing process with pid 3318534 00:13:28.438 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 3318534 00:13:28.438 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 3318534 00:13:28.699 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:28.699 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:13:28.699 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:13:28.699 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@297 -- # iptr 00:13:28.699 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # iptables-save 00:13:28.699 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:13:28.699 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # iptables-restore 00:13:28.699 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:28.699 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:28.699 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:28.699 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:28.699 14:27:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:30.614 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:30.614 00:13:30.614 real 0m21.464s 00:13:30.614 user 0m43.210s 00:13:30.614 sys 0m9.141s 00:13:30.614 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:30.614 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:30.614 ************************************ 00:13:30.614 END TEST nvmf_connect_stress 00:13:30.614 ************************************ 00:13:30.614 14:27:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:30.614 14:27:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:30.614 14:27:11 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:13:30.614 14:27:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:30.877 ************************************ 00:13:30.877 START TEST nvmf_fused_ordering 00:13:30.877 ************************************ 00:13:30.877 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:30.877 * Looking for test storage... 00:13:30.877 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:30.877 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:30.877 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lcov --version 00:13:30.877 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:30.877 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:30.877 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:30.877 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:30.877 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:30.877 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:13:30.877 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:13:30.877 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:13:30.877 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:13:30.877 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- scripts/common.sh@338 -- # local 'op=<' 00:13:30.877 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:13:30.877 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:13:30.877 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:30.877 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:13:30.877 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:13:30.877 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:30.877 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:30.877 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:13:30.877 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:13:30.877 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:30.877 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:13:30.877 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:13:30.877 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:13:30.877 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:13:30.877 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:30.877 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:13:30.877 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:13:30.877 14:27:11 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:30.877 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:30.877 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:13:30.877 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:30.877 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:30.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:30.877 --rc genhtml_branch_coverage=1 00:13:30.877 --rc genhtml_function_coverage=1 00:13:30.877 --rc genhtml_legend=1 00:13:30.877 --rc geninfo_all_blocks=1 00:13:30.877 --rc geninfo_unexecuted_blocks=1 00:13:30.877 00:13:30.877 ' 00:13:30.877 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:30.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:30.877 --rc genhtml_branch_coverage=1 00:13:30.877 --rc genhtml_function_coverage=1 00:13:30.877 --rc genhtml_legend=1 00:13:30.877 --rc geninfo_all_blocks=1 00:13:30.877 --rc geninfo_unexecuted_blocks=1 00:13:30.877 00:13:30.877 ' 00:13:30.877 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:30.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:30.877 --rc genhtml_branch_coverage=1 00:13:30.877 --rc genhtml_function_coverage=1 00:13:30.877 --rc genhtml_legend=1 00:13:30.877 --rc geninfo_all_blocks=1 00:13:30.877 --rc geninfo_unexecuted_blocks=1 00:13:30.877 00:13:30.877 ' 00:13:30.877 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:30.877 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:13:30.877 --rc genhtml_branch_coverage=1 00:13:30.877 --rc genhtml_function_coverage=1 00:13:30.877 --rc genhtml_legend=1 00:13:30.877 --rc geninfo_all_blocks=1 00:13:30.877 --rc geninfo_unexecuted_blocks=1 00:13:30.877 00:13:30.877 ' 00:13:30.877 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:30.877 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:13:30.877 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:30.877 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:30.877 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:30.877 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:30.877 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:30.877 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:30.877 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:30.877 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:30.877 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:30.877 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:30.877 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:30.877 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:30.877 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:30.877 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:30.877 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:30.877 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:30.877 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:30.877 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:13:30.877 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:30.877 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:30.877 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:30.877 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.877 14:27:11 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.877 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.877 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:30.878 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.878 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:13:30.878 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:30.878 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:30.878 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:30.878 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:30.878 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:30.878 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:30.878 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:30.878 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:30.878 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:30.878 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:30.878 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
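Note the shell error recorded above (`common.sh: line 33: [: : integer expression expected`): an empty variable reached a numeric `[ ... -eq 1 ]` test in `build_nvmf_app_args`. A small sketch of that failure mode and the usual guard — the variable name here is illustrative, and this shows the generic defensive pattern, not the SPDK fix:

```shell
unset MAYBE_FLAG
# Unguarded, [ "$MAYBE_FLAG" -eq 1 ] prints "integer expression expected"
# when the variable is empty, exactly as in the log above. The guarded
# form defaults empty/unset to 0 so the comparison stays well-formed.
if [ "${MAYBE_FLAG:-0}" -eq 1 ]; then
  result="flag set"
else
  result="flag unset or empty"
fi
echo "$result"
```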
00:13:30.878 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:13:30.878 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:30.878 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:30.878 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:30.878 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:30.878 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:30.878 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:30.878 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:30.878 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:13:30.878 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:13:30.878 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:13:30.878 14:27:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:39.022 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:39.022 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:13:39.022 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:39.022 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:39.022 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:39.022 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:39.022 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:39.022 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:13:39.022 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:39.022 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:13:39.022 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:13:39.022 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:13:39.022 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:13:39.022 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:13:39.022 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:13:39.022 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:39.022 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:39.022 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:39.022 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:39.022 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:39.022 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:39.022 14:27:18 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:39.022 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:39.022 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:39.022 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:39.022 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:39.022 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:39.022 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:39.022 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:39.022 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:39.022 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:39.022 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:39.022 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:39.022 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:39.022 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:39.022 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:39.022 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:39.022 14:27:18 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:39.022 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:39.022 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:39.022 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:39.022 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:39.022 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:39.022 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:39.022 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:39.022 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:39.022 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:39.022 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:39.022 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:39.022 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:39.022 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:39.022 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:39.022 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:39.022 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:39.022 14:27:18 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:39.022 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:39.022 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:39.022 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:39.022 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:39.022 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:39.022 Found net devices under 0000:31:00.0: cvl_0_0 00:13:39.022 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:39.022 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:39.022 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:39.022 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:39.022 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:39.022 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:39.022 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:39.022 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:39.022 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:39.022 Found net devices under 0000:31:00.1: cvl_0_1 
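The "Found net devices under \<pci\>" lines above come from globbing `/sys/bus/pci/devices/<addr>/net/`, which maps each NIC's PCI address to its kernel interface name. A sketch of the same lookup run against a scratch directory standing in for sysfs (PCI addresses and interface names taken from the log; real runs read the live `/sys` tree):

```shell
# Scratch directory mimicking the sysfs layout the harness walks.
sysfs=$(mktemp -d)
mkdir -p "$sysfs/0000:31:00.0/net/cvl_0_0" "$sysfs/0000:31:00.1/net/cvl_0_1"
found=""
for pci in 0000:31:00.0 0000:31:00.1; do
  # Each entry under <addr>/net/ is one kernel interface on that device.
  for dev in "$sysfs/$pci/net/"*; do
    found="$found${found:+; }Found net devices under $pci: ${dev##*/}"
  done
done
echo "$found"
rm -rf "$sysfs"
```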
00:13:39.022 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:39.022 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:13:39.022 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # is_hw=yes 00:13:39.022 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:13:39.022 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:13:39.022 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:13:39.022 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:39.022 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:39.022 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:39.022 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:39.022 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:39.022 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:39.022 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:39.022 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:39.022 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:39.022 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:39.022 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:39.022 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:39.022 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:39.022 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:39.023 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:39.023 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:39.023 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:39.023 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:39.023 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:39.023 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:39.023 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:39.023 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:39.023 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:39.023 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:39.023 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.617 ms 00:13:39.023 00:13:39.023 --- 10.0.0.2 ping statistics --- 00:13:39.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:39.023 rtt min/avg/max/mdev = 0.617/0.617/0.617/0.000 ms 00:13:39.023 14:27:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:39.023 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:39.023 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.298 ms 00:13:39.023 00:13:39.023 --- 10.0.0.1 ping statistics --- 00:13:39.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:39.023 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:13:39.023 14:27:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:39.023 14:27:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # return 0 00:13:39.023 14:27:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:39.023 14:27:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:39.023 14:27:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:13:39.023 14:27:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:13:39.023 14:27:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:39.023 14:27:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:13:39.023 14:27:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:13:39.023 14:27:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:39.023 14:27:19 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:39.023 14:27:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:39.023 14:27:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:39.023 14:27:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # nvmfpid=3325825 00:13:39.023 14:27:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # waitforlisten 3325825 00:13:39.023 14:27:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:39.023 14:27:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 3325825 ']' 00:13:39.023 14:27:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:39.023 14:27:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:39.023 14:27:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:39.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:39.023 14:27:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:39.023 14:27:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:39.023 [2024-10-14 14:27:19.114104] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 
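`waitforlisten` above blocks until the freshly launched `nvmf_tgt` is ready to accept RPCs. A minimal sketch of the polling idea — the real helper checks the pid and the `/var/tmp/spdk.sock` RPC socket; this demo polls for a scratch file created by a background step:

```shell
# Poll until a path appears or the retry budget runs out.
wait_for_path() {
  path=$1; retries=${2:-50}
  while [ "$retries" -gt 0 ]; do
    [ -e "$path" ] && return 0
    retries=$((retries - 1))
    sleep 0.1
  done
  return 1
}
target=$(mktemp -u)
( sleep 0.3; : > "$target" ) &   # stands in for the target creating its RPC socket
wait_for_path "$target" && status=ready || status=timeout
wait
rm -f "$target"
echo "$status"
```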
00:13:39.023 [2024-10-14 14:27:19.114159] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:39.023 [2024-10-14 14:27:19.201526] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:39.023 [2024-10-14 14:27:19.251260] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:39.023 [2024-10-14 14:27:19.251304] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:39.023 [2024-10-14 14:27:19.251313] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:39.023 [2024-10-14 14:27:19.251320] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:39.023 [2024-10-14 14:27:19.251327] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
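The namespace wiring traced earlier in `nvmf_tcp_init` (target NIC moved into `cvl_0_0_ns_spdk`, one address per side, ping in both directions) follows a standard pattern. Recapped as a standalone sequence — requires root, and the `cvl_0_0`/`cvl_0_1` interface names come from this log and will differ on other machines:

```shell
ip netns add cvl_0_0_ns_spdk                       # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move target NIC inside
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side (host)
ip netns exec cvl_0_0_ns_spdk \
    ip addr add 10.0.0.2/24 dev cvl_0_0            # target side (namespace)
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
ping -c 1 10.0.0.2                                 # host -> namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # namespace -> host
```

Isolating the target in its own namespace lets a single machine exercise real NICs end to end: traffic between initiator and target traverses the physical link rather than the loopback path.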
00:13:39.023 [2024-10-14 14:27:19.252184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:39.285 14:27:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:39.285 14:27:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:13:39.285 14:27:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:39.285 14:27:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:39.285 14:27:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:39.285 14:27:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:39.285 14:27:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:39.285 14:27:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.285 14:27:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:39.285 [2024-10-14 14:27:19.972661] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:39.285 14:27:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.285 14:27:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:39.285 14:27:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.285 14:27:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:39.285 14:27:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.285 14:27:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:39.285 14:27:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.285 14:27:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:39.285 [2024-10-14 14:27:19.988927] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:39.285 14:27:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.285 14:27:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:39.285 14:27:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.285 14:27:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:39.285 NULL1 00:13:39.285 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.285 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:39.285 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.285 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:39.285 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.285 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:39.285 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.285 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:39.546 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.546 14:27:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:39.546 [2024-10-14 14:27:20.050269] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:13:39.546 [2024-10-14 14:27:20.050357] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3325862 ] 00:13:39.809 Attached to nqn.2016-06.io.spdk:cnode1 00:13:39.809 Namespace ID: 1 size: 1GB 00:13:39.809 fused_ordering(0) 00:13:39.809 fused_ordering(1) 00:13:39.809 fused_ordering(2) 00:13:39.809 fused_ordering(3) 00:13:39.809 fused_ordering(4) 00:13:39.809 fused_ordering(5) 00:13:39.809 fused_ordering(6) 00:13:39.809 fused_ordering(7) 00:13:39.809 fused_ordering(8) 00:13:39.809 fused_ordering(9) 00:13:39.809 fused_ordering(10) 00:13:39.809 fused_ordering(11) 00:13:39.809 fused_ordering(12) 00:13:39.809 fused_ordering(13) 00:13:39.809 fused_ordering(14) 00:13:39.809 fused_ordering(15) 00:13:39.809 fused_ordering(16) 00:13:39.809 fused_ordering(17) 00:13:39.809 fused_ordering(18) 00:13:39.809 fused_ordering(19) 00:13:39.809 fused_ordering(20) 00:13:39.809 fused_ordering(21) 00:13:39.809 fused_ordering(22) 00:13:39.809 fused_ordering(23) 00:13:39.809 fused_ordering(24) 00:13:39.809 fused_ordering(25) 00:13:39.809 fused_ordering(26) 00:13:39.809 fused_ordering(27) 00:13:39.809 
fused_ordering(28) 00:13:39.809 fused_ordering(29) 00:13:39.809 fused_ordering(30) 00:13:39.809 fused_ordering(31) 00:13:39.809 fused_ordering(32) 00:13:39.809 fused_ordering(33) 00:13:39.809 fused_ordering(34) 00:13:39.809 fused_ordering(35) 00:13:39.809 fused_ordering(36) 00:13:39.809 fused_ordering(37) 00:13:39.809 fused_ordering(38) 00:13:39.809 fused_ordering(39) 00:13:39.809 fused_ordering(40) 00:13:39.809 fused_ordering(41) 00:13:39.809 fused_ordering(42) 00:13:39.809 fused_ordering(43) 00:13:39.809 fused_ordering(44) 00:13:39.809 fused_ordering(45) 00:13:39.809 fused_ordering(46) 00:13:39.809 fused_ordering(47) 00:13:39.809 fused_ordering(48) 00:13:39.809 fused_ordering(49) 00:13:39.809 fused_ordering(50) 00:13:39.809 fused_ordering(51) 00:13:39.809 fused_ordering(52) 00:13:39.809 fused_ordering(53) 00:13:39.809 fused_ordering(54) 00:13:39.809 fused_ordering(55) 00:13:39.809 fused_ordering(56) 00:13:39.809 fused_ordering(57) 00:13:39.809 fused_ordering(58) 00:13:39.809 fused_ordering(59) 00:13:39.809 fused_ordering(60) 00:13:39.809 fused_ordering(61) 00:13:39.809 fused_ordering(62) 00:13:39.809 fused_ordering(63) 00:13:39.809 fused_ordering(64) 00:13:39.809 fused_ordering(65) 00:13:39.809 fused_ordering(66) 00:13:39.809 fused_ordering(67) 00:13:39.809 fused_ordering(68) 00:13:39.809 fused_ordering(69) 00:13:39.809 fused_ordering(70) 00:13:39.809 fused_ordering(71) 00:13:39.809 fused_ordering(72) 00:13:39.809 fused_ordering(73) 00:13:39.809 fused_ordering(74) 00:13:39.809 fused_ordering(75) 00:13:39.809 fused_ordering(76) 00:13:39.809 fused_ordering(77) 00:13:39.809 fused_ordering(78) 00:13:39.809 fused_ordering(79) 00:13:39.809 fused_ordering(80) 00:13:39.809 fused_ordering(81) 00:13:39.809 fused_ordering(82) 00:13:39.809 fused_ordering(83) 00:13:39.809 fused_ordering(84) 00:13:39.809 fused_ordering(85) 00:13:39.809 fused_ordering(86) 00:13:39.809 fused_ordering(87) 00:13:39.809 fused_ordering(88) 00:13:39.809 fused_ordering(89) 00:13:39.809 
fused_ordering(90) 00:13:39.809 fused_ordering(91) 00:13:39.809 fused_ordering(92) 00:13:39.809 fused_ordering(93) 00:13:39.809 fused_ordering(94) 00:13:39.809 fused_ordering(95) 00:13:39.809 fused_ordering(96) 00:13:39.809 fused_ordering(97) 00:13:39.809 fused_ordering(98) 00:13:39.809 fused_ordering(99) 00:13:39.809 fused_ordering(100) 00:13:39.809 fused_ordering(101) 00:13:39.809 fused_ordering(102) 00:13:39.809 fused_ordering(103) 00:13:39.809 fused_ordering(104) 00:13:39.809 fused_ordering(105) 00:13:39.809 fused_ordering(106) 00:13:39.809 fused_ordering(107) 00:13:39.809 fused_ordering(108) 00:13:39.809 fused_ordering(109) 00:13:39.809 fused_ordering(110) 00:13:39.809 fused_ordering(111) 00:13:39.809 fused_ordering(112) 00:13:39.809 fused_ordering(113) 00:13:39.809 fused_ordering(114) 00:13:39.809 fused_ordering(115) 00:13:39.809 fused_ordering(116) 00:13:39.809 fused_ordering(117) 00:13:39.809 fused_ordering(118) 00:13:39.809 fused_ordering(119) 00:13:39.809 fused_ordering(120) 00:13:39.809 fused_ordering(121) 00:13:39.809 fused_ordering(122) 00:13:39.809 fused_ordering(123) 00:13:39.809 fused_ordering(124) 00:13:39.809 fused_ordering(125) 00:13:39.809 fused_ordering(126) 00:13:39.809 fused_ordering(127) 00:13:39.809 fused_ordering(128) 00:13:39.809 fused_ordering(129) 00:13:39.809 fused_ordering(130) 00:13:39.809 fused_ordering(131) 00:13:39.809 fused_ordering(132) 00:13:39.809 fused_ordering(133) 00:13:39.809 fused_ordering(134) 00:13:39.809 fused_ordering(135) 00:13:39.809 fused_ordering(136) 00:13:39.809 fused_ordering(137) 00:13:39.809 fused_ordering(138) 00:13:39.809 fused_ordering(139) 00:13:39.809 fused_ordering(140) 00:13:39.809 fused_ordering(141) 00:13:39.809 fused_ordering(142) 00:13:39.809 fused_ordering(143) 00:13:39.809 fused_ordering(144) 00:13:39.809 fused_ordering(145) 00:13:39.809 fused_ordering(146) 00:13:39.809 fused_ordering(147) 00:13:39.809 fused_ordering(148) 00:13:39.809 fused_ordering(149) 00:13:39.809 fused_ordering(150) 
00:13:39.809 fused_ordering(151) 00:13:39.809 fused_ordering(152) 00:13:39.809 fused_ordering(153) 00:13:39.809 fused_ordering(154) 00:13:39.809 fused_ordering(155) 00:13:39.809 fused_ordering(156) 00:13:39.809 fused_ordering(157) 00:13:39.809 fused_ordering(158) 00:13:39.809 fused_ordering(159) 00:13:39.809 fused_ordering(160) 00:13:39.809 fused_ordering(161) 00:13:39.809 fused_ordering(162) 00:13:39.809 fused_ordering(163) 00:13:39.809 fused_ordering(164) 00:13:39.809 fused_ordering(165) 00:13:39.809 fused_ordering(166) 00:13:39.809 fused_ordering(167) 00:13:39.809 fused_ordering(168) 00:13:39.809 fused_ordering(169) 00:13:39.809 fused_ordering(170) 00:13:39.809 fused_ordering(171) 00:13:39.809 fused_ordering(172) 00:13:39.809 fused_ordering(173) 00:13:39.809 fused_ordering(174) 00:13:39.809 fused_ordering(175) 00:13:39.809 fused_ordering(176) 00:13:39.810 fused_ordering(177) 00:13:39.810 fused_ordering(178) 00:13:39.810 fused_ordering(179) 00:13:39.810 fused_ordering(180) 00:13:39.810 fused_ordering(181) 00:13:39.810 fused_ordering(182) 00:13:39.810 fused_ordering(183) 00:13:39.810 fused_ordering(184) 00:13:39.810 fused_ordering(185) 00:13:39.810 fused_ordering(186) 00:13:39.810 fused_ordering(187) 00:13:39.810 fused_ordering(188) 00:13:39.810 fused_ordering(189) 00:13:39.810 fused_ordering(190) 00:13:39.810 fused_ordering(191) 00:13:39.810 fused_ordering(192) 00:13:39.810 fused_ordering(193) 00:13:39.810 fused_ordering(194) 00:13:39.810 fused_ordering(195) 00:13:39.810 fused_ordering(196) 00:13:39.810 fused_ordering(197) 00:13:39.810 fused_ordering(198) 00:13:39.810 fused_ordering(199) 00:13:39.810 fused_ordering(200) 00:13:39.810 fused_ordering(201) 00:13:39.810 fused_ordering(202) 00:13:39.810 fused_ordering(203) 00:13:39.810 fused_ordering(204) 00:13:39.810 fused_ordering(205) 00:13:40.071 fused_ordering(206) 00:13:40.071 fused_ordering(207) 00:13:40.071 fused_ordering(208) 00:13:40.071 fused_ordering(209) 00:13:40.071 fused_ordering(210) 00:13:40.071 
fused_ordering(211) 00:13:40.071 fused_ordering(212) 00:13:40.071 fused_ordering(213) 00:13:40.071 fused_ordering(214) 00:13:40.071 fused_ordering(215) 00:13:40.071 fused_ordering(216) 00:13:40.071 fused_ordering(217) 00:13:40.071 fused_ordering(218) 00:13:40.071 fused_ordering(219) 00:13:40.071 fused_ordering(220) 00:13:40.071 fused_ordering(221) 00:13:40.071 fused_ordering(222) 00:13:40.071 fused_ordering(223) 00:13:40.071 fused_ordering(224) 00:13:40.071 fused_ordering(225) 00:13:40.071 fused_ordering(226) 00:13:40.071 fused_ordering(227) 00:13:40.071 fused_ordering(228) 00:13:40.071 fused_ordering(229) 00:13:40.071 fused_ordering(230) 00:13:40.071 fused_ordering(231) 00:13:40.071 fused_ordering(232) 00:13:40.071 fused_ordering(233) 00:13:40.071 fused_ordering(234) 00:13:40.071 fused_ordering(235) 00:13:40.071 fused_ordering(236) 00:13:40.071 fused_ordering(237) 00:13:40.071 fused_ordering(238) 00:13:40.071 fused_ordering(239) 00:13:40.071 fused_ordering(240) 00:13:40.071 fused_ordering(241) 00:13:40.071 fused_ordering(242) 00:13:40.071 fused_ordering(243) 00:13:40.071 fused_ordering(244) 00:13:40.071 fused_ordering(245) 00:13:40.071 fused_ordering(246) 00:13:40.071 fused_ordering(247) 00:13:40.071 fused_ordering(248) 00:13:40.071 fused_ordering(249) 00:13:40.071 fused_ordering(250) 00:13:40.071 fused_ordering(251) 00:13:40.071 fused_ordering(252) 00:13:40.071 fused_ordering(253) 00:13:40.071 fused_ordering(254) 00:13:40.071 fused_ordering(255) 00:13:40.071 fused_ordering(256) 00:13:40.071 fused_ordering(257) 00:13:40.071 fused_ordering(258) 00:13:40.071 fused_ordering(259) 00:13:40.071 fused_ordering(260) 00:13:40.071 fused_ordering(261) 00:13:40.071 fused_ordering(262) 00:13:40.071 fused_ordering(263) 00:13:40.071 fused_ordering(264) 00:13:40.071 fused_ordering(265) 00:13:40.071 fused_ordering(266) 00:13:40.071 fused_ordering(267) 00:13:40.071 fused_ordering(268) 00:13:40.071 fused_ordering(269) 00:13:40.071 fused_ordering(270) 00:13:40.071 fused_ordering(271) 
00:13:40.071 fused_ordering(272) 00:13:40.071 fused_ordering(273) 00:13:40.071 fused_ordering(274) 00:13:40.071 fused_ordering(275) 00:13:40.071 fused_ordering(276) 00:13:40.071 fused_ordering(277) 00:13:40.071 fused_ordering(278) 00:13:40.071 fused_ordering(279) 00:13:40.071 fused_ordering(280) 00:13:40.071 fused_ordering(281) 00:13:40.071 fused_ordering(282) 00:13:40.071 fused_ordering(283) 00:13:40.071 fused_ordering(284) 00:13:40.071 fused_ordering(285) 00:13:40.071 fused_ordering(286) 00:13:40.071 fused_ordering(287) 00:13:40.071 fused_ordering(288) 00:13:40.071 fused_ordering(289) 00:13:40.071 fused_ordering(290) 00:13:40.071 fused_ordering(291) 00:13:40.071 fused_ordering(292) 00:13:40.071 fused_ordering(293) 00:13:40.071 fused_ordering(294) 00:13:40.071 fused_ordering(295) 00:13:40.071 fused_ordering(296) 00:13:40.071 fused_ordering(297) 00:13:40.071 fused_ordering(298) 00:13:40.071 fused_ordering(299) 00:13:40.071 fused_ordering(300) 00:13:40.071 fused_ordering(301) 00:13:40.071 fused_ordering(302) 00:13:40.071 fused_ordering(303) 00:13:40.071 fused_ordering(304) 00:13:40.071 fused_ordering(305) 00:13:40.071 fused_ordering(306) 00:13:40.071 fused_ordering(307) 00:13:40.071 fused_ordering(308) 00:13:40.071 fused_ordering(309) 00:13:40.071 fused_ordering(310) 00:13:40.071 fused_ordering(311) 00:13:40.071 fused_ordering(312) 00:13:40.071 fused_ordering(313) 00:13:40.071 fused_ordering(314) 00:13:40.071 fused_ordering(315) 00:13:40.071 fused_ordering(316) 00:13:40.071 fused_ordering(317) 00:13:40.071 fused_ordering(318) 00:13:40.071 fused_ordering(319) 00:13:40.071 fused_ordering(320) 00:13:40.071 fused_ordering(321) 00:13:40.071 fused_ordering(322) 00:13:40.071 fused_ordering(323) 00:13:40.071 fused_ordering(324) 00:13:40.071 fused_ordering(325) 00:13:40.071 fused_ordering(326) 00:13:40.071 fused_ordering(327) 00:13:40.071 fused_ordering(328) 00:13:40.071 fused_ordering(329) 00:13:40.071 fused_ordering(330) 00:13:40.071 fused_ordering(331) 00:13:40.071 
fused_ordering(332) 00:13:40.071 fused_ordering(333) 00:13:40.071 fused_ordering(334) 00:13:40.071 fused_ordering(335) 00:13:40.071 fused_ordering(336) 00:13:40.071 fused_ordering(337) 00:13:40.071 fused_ordering(338) 00:13:40.071 fused_ordering(339) 00:13:40.071 fused_ordering(340) 00:13:40.071 fused_ordering(341) 00:13:40.071 fused_ordering(342) 00:13:40.071 fused_ordering(343) 00:13:40.071 fused_ordering(344) 00:13:40.071 fused_ordering(345) 00:13:40.071 fused_ordering(346) 00:13:40.071 fused_ordering(347) 00:13:40.071 fused_ordering(348) 00:13:40.071 fused_ordering(349) 00:13:40.071 fused_ordering(350) 00:13:40.071 fused_ordering(351) 00:13:40.071 fused_ordering(352) 00:13:40.071 fused_ordering(353) 00:13:40.071 fused_ordering(354) 00:13:40.071 fused_ordering(355) 00:13:40.071 fused_ordering(356) 00:13:40.071 fused_ordering(357) 00:13:40.071 fused_ordering(358) 00:13:40.071 fused_ordering(359) 00:13:40.071 fused_ordering(360) 00:13:40.071 fused_ordering(361) 00:13:40.071 fused_ordering(362) 00:13:40.071 fused_ordering(363) 00:13:40.071 fused_ordering(364) 00:13:40.071 fused_ordering(365) 00:13:40.071 fused_ordering(366) 00:13:40.071 fused_ordering(367) 00:13:40.071 fused_ordering(368) 00:13:40.071 fused_ordering(369) 00:13:40.071 fused_ordering(370) 00:13:40.071 fused_ordering(371) 00:13:40.071 fused_ordering(372) 00:13:40.071 fused_ordering(373) 00:13:40.071 fused_ordering(374) 00:13:40.071 fused_ordering(375) 00:13:40.071 fused_ordering(376) 00:13:40.071 fused_ordering(377) 00:13:40.071 fused_ordering(378) 00:13:40.071 fused_ordering(379) 00:13:40.071 fused_ordering(380) 00:13:40.071 fused_ordering(381) 00:13:40.071 fused_ordering(382) 00:13:40.071 fused_ordering(383) 00:13:40.071 fused_ordering(384) 00:13:40.071 fused_ordering(385) 00:13:40.071 fused_ordering(386) 00:13:40.071 fused_ordering(387) 00:13:40.071 fused_ordering(388) 00:13:40.071 fused_ordering(389) 00:13:40.071 fused_ordering(390) 00:13:40.071 fused_ordering(391) 00:13:40.071 fused_ordering(392) 
00:13:40.071 fused_ordering(393) 00:13:40.071 fused_ordering(394) 00:13:40.071 fused_ordering(395) 00:13:40.071 fused_ordering(396) 00:13:40.071 fused_ordering(397) 00:13:40.071 fused_ordering(398) 00:13:40.071 fused_ordering(399) 00:13:40.071 fused_ordering(400) 00:13:40.071 fused_ordering(401) 00:13:40.071 fused_ordering(402) 00:13:40.071 fused_ordering(403) 00:13:40.071 fused_ordering(404) 00:13:40.071 fused_ordering(405) 00:13:40.071 fused_ordering(406) 00:13:40.071 fused_ordering(407) 00:13:40.071 fused_ordering(408) 00:13:40.071 fused_ordering(409) 00:13:40.071 fused_ordering(410) 00:13:40.643 fused_ordering(411) 00:13:40.643 fused_ordering(412) 00:13:40.643 fused_ordering(413) 00:13:40.643 fused_ordering(414) 00:13:40.643 fused_ordering(415) 00:13:40.643 fused_ordering(416) 00:13:40.643 fused_ordering(417) 00:13:40.643 fused_ordering(418) 00:13:40.643 fused_ordering(419) 00:13:40.643 fused_ordering(420) 00:13:40.643 fused_ordering(421) 00:13:40.643 fused_ordering(422) 00:13:40.643 fused_ordering(423) 00:13:40.643 fused_ordering(424) 00:13:40.643 fused_ordering(425) 00:13:40.643 fused_ordering(426) 00:13:40.643 fused_ordering(427) 00:13:40.643 fused_ordering(428) 00:13:40.643 fused_ordering(429) 00:13:40.643 fused_ordering(430) 00:13:40.643 fused_ordering(431) 00:13:40.643 fused_ordering(432) 00:13:40.643 fused_ordering(433) 00:13:40.643 fused_ordering(434) 00:13:40.643 fused_ordering(435) 00:13:40.643 fused_ordering(436) 00:13:40.643 fused_ordering(437) 00:13:40.643 fused_ordering(438) 00:13:40.643 fused_ordering(439) 00:13:40.643 fused_ordering(440) 00:13:40.643 fused_ordering(441) 00:13:40.643 fused_ordering(442) 00:13:40.643 fused_ordering(443) 00:13:40.643 fused_ordering(444) 00:13:40.643 fused_ordering(445) 00:13:40.643 fused_ordering(446) 00:13:40.643 fused_ordering(447) 00:13:40.643 fused_ordering(448) 00:13:40.643 fused_ordering(449) 00:13:40.643 fused_ordering(450) 00:13:40.643 fused_ordering(451) 00:13:40.643 fused_ordering(452) 00:13:40.643 
fused_ordering(453) 00:13:40.643 fused_ordering(454) 00:13:40.643 fused_ordering(455) 00:13:40.643 fused_ordering(456) 00:13:40.643 fused_ordering(457) 00:13:40.643 fused_ordering(458) 00:13:40.643 fused_ordering(459) 00:13:40.643 fused_ordering(460) 00:13:40.643 fused_ordering(461) 00:13:40.643 fused_ordering(462) 00:13:40.643 fused_ordering(463) 00:13:40.643 fused_ordering(464) 00:13:40.643 fused_ordering(465) 00:13:40.643 fused_ordering(466) 00:13:40.643 fused_ordering(467) 00:13:40.643 fused_ordering(468) 00:13:40.643 fused_ordering(469) 00:13:40.643 fused_ordering(470) 00:13:40.643 fused_ordering(471) 00:13:40.643 fused_ordering(472) 00:13:40.643 fused_ordering(473) 00:13:40.643 fused_ordering(474) 00:13:40.643 fused_ordering(475) 00:13:40.643 fused_ordering(476) 00:13:40.643 fused_ordering(477) 00:13:40.643 fused_ordering(478) 00:13:40.643 fused_ordering(479) 00:13:40.643 fused_ordering(480) 00:13:40.643 fused_ordering(481) 00:13:40.643 fused_ordering(482) 00:13:40.643 fused_ordering(483) 00:13:40.643 fused_ordering(484) 00:13:40.643 fused_ordering(485) 00:13:40.643 fused_ordering(486) 00:13:40.643 fused_ordering(487) 00:13:40.643 fused_ordering(488) 00:13:40.643 fused_ordering(489) 00:13:40.643 fused_ordering(490) 00:13:40.643 fused_ordering(491) 00:13:40.643 fused_ordering(492) 00:13:40.643 fused_ordering(493) 00:13:40.643 fused_ordering(494) 00:13:40.643 fused_ordering(495) 00:13:40.643 fused_ordering(496) 00:13:40.643 fused_ordering(497) 00:13:40.643 fused_ordering(498) 00:13:40.643 fused_ordering(499) 00:13:40.643 fused_ordering(500) 00:13:40.643 fused_ordering(501) 00:13:40.643 fused_ordering(502) 00:13:40.643 fused_ordering(503) 00:13:40.643 fused_ordering(504) 00:13:40.643 fused_ordering(505) 00:13:40.643 fused_ordering(506) 00:13:40.643 fused_ordering(507) 00:13:40.643 fused_ordering(508) 00:13:40.643 fused_ordering(509) 00:13:40.643 fused_ordering(510) 00:13:40.643 fused_ordering(511) 00:13:40.643 fused_ordering(512) 00:13:40.643 fused_ordering(513) 
00:13:40.643 fused_ordering(514) 00:13:40.643 fused_ordering(515) 00:13:40.643 fused_ordering(516) 00:13:40.643 fused_ordering(517) 00:13:40.643 fused_ordering(518) 00:13:40.643 fused_ordering(519) 00:13:40.643 fused_ordering(520) 00:13:40.643 fused_ordering(521) 00:13:40.643 fused_ordering(522) 00:13:40.643 fused_ordering(523) 00:13:40.643 fused_ordering(524) 00:13:40.643 fused_ordering(525) 00:13:40.643 fused_ordering(526) 00:13:40.643 fused_ordering(527) 00:13:40.643 fused_ordering(528) 00:13:40.643 fused_ordering(529) 00:13:40.643 fused_ordering(530) 00:13:40.643 fused_ordering(531) 00:13:40.643 fused_ordering(532) 00:13:40.643 fused_ordering(533) 00:13:40.643 fused_ordering(534) 00:13:40.643 fused_ordering(535) 00:13:40.643 fused_ordering(536) 00:13:40.643 fused_ordering(537) 00:13:40.643 fused_ordering(538) 00:13:40.643 fused_ordering(539) 00:13:40.643 fused_ordering(540) 00:13:40.643 fused_ordering(541) 00:13:40.643 fused_ordering(542) 00:13:40.643 fused_ordering(543) 00:13:40.643 fused_ordering(544) 00:13:40.643 fused_ordering(545) 00:13:40.643 fused_ordering(546) 00:13:40.643 fused_ordering(547) 00:13:40.643 fused_ordering(548) 00:13:40.643 fused_ordering(549) 00:13:40.643 fused_ordering(550) 00:13:40.643 fused_ordering(551) 00:13:40.643 fused_ordering(552) 00:13:40.643 fused_ordering(553) 00:13:40.643 fused_ordering(554) 00:13:40.643 fused_ordering(555) 00:13:40.643 fused_ordering(556) 00:13:40.643 fused_ordering(557) 00:13:40.643 fused_ordering(558) 00:13:40.643 fused_ordering(559) 00:13:40.643 fused_ordering(560) 00:13:40.643 fused_ordering(561) 00:13:40.643 fused_ordering(562) 00:13:40.643 fused_ordering(563) 00:13:40.643 fused_ordering(564) 00:13:40.643 fused_ordering(565) 00:13:40.643 fused_ordering(566) 00:13:40.643 fused_ordering(567) 00:13:40.643 fused_ordering(568) 00:13:40.643 fused_ordering(569) 00:13:40.643 fused_ordering(570) 00:13:40.643 fused_ordering(571) 00:13:40.643 fused_ordering(572) 00:13:40.643 fused_ordering(573) 00:13:40.643 
fused_ordering(574) 00:13:40.643 fused_ordering(575) 00:13:40.643 fused_ordering(576) 00:13:40.643 fused_ordering(577) 00:13:40.643 fused_ordering(578) 00:13:40.643 fused_ordering(579) 00:13:40.643 fused_ordering(580) 00:13:40.643 fused_ordering(581) 00:13:40.644 fused_ordering(582) 00:13:40.644 fused_ordering(583) 00:13:40.644 fused_ordering(584) 00:13:40.644 fused_ordering(585) 00:13:40.644 fused_ordering(586) 00:13:40.644 fused_ordering(587) 00:13:40.644 fused_ordering(588) 00:13:40.644 fused_ordering(589) 00:13:40.644 fused_ordering(590) 00:13:40.644 fused_ordering(591) 00:13:40.644 fused_ordering(592) 00:13:40.644 fused_ordering(593) 00:13:40.644 fused_ordering(594) 00:13:40.644 fused_ordering(595) 00:13:40.644 fused_ordering(596) 00:13:40.644 fused_ordering(597) 00:13:40.644 fused_ordering(598) 00:13:40.644 fused_ordering(599) 00:13:40.644 fused_ordering(600) 00:13:40.644 fused_ordering(601) 00:13:40.644 fused_ordering(602) 00:13:40.644 fused_ordering(603) 00:13:40.644 fused_ordering(604) 00:13:40.644 fused_ordering(605) 00:13:40.644 fused_ordering(606) 00:13:40.644 fused_ordering(607) 00:13:40.644 fused_ordering(608) 00:13:40.644 fused_ordering(609) 00:13:40.644 fused_ordering(610) 00:13:40.644 fused_ordering(611) 00:13:40.644 fused_ordering(612) 00:13:40.644 fused_ordering(613) 00:13:40.644 fused_ordering(614) 00:13:40.644 fused_ordering(615) 00:13:41.215 fused_ordering(616) 00:13:41.215 fused_ordering(617) 00:13:41.215 fused_ordering(618) 00:13:41.215 fused_ordering(619) 00:13:41.215 fused_ordering(620) 00:13:41.215 fused_ordering(621) 00:13:41.215 fused_ordering(622) 00:13:41.215 fused_ordering(623) 00:13:41.215 fused_ordering(624) 00:13:41.215 fused_ordering(625) 00:13:41.215 fused_ordering(626) 00:13:41.215 fused_ordering(627) 00:13:41.215 fused_ordering(628) 00:13:41.215 fused_ordering(629) 00:13:41.215 fused_ordering(630) 00:13:41.215 fused_ordering(631) 00:13:41.215 fused_ordering(632) 00:13:41.215 fused_ordering(633) 00:13:41.215 fused_ordering(634) 
00:13:41.215 fused_ordering(635) 00:13:41.215 fused_ordering(636) 00:13:41.215 fused_ordering(637) 00:13:41.215 fused_ordering(638) 00:13:41.215 fused_ordering(639) 00:13:41.215 fused_ordering(640) 00:13:41.215 fused_ordering(641) 00:13:41.215 fused_ordering(642) 00:13:41.215 fused_ordering(643) 00:13:41.215 fused_ordering(644) 00:13:41.215 fused_ordering(645) 00:13:41.215 fused_ordering(646) 00:13:41.215 fused_ordering(647) 00:13:41.215 fused_ordering(648) 00:13:41.215 fused_ordering(649) 00:13:41.215 fused_ordering(650) 00:13:41.215 fused_ordering(651) 00:13:41.215 fused_ordering(652) 00:13:41.215 fused_ordering(653) 00:13:41.215 fused_ordering(654) 00:13:41.215 fused_ordering(655) 00:13:41.215 fused_ordering(656) 00:13:41.215 fused_ordering(657) 00:13:41.215 fused_ordering(658) 00:13:41.215 fused_ordering(659) 00:13:41.215 fused_ordering(660) 00:13:41.215 fused_ordering(661) 00:13:41.215 fused_ordering(662) 00:13:41.215 fused_ordering(663) 00:13:41.215 fused_ordering(664) 00:13:41.215 fused_ordering(665) 00:13:41.215 fused_ordering(666) 00:13:41.215 fused_ordering(667) 00:13:41.215 fused_ordering(668) 00:13:41.215 fused_ordering(669) 00:13:41.215 fused_ordering(670) 00:13:41.215 fused_ordering(671) 00:13:41.215 fused_ordering(672) 00:13:41.215 fused_ordering(673) 00:13:41.215 fused_ordering(674) 00:13:41.215 fused_ordering(675) 00:13:41.215 fused_ordering(676) 00:13:41.215 fused_ordering(677) 00:13:41.215 fused_ordering(678) 00:13:41.215 fused_ordering(679) 00:13:41.215 fused_ordering(680) 00:13:41.215 fused_ordering(681) 00:13:41.215 fused_ordering(682) 00:13:41.215 fused_ordering(683) 00:13:41.215 fused_ordering(684) 00:13:41.215 fused_ordering(685) 00:13:41.215 fused_ordering(686) 00:13:41.215 fused_ordering(687) 00:13:41.215 fused_ordering(688) 00:13:41.215 fused_ordering(689) 00:13:41.215 fused_ordering(690) 00:13:41.215 fused_ordering(691) 00:13:41.215 fused_ordering(692) 00:13:41.215 fused_ordering(693) 00:13:41.215 fused_ordering(694) 00:13:41.215 
fused_ordering(695) 00:13:41.215 fused_ordering(696) 00:13:41.215 fused_ordering(697) 00:13:41.215 fused_ordering(698) 00:13:41.215 fused_ordering(699) 00:13:41.215 fused_ordering(700) 00:13:41.215 fused_ordering(701) 00:13:41.215 fused_ordering(702) 00:13:41.215 fused_ordering(703) 00:13:41.215 fused_ordering(704) 00:13:41.215 fused_ordering(705) 00:13:41.215 fused_ordering(706) 00:13:41.215 fused_ordering(707) 00:13:41.215 fused_ordering(708) 00:13:41.215 fused_ordering(709) 00:13:41.215 fused_ordering(710) 00:13:41.215 fused_ordering(711) 00:13:41.215 fused_ordering(712) 00:13:41.215 fused_ordering(713) 00:13:41.215 fused_ordering(714) 00:13:41.215 fused_ordering(715) 00:13:41.215 fused_ordering(716) 00:13:41.215 fused_ordering(717) 00:13:41.215 fused_ordering(718) 00:13:41.215 fused_ordering(719) 00:13:41.215 fused_ordering(720) 00:13:41.215 fused_ordering(721) 00:13:41.215 fused_ordering(722) 00:13:41.215 fused_ordering(723) 00:13:41.215 fused_ordering(724) 00:13:41.215 fused_ordering(725) 00:13:41.215 fused_ordering(726) 00:13:41.215 fused_ordering(727) 00:13:41.215 fused_ordering(728) 00:13:41.215 fused_ordering(729) 00:13:41.215 fused_ordering(730) 00:13:41.215 fused_ordering(731) 00:13:41.215 fused_ordering(732) 00:13:41.215 fused_ordering(733) 00:13:41.215 fused_ordering(734) 00:13:41.215 fused_ordering(735) 00:13:41.215 fused_ordering(736) 00:13:41.216 fused_ordering(737) 00:13:41.216 fused_ordering(738) 00:13:41.216 fused_ordering(739) 00:13:41.216 fused_ordering(740) 00:13:41.216 fused_ordering(741) 00:13:41.216 fused_ordering(742) 00:13:41.216 fused_ordering(743) 00:13:41.216 fused_ordering(744) 00:13:41.216 fused_ordering(745) 00:13:41.216 fused_ordering(746) 00:13:41.216 fused_ordering(747) 00:13:41.216 fused_ordering(748) 00:13:41.216 fused_ordering(749) 00:13:41.216 fused_ordering(750) 00:13:41.216 fused_ordering(751) 00:13:41.216 fused_ordering(752) 00:13:41.216 fused_ordering(753) 00:13:41.216 fused_ordering(754) 00:13:41.216 fused_ordering(755) 
00:13:41.216 fused_ordering(756) 00:13:41.216 fused_ordering(757) 00:13:41.216 fused_ordering(758) 00:13:41.216 fused_ordering(759) 00:13:41.216 fused_ordering(760) 00:13:41.216 fused_ordering(761) 00:13:41.216 fused_ordering(762) 00:13:41.216 fused_ordering(763) 00:13:41.216 fused_ordering(764) 00:13:41.216 fused_ordering(765) 00:13:41.216 fused_ordering(766) 00:13:41.216 fused_ordering(767) 00:13:41.216 fused_ordering(768) 00:13:41.216 fused_ordering(769) 00:13:41.216 fused_ordering(770) 00:13:41.216 fused_ordering(771) 00:13:41.216 fused_ordering(772) 00:13:41.216 fused_ordering(773) 00:13:41.216 fused_ordering(774) 00:13:41.216 fused_ordering(775) 00:13:41.216 fused_ordering(776) 00:13:41.216 fused_ordering(777) 00:13:41.216 fused_ordering(778) 00:13:41.216 fused_ordering(779) 00:13:41.216 fused_ordering(780) 00:13:41.216 fused_ordering(781) 00:13:41.216 fused_ordering(782) 00:13:41.216 fused_ordering(783) 00:13:41.216 fused_ordering(784) 00:13:41.216 fused_ordering(785) 00:13:41.216 fused_ordering(786) 00:13:41.216 fused_ordering(787) 00:13:41.216 fused_ordering(788) 00:13:41.216 fused_ordering(789) 00:13:41.216 fused_ordering(790) 00:13:41.216 fused_ordering(791) 00:13:41.216 fused_ordering(792) 00:13:41.216 fused_ordering(793) 00:13:41.216 fused_ordering(794) 00:13:41.216 fused_ordering(795) 00:13:41.216 fused_ordering(796) 00:13:41.216 fused_ordering(797) 00:13:41.216 fused_ordering(798) 00:13:41.216 fused_ordering(799) 00:13:41.216 fused_ordering(800) 00:13:41.216 fused_ordering(801) 00:13:41.216 fused_ordering(802) 00:13:41.216 fused_ordering(803) 00:13:41.216 fused_ordering(804) 00:13:41.216 fused_ordering(805) 00:13:41.216 fused_ordering(806) 00:13:41.216 fused_ordering(807) 00:13:41.216 fused_ordering(808) 00:13:41.216 fused_ordering(809) 00:13:41.216 fused_ordering(810) 00:13:41.216 fused_ordering(811) 00:13:41.216 fused_ordering(812) 00:13:41.216 fused_ordering(813) 00:13:41.216 fused_ordering(814) 00:13:41.216 fused_ordering(815) 00:13:41.216 
fused_ordering(816) 00:13:41.216 fused_ordering(817) 00:13:41.216 fused_ordering(818) 00:13:41.216 fused_ordering(819) 00:13:41.216 fused_ordering(820) 00:13:41.788 [2024-10-14 14:27:22.235285] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23f8570 is same with the state(6) to be set 00:13:41.788 fused_ordering(821) 00:13:41.788 fused_ordering(822) 00:13:41.788 fused_ordering(823) 00:13:41.788 fused_ordering(824) 00:13:41.788 fused_ordering(825) 00:13:41.788 fused_ordering(826) 00:13:41.788 fused_ordering(827) 00:13:41.788 fused_ordering(828) 00:13:41.788 fused_ordering(829) 00:13:41.788 fused_ordering(830) 00:13:41.788 fused_ordering(831) 00:13:41.788 fused_ordering(832) 00:13:41.788 fused_ordering(833) 00:13:41.788 fused_ordering(834) 00:13:41.788 fused_ordering(835) 00:13:41.788 fused_ordering(836) 00:13:41.788 fused_ordering(837) 00:13:41.788 fused_ordering(838) 00:13:41.788 fused_ordering(839) 00:13:41.788 fused_ordering(840) 00:13:41.788 fused_ordering(841) 00:13:41.788 fused_ordering(842) 00:13:41.788 fused_ordering(843) 00:13:41.788 fused_ordering(844) 00:13:41.788 fused_ordering(845) 00:13:41.788 fused_ordering(846) 00:13:41.788 fused_ordering(847) 00:13:41.788 fused_ordering(848) 00:13:41.788 fused_ordering(849) 00:13:41.788 fused_ordering(850) 00:13:41.788 fused_ordering(851) 00:13:41.788 fused_ordering(852) 00:13:41.788 fused_ordering(853) 00:13:41.788 fused_ordering(854) 00:13:41.788 fused_ordering(855) 00:13:41.788 fused_ordering(856) 00:13:41.788 fused_ordering(857) 00:13:41.788 fused_ordering(858) 00:13:41.788 fused_ordering(859) 00:13:41.788 fused_ordering(860) 00:13:41.788 fused_ordering(861) 00:13:41.788 fused_ordering(862) 00:13:41.788 fused_ordering(863) 00:13:41.788 fused_ordering(864) 00:13:41.788 fused_ordering(865) 00:13:41.788 fused_ordering(866) 00:13:41.788 fused_ordering(867) 00:13:41.788 fused_ordering(868) 00:13:41.788 fused_ordering(869) 00:13:41.788 fused_ordering(870) 00:13:41.788 fused_ordering(871)
00:13:41.788 fused_ordering(872) 00:13:41.788 fused_ordering(873) 00:13:41.788 fused_ordering(874) 00:13:41.788 fused_ordering(875) 00:13:41.788 fused_ordering(876) 00:13:41.788 fused_ordering(877) 00:13:41.788 fused_ordering(878) 00:13:41.788 fused_ordering(879) 00:13:41.788 fused_ordering(880) 00:13:41.788 fused_ordering(881) 00:13:41.788 fused_ordering(882) 00:13:41.788 fused_ordering(883) 00:13:41.788 fused_ordering(884) 00:13:41.788 fused_ordering(885) 00:13:41.788 fused_ordering(886) 00:13:41.788 fused_ordering(887) 00:13:41.788 fused_ordering(888) 00:13:41.788 fused_ordering(889) 00:13:41.788 fused_ordering(890) 00:13:41.788 fused_ordering(891) 00:13:41.788 fused_ordering(892) 00:13:41.788 fused_ordering(893) 00:13:41.788 fused_ordering(894) 00:13:41.788 fused_ordering(895) 00:13:41.788 fused_ordering(896) 00:13:41.788 fused_ordering(897) 00:13:41.788 fused_ordering(898) 00:13:41.788 fused_ordering(899) 00:13:41.788 fused_ordering(900) 00:13:41.788 fused_ordering(901) 00:13:41.788 fused_ordering(902) 00:13:41.788 fused_ordering(903) 00:13:41.788 fused_ordering(904) 00:13:41.788 fused_ordering(905) 00:13:41.788 fused_ordering(906) 00:13:41.788 fused_ordering(907) 00:13:41.788 fused_ordering(908) 00:13:41.788 fused_ordering(909) 00:13:41.788 fused_ordering(910) 00:13:41.788 fused_ordering(911) 00:13:41.788 fused_ordering(912) 00:13:41.788 fused_ordering(913) 00:13:41.788 fused_ordering(914) 00:13:41.788 fused_ordering(915) 00:13:41.788 fused_ordering(916) 00:13:41.788 fused_ordering(917) 00:13:41.788 fused_ordering(918) 00:13:41.788 fused_ordering(919) 00:13:41.788 fused_ordering(920) 00:13:41.788 fused_ordering(921) 00:13:41.788 fused_ordering(922) 00:13:41.788 fused_ordering(923) 00:13:41.788 fused_ordering(924) 00:13:41.788 fused_ordering(925) 00:13:41.788 fused_ordering(926) 00:13:41.788 fused_ordering(927) 00:13:41.788 fused_ordering(928) 00:13:41.788 fused_ordering(929) 00:13:41.788 fused_ordering(930) 00:13:41.788 fused_ordering(931) 00:13:41.788 
fused_ordering(932) 00:13:41.788 fused_ordering(933) 00:13:41.788 fused_ordering(934) 00:13:41.788 fused_ordering(935) 00:13:41.788 fused_ordering(936) 00:13:41.788 fused_ordering(937) 00:13:41.788 fused_ordering(938) 00:13:41.788 fused_ordering(939) 00:13:41.788 fused_ordering(940) 00:13:41.788 fused_ordering(941) 00:13:41.788 fused_ordering(942) 00:13:41.788 fused_ordering(943) 00:13:41.788 fused_ordering(944) 00:13:41.788 fused_ordering(945) 00:13:41.788 fused_ordering(946) 00:13:41.788 fused_ordering(947) 00:13:41.788 fused_ordering(948) 00:13:41.788 fused_ordering(949) 00:13:41.788 fused_ordering(950) 00:13:41.788 fused_ordering(951) 00:13:41.788 fused_ordering(952) 00:13:41.788 fused_ordering(953) 00:13:41.788 fused_ordering(954) 00:13:41.788 fused_ordering(955) 00:13:41.788 fused_ordering(956) 00:13:41.788 fused_ordering(957) 00:13:41.788 fused_ordering(958) 00:13:41.788 fused_ordering(959) 00:13:41.788 fused_ordering(960) 00:13:41.788 fused_ordering(961) 00:13:41.788 fused_ordering(962) 00:13:41.788 fused_ordering(963) 00:13:41.788 fused_ordering(964) 00:13:41.788 fused_ordering(965) 00:13:41.788 fused_ordering(966) 00:13:41.788 fused_ordering(967) 00:13:41.788 fused_ordering(968) 00:13:41.788 fused_ordering(969) 00:13:41.788 fused_ordering(970) 00:13:41.788 fused_ordering(971) 00:13:41.788 fused_ordering(972) 00:13:41.788 fused_ordering(973) 00:13:41.788 fused_ordering(974) 00:13:41.788 fused_ordering(975) 00:13:41.788 fused_ordering(976) 00:13:41.788 fused_ordering(977) 00:13:41.788 fused_ordering(978) 00:13:41.788 fused_ordering(979) 00:13:41.788 fused_ordering(980) 00:13:41.788 fused_ordering(981) 00:13:41.788 fused_ordering(982) 00:13:41.788 fused_ordering(983) 00:13:41.788 fused_ordering(984) 00:13:41.788 fused_ordering(985) 00:13:41.788 fused_ordering(986) 00:13:41.788 fused_ordering(987) 00:13:41.788 fused_ordering(988) 00:13:41.788 fused_ordering(989) 00:13:41.788 fused_ordering(990) 00:13:41.788 fused_ordering(991) 00:13:41.788 fused_ordering(992) 
00:13:41.788 fused_ordering(993) 00:13:41.788 fused_ordering(994) 00:13:41.788 fused_ordering(995) 00:13:41.788 fused_ordering(996) 00:13:41.788 fused_ordering(997) 00:13:41.788 fused_ordering(998) 00:13:41.788 fused_ordering(999) 00:13:41.788 fused_ordering(1000) 00:13:41.788 fused_ordering(1001) 00:13:41.788 fused_ordering(1002) 00:13:41.788 fused_ordering(1003) 00:13:41.788 fused_ordering(1004) 00:13:41.788 fused_ordering(1005) 00:13:41.788 fused_ordering(1006) 00:13:41.788 fused_ordering(1007) 00:13:41.788 fused_ordering(1008) 00:13:41.788 fused_ordering(1009) 00:13:41.788 fused_ordering(1010) 00:13:41.788 fused_ordering(1011) 00:13:41.788 fused_ordering(1012) 00:13:41.788 fused_ordering(1013) 00:13:41.788 fused_ordering(1014) 00:13:41.788 fused_ordering(1015) 00:13:41.788 fused_ordering(1016) 00:13:41.788 fused_ordering(1017) 00:13:41.788 fused_ordering(1018) 00:13:41.788 fused_ordering(1019) 00:13:41.788 fused_ordering(1020) 00:13:41.788 fused_ordering(1021) 00:13:41.788 fused_ordering(1022) 00:13:41.788 fused_ordering(1023) 00:13:41.788 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:41.788 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:41.788 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:41.788 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:13:41.788 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:41.788 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:13:41.788 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:41.789 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:41.789 rmmod nvme_tcp 00:13:41.789 
rmmod nvme_fabrics 00:13:41.789 rmmod nvme_keyring 00:13:41.789 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:41.789 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:13:41.789 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:13:41.789 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@515 -- # '[' -n 3325825 ']' 00:13:41.789 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # killprocess 3325825 00:13:41.789 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 3325825 ']' 00:13:41.789 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 3325825 00:13:41.789 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:13:41.789 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:41.789 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3325825 00:13:41.789 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:41.789 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:41.789 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3325825' 00:13:41.789 killing process with pid 3325825 00:13:41.789 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 3325825 00:13:41.789 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 3325825 00:13:42.050 14:27:22 
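The killprocess trace above first probes the target PID with `kill -0` and then recovers its name with `ps --no-headers -o comm=` before deciding whether to kill it. A minimal standalone sketch of that liveness-and-name check, using the current shell's own PID as a stand-in target:

```shell
#!/usr/bin/env bash
# Sketch of the liveness check traced in killprocess above:
# kill -0 delivers no signal; it only tests whether the PID exists and is
# signalable, and ps -o comm= recovers the process name for the log message.
pid=$$   # stand-in PID: this shell itself

if kill -0 "$pid" 2>/dev/null; then
    name=$(ps --no-headers -o comm= -p "$pid")
    echo "process $pid ($name) is running"
else
    echo "process $pid is gone"
fi
```

The same probe is why the script can print "killing process with pid ..." with a name attached before sending the actual signal.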
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:42.050 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:13:42.050 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:13:42.050 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:13:42.050 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # iptables-save 00:13:42.050 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:13:42.050 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # iptables-restore 00:13:42.050 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:42.050 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:42.050 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:42.050 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:42.050 14:27:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:43.964 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:43.964 00:13:43.964 real 0m13.281s 00:13:43.964 user 0m6.946s 00:13:43.964 sys 0m6.993s 00:13:43.964 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:43.964 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:43.964 ************************************ 00:13:43.964 END TEST nvmf_fused_ordering 00:13:43.964 
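The iptr step traced above restores the firewall state minus anything the test added, via `iptables-save | grep -v SPDK_NVMF | iptables-restore`. A dry-run sketch of just the filtering stage on fabricated rule lines (the two rules shown are illustrative, not taken from this run; the `iptables-restore` apply step is omitted since it needs root):

```shell
#!/usr/bin/env bash
# Dry-run of the rule-filtering stage used by iptr above: drop every saved
# rule that mentions the SPDK_NVMF tag, keep the rest untouched.
printf '%s\n' \
    '-A INPUT -p tcp --dport 4420 -m comment --comment SPDK_NVMF -j ACCEPT' \
    '-A INPUT -p tcp --dport 22 -j ACCEPT' \
  | grep -v SPDK_NVMF
```

Filtering on a fixed tag like this lets the cleanup remove only test-owned rules without tracking each one individually.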
************************************ 00:13:43.964 14:27:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:13:43.964 14:27:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:43.964 14:27:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:43.964 14:27:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:44.226 ************************************ 00:13:44.226 START TEST nvmf_ns_masking 00:13:44.226 ************************************ 00:13:44.226 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:13:44.226 * Looking for test storage... 00:13:44.226 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:44.226 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:44.226 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lcov --version 00:13:44.226 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:44.226 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:44.226 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:44.226 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:44.226 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:44.226 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:13:44.226 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:13:44.226 14:27:24 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:13:44.226 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:13:44.226 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:13:44.226 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:13:44.226 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:13:44.226 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:44.226 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:13:44.226 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:13:44.226 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:44.226 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:44.226 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:13:44.226 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:13:44.226 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:44.226 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:13:44.226 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:13:44.226 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:13:44.226 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:13:44.226 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:44.226 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:13:44.226 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:13:44.226 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:44.226 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:44.226 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:13:44.226 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:44.226 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:44.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:44.226 --rc genhtml_branch_coverage=1 00:13:44.226 --rc genhtml_function_coverage=1 00:13:44.226 --rc genhtml_legend=1 00:13:44.226 --rc geninfo_all_blocks=1 00:13:44.226 --rc 
geninfo_unexecuted_blocks=1 00:13:44.226 00:13:44.226 ' 00:13:44.226 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:44.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:44.226 --rc genhtml_branch_coverage=1 00:13:44.226 --rc genhtml_function_coverage=1 00:13:44.226 --rc genhtml_legend=1 00:13:44.226 --rc geninfo_all_blocks=1 00:13:44.226 --rc geninfo_unexecuted_blocks=1 00:13:44.226 00:13:44.226 ' 00:13:44.226 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:44.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:44.226 --rc genhtml_branch_coverage=1 00:13:44.226 --rc genhtml_function_coverage=1 00:13:44.226 --rc genhtml_legend=1 00:13:44.226 --rc geninfo_all_blocks=1 00:13:44.226 --rc geninfo_unexecuted_blocks=1 00:13:44.226 00:13:44.226 ' 00:13:44.226 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:44.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:44.226 --rc genhtml_branch_coverage=1 00:13:44.226 --rc genhtml_function_coverage=1 00:13:44.226 --rc genhtml_legend=1 00:13:44.226 --rc geninfo_all_blocks=1 00:13:44.226 --rc geninfo_unexecuted_blocks=1 00:13:44.226 00:13:44.226 ' 00:13:44.226 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:44.226 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:13:44.226 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:44.226 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:44.226 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:44.226 14:27:24 
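The cmp_versions trace above (invoked as `lt 1.15 2`) splits each version string into components and compares them position by position, padding the shorter one with zeros. A minimal sketch of that comparison; the function name `ver_lt` is illustrative, and for simplicity it splits on dots only, while the traced script also splits on `-` and `:`:

```shell
#!/usr/bin/env bash
# Component-wise version comparison in the spirit of cmp_versions above:
# split on dots, compare numerically, treat missing components as 0.
ver_lt() {
    local IFS=.
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i x y
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        x=${a[i]:-0}; y=${b[i]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1   # versions are equal
}

ver_lt 1.15 2 && echo "1.15 < 2"
```

This matches the trace's outcome: the first components 1 and 2 already decide the comparison, so 1.15 is ordered before 2.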
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:44.226 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:44.226 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:44.226 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:44.226 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:44.227 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:44.227 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:44.227 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:44.227 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:44.227 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:44.227 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:44.227 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:44.227 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:44.227 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:44.227 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:13:44.227 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:13:44.227 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:44.227 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:44.227 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.227 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.227 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.227 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:13:44.227 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.227 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:13:44.227 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:44.227 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:44.227 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:44.227 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:44.227 14:27:24 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:44.227 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:44.227 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:44.227 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:44.227 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:44.227 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:44.227 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:44.227 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:13:44.227 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:13:44.493 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:13:44.493 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=aeed4f7e-4dbf-474c-ad2f-4f8d7426591a 00:13:44.493 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:13:44.493 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=53a5e3f9-0a83-49b3-b5c9-5c73d79e3c7e 00:13:44.493 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:13:44.493 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:13:44.493 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:13:44.493 14:27:24 
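The `[: : integer expression expected` message above comes from the traced `'[' '' -eq 1 ']'`: the `-eq` operator requires integer operands, and an empty string is not one, so the test builtin itself errors (exit status 2) rather than evaluating to false. A minimal reproduction plus a defensive form; the variable name `val` is illustrative:

```shell
#!/usr/bin/env bash
# Reproduce the error class seen above: -eq requires integers, so an empty
# operand makes [ fail with "integer expression expected" (exit status 2).
val=''
[ "$val" -eq 1 ] 2>/dev/null
echo "unguarded test exit status: $?"   # 2: the comparison itself errored

# Defensive form: default the empty value to 0 so the operand is always numeric.
if [ "${val:-0}" -eq 1 ]; then
    echo match
else
    echo no match
fi
```

In the log this is harmless because the branch falls through either way, but the `${var:-0}` guard would silence the stderr noise.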
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:13:44.493 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=a7f7ed77-cda8-49b0-aa54-e0dc999720cc 00:13:44.493 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:13:44.493 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:13:44.493 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:44.493 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:44.493 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:44.493 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:44.493 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:44.493 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:44.493 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:44.493 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:13:44.493 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:13:44.493 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:13:44.493 14:27:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:52.646 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:52.646 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:13:52.646 14:27:32 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:52.646 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:52.646 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:52.646 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:52.646 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:52.646 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:13:52.646 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:52.646 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:13:52.646 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:13:52.646 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:13:52.646 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:13:52.646 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:13:52.646 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:13:52.646 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:52.646 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:52.646 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:52.646 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:52.646 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:52.646 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:52.646 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:52.646 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:52.646 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:52.646 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:52.646 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:52.646 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:52.646 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:52.646 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:52.646 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:52.646 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:52.646 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:52.646 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:52.646 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:52.646 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:52.646 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:52.646 14:27:32 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:52.646 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:52.646 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:52.646 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:52.646 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:52.646 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:52.646 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:52.646 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:52.646 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:52.646 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:52.646 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:52.646 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:52.646 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:52.646 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:52.646 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:52.646 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:52.646 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:52.647 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:52.647 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:52.647 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:52.647 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:52.647 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:52.647 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:52.647 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:52.647 Found net devices under 0000:31:00.0: cvl_0_0 00:13:52.647 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:52.647 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:52.647 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:52.647 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:52.647 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:52.647 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:52.647 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:52.647 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:52.647 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:52.647 Found net devices under 0000:31:00.1: 
cvl_0_1 00:13:52.647 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:52.647 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:13:52.647 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # is_hw=yes 00:13:52.647 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:13:52.647 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:13:52.647 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:13:52.647 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:52.647 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:52.647 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:52.647 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:52.647 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:52.647 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:52.647 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:52.647 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:52.647 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:52.647 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:52.647 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:13:52.647 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:52.647 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:52.647 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:52.647 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:52.647 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:52.647 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:52.647 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:52.647 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:52.647 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:52.647 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:52.647 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:52.647 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:52.647 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:52.647 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.656 ms 00:13:52.647 00:13:52.647 --- 10.0.0.2 ping statistics --- 00:13:52.647 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:52.647 rtt min/avg/max/mdev = 0.656/0.656/0.656/0.000 ms 00:13:52.647 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:52.647 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:52.647 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:13:52.647 00:13:52.647 --- 10.0.0.1 ping statistics --- 00:13:52.647 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:52.647 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:13:52.647 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:52.647 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # return 0 00:13:52.647 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:52.647 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:52.647 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:13:52.647 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:13:52.647 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:52.647 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:13:52.647 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:13:52.647 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:13:52.647 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@505 -- # timing_enter 
start_nvmf_tgt 00:13:52.647 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:52.647 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:52.647 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # nvmfpid=3330711 00:13:52.647 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # waitforlisten 3330711 00:13:52.647 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:13:52.647 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 3330711 ']' 00:13:52.647 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:52.647 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:52.647 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:52.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:52.647 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:52.647 14:27:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:52.647 [2024-10-14 14:27:32.584569] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 
00:13:52.647 [2024-10-14 14:27:32.584635] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:52.647 [2024-10-14 14:27:32.659928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:52.647 [2024-10-14 14:27:32.701711] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:52.647 [2024-10-14 14:27:32.701750] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:52.647 [2024-10-14 14:27:32.701762] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:52.647 [2024-10-14 14:27:32.701768] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:52.647 [2024-10-14 14:27:32.701774] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
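The trace above shows `nvmf_tcp_init` moving the target NIC into a private network namespace, addressing both ends, opening the NVMe/TCP port, and then launching `nvmf_tgt` inside that namespace. A dry-run sketch of that sequence, reconstructed from the traced commands (the `run` echo-wrapper is illustrative only, not part of nvmf/common.sh; swap it for direct execution as root to actually apply the commands):

```shell
# Dry-run reconstruction of the nvmf_tcp_init flow traced in this log.
# Interface names and addresses mirror the log (cvl_0_0/cvl_0_1, 10.0.0.1/2).
run() { echo "+ $*"; }          # echo instead of executing, so this runs anywhere
NS=cvl_0_0_ns_spdk

run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"                          # target NIC into netns
run ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator-side IP
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target-side IP
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # open NVMe/TCP port
```

With the namespaces in place, the cross-namespace pings and the `ip netns exec "$NS" nvmf_tgt …` launch seen in the log verify and use this topology.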
00:13:52.647 [2024-10-14 14:27:32.702439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:52.908 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:52.908 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:13:52.908 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:52.908 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:52.908 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:52.908 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:52.908 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:52.908 [2024-10-14 14:27:33.588556] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:52.908 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:13:52.908 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:13:52.908 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:53.169 Malloc1 00:13:53.169 14:27:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:53.430 Malloc2 00:13:53.430 14:27:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:53.691 14:27:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:13:53.691 14:27:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:53.951 [2024-10-14 14:27:34.519859] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:53.951 14:27:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:13:53.951 14:27:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a7f7ed77-cda8-49b0-aa54-e0dc999720cc -a 10.0.0.2 -s 4420 -i 4 00:13:54.212 14:27:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:13:54.212 14:27:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:13:54.212 14:27:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:54.212 14:27:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:54.212 14:27:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:13:56.123 14:27:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:56.123 14:27:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:56.123 14:27:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # 
grep -c SPDKISFASTANDAWESOME 00:13:56.123 14:27:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:56.123 14:27:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:56.123 14:27:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:13:56.123 14:27:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:56.123 14:27:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:56.123 14:27:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:56.123 14:27:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:56.123 14:27:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:13:56.123 14:27:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:56.123 14:27:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:56.384 [ 0]:0x1 00:13:56.384 14:27:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:56.384 14:27:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:56.384 14:27:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d86602646e55435397e2de607fe691bb 00:13:56.384 14:27:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d86602646e55435397e2de607fe691bb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:56.384 14:27:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:13:56.645 14:27:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:13:56.645 14:27:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:56.645 14:27:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:56.645 [ 0]:0x1 00:13:56.645 14:27:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:56.645 14:27:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:56.645 14:27:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d86602646e55435397e2de607fe691bb 00:13:56.645 14:27:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d86602646e55435397e2de607fe691bb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:56.645 14:27:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:13:56.645 14:27:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:56.645 14:27:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:56.645 [ 1]:0x2 00:13:56.645 14:27:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:56.645 14:27:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:56.645 14:27:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=090eafdf1af447caab74212c41b50d65 00:13:56.645 14:27:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 090eafdf1af447caab74212c41b50d65 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:56.645 14:27:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:13:56.645 14:27:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:56.906 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:56.906 14:27:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:57.166 14:27:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:13:57.166 14:27:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:13:57.166 14:27:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a7f7ed77-cda8-49b0-aa54-e0dc999720cc -a 10.0.0.2 -s 4420 -i 4 00:13:57.427 14:27:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:13:57.427 14:27:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:13:57.427 14:27:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:57.427 14:27:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:13:57.427 14:27:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:13:57.427 14:27:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:13:59.973 14:27:40 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:59.973 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:59.973 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:59.973 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:59.973 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:59.973 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:13:59.973 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:59.973 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:59.973 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:59.973 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:59.973 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:13:59.973 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:13:59.973 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:13:59.973 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:13:59.973 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:59.973 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # 
type -t ns_is_visible 00:13:59.973 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:59.973 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:13:59.973 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:59.973 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:59.973 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:59.973 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:59.973 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:59.973 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:59.973 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:13:59.973 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:59.973 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:59.973 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:59.973 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:13:59.973 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:59.973 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:59.973 [ 0]:0x2 00:13:59.973 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 
-- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:59.973 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:59.973 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=090eafdf1af447caab74212c41b50d65 00:13:59.973 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 090eafdf1af447caab74212c41b50d65 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:59.973 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:59.973 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:13:59.973 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:59.973 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:59.973 [ 0]:0x1 00:13:59.973 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:59.973 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:59.973 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d86602646e55435397e2de607fe691bb 00:13:59.973 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d86602646e55435397e2de607fe691bb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:59.973 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:13:59.973 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:59.973 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@43 -- # grep 0x2 00:13:59.973 [ 1]:0x2 00:13:59.973 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:59.973 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:59.973 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=090eafdf1af447caab74212c41b50d65 00:13:59.973 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 090eafdf1af447caab74212c41b50d65 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:59.973 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:00.234 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:14:00.234 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:00.234 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:00.234 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:00.234 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:00.234 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:00.234 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:00.234 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:00.234 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 
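The visibility checks repeated throughout this trace follow one pattern: a namespace counts as visible when `nvme list-ns` reports its NSID and `nvme id-ns` returns a non-zero NGUID; after `nvmf_ns_remove_host`, the NSID disappears from the listing and the NGUID reads as all zeros. A minimal sketch of that check logic, reconstructed from the traced `ns_is_visible` calls (these helpers are a reconstruction, not the actual ns_masking.sh code, and canned strings stand in for real device output):

```shell
# Reconstructed sketch of the ns_is_visible pattern traced above.
ns_listed() {       # $1=nsid (e.g. 0x1), $2=canned `nvme list-ns` output
  printf '%s\n' "$2" | grep -q ":$1\$"
}
nguid_visible() {   # a masked namespace reports an all-zero NGUID via id-ns
  [ "$1" != "00000000000000000000000000000000" ]
}

listing='[ 0]:0x2'  # what list-ns shows once ns 1 is hidden from host1
ns_listed 0x2 "$listing" && echo "0x2 visible"
ns_listed 0x1 "$listing" || echo "0x1 masked"
```

This mirrors why the log alternates between real NGUIDs (e.g. `d866…91bb`) for visible namespaces and `0000…0000` after a host is removed from the namespace's allowed list.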
00:14:00.234 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:00.234 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:00.234 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:00.234 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:00.234 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:00.234 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:00.234 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:00.234 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:00.234 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:00.234 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:14:00.234 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:00.234 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:00.234 [ 0]:0x2 00:14:00.234 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:00.234 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:00.234 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=090eafdf1af447caab74212c41b50d65 00:14:00.234 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 
-- # [[ 090eafdf1af447caab74212c41b50d65 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:00.234 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:14:00.235 14:27:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:00.494 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:00.495 14:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:00.495 14:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:14:00.495 14:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a7f7ed77-cda8-49b0-aa54-e0dc999720cc -a 10.0.0.2 -s 4420 -i 4 00:14:00.754 14:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:00.754 14:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:00.754 14:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:00.754 14:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:14:00.754 14:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:14:00.754 14:27:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:02.665 14:27:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:02.666 14:27:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk 
-l -o NAME,SERIAL 00:14:02.666 14:27:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:02.666 14:27:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:14:02.666 14:27:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:02.666 14:27:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:14:02.666 14:27:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:02.666 14:27:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:02.926 14:27:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:02.926 14:27:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:02.926 14:27:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:14:02.926 14:27:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:02.926 14:27:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:02.926 [ 0]:0x1 00:14:02.926 14:27:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:02.926 14:27:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:02.926 14:27:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d86602646e55435397e2de607fe691bb 00:14:02.926 14:27:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d86602646e55435397e2de607fe691bb != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:02.926 14:27:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:14:02.926 14:27:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:02.926 14:27:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:02.926 [ 1]:0x2 00:14:02.926 14:27:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:02.926 14:27:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:03.186 14:27:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=090eafdf1af447caab74212c41b50d65 00:14:03.186 14:27:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 090eafdf1af447caab74212c41b50d65 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:03.186 14:27:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:03.186 14:27:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:14:03.186 14:27:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:03.186 14:27:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:03.186 14:27:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:03.186 14:27:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:03.186 14:27:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t 
ns_is_visible 00:14:03.186 14:27:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:03.186 14:27:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:03.186 14:27:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:03.186 14:27:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:03.186 14:27:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:03.186 14:27:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:03.446 14:27:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:03.446 14:27:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:03.446 14:27:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:03.446 14:27:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:03.446 14:27:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:03.446 14:27:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:03.446 14:27:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:14:03.446 14:27:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:03.446 14:27:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:03.446 [ 0]:0x2 00:14:03.446 14:27:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
jq -r .nguid 00:14:03.446 14:27:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:03.446 14:27:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=090eafdf1af447caab74212c41b50d65 00:14:03.446 14:27:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 090eafdf1af447caab74212c41b50d65 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:03.446 14:27:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:03.446 14:27:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:03.447 14:27:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:03.447 14:27:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:03.447 14:27:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:03.447 14:27:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:03.447 14:27:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:03.447 14:27:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:03.447 14:27:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # 
case "$(type -t "$arg")" in 00:14:03.447 14:27:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:03.447 14:27:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:03.447 14:27:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:03.447 [2024-10-14 14:27:44.147336] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:03.447 request: 00:14:03.447 { 00:14:03.447 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:03.447 "nsid": 2, 00:14:03.447 "host": "nqn.2016-06.io.spdk:host1", 00:14:03.447 "method": "nvmf_ns_remove_host", 00:14:03.447 "req_id": 1 00:14:03.447 } 00:14:03.447 Got JSON-RPC error response 00:14:03.447 response: 00:14:03.447 { 00:14:03.447 "code": -32602, 00:14:03.447 "message": "Invalid parameters" 00:14:03.447 } 00:14:03.708 14:27:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:03.708 14:27:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:03.708 14:27:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:03.708 14:27:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:03.708 14:27:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:14:03.708 14:27:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:03.709 14:27:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # 
valid_exec_arg ns_is_visible 0x1 00:14:03.709 14:27:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:03.709 14:27:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:03.709 14:27:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:03.709 14:27:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:03.709 14:27:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:03.709 14:27:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:03.709 14:27:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:03.709 14:27:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:03.709 14:27:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:03.709 14:27:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:03.709 14:27:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:03.709 14:27:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:03.709 14:27:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:03.709 14:27:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:03.709 14:27:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:03.709 14:27:44 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:14:03.709 14:27:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:03.709 14:27:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:03.709 [ 0]:0x2 00:14:03.709 14:27:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:03.709 14:27:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:03.709 14:27:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=090eafdf1af447caab74212c41b50d65 00:14:03.709 14:27:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 090eafdf1af447caab74212c41b50d65 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:03.709 14:27:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:14:03.709 14:27:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:03.970 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:03.970 14:27:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=3333170 00:14:03.970 14:27:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:14:03.970 14:27:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:14:03.970 14:27:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 3333170 /var/tmp/host.sock 00:14:03.970 14:27:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 3333170 ']' 00:14:03.970 
14:27:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:14:03.970 14:27:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:03.970 14:27:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:03.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:03.970 14:27:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:03.970 14:27:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:03.970 [2024-10-14 14:27:44.518426] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:14:03.970 [2024-10-14 14:27:44.518481] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3333170 ] 00:14:03.970 [2024-10-14 14:27:44.596668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:03.971 [2024-10-14 14:27:44.632116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:04.911 14:27:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:04.911 14:27:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:14:04.911 14:27:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:04.911 14:27:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:04.911 14:27:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid aeed4f7e-4dbf-474c-ad2f-4f8d7426591a 00:14:04.911 14:27:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:14:04.911 14:27:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g AEED4F7E4DBF474CAD2F4F8D7426591A -i 00:14:05.172 14:27:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 53a5e3f9-0a83-49b3-b5c9-5c73d79e3c7e 00:14:05.172 14:27:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:14:05.172 14:27:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 53A5E3F90A8349B3B5C95C73D79E3C7E -i 00:14:05.433 14:27:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:05.433 14:27:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:14:05.694 14:27:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:05.694 14:27:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:05.955 nvme0n1 00:14:05.955 14:27:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:05.955 14:27:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:06.525 nvme1n2 00:14:06.525 14:27:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:14:06.525 14:27:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:06.525 14:27:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:14:06.525 14:27:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:14:06.525 14:27:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:14:06.525 14:27:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:14:06.786 14:27:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:14:06.786 14:27:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:14:06.786 14:27:47 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:14:06.786 14:27:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ aeed4f7e-4dbf-474c-ad2f-4f8d7426591a == \a\e\e\d\4\f\7\e\-\4\d\b\f\-\4\7\4\c\-\a\d\2\f\-\4\f\8\d\7\4\2\6\5\9\1\a ]] 00:14:06.786 14:27:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:14:06.786 14:27:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:14:06.786 14:27:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:14:07.047 14:27:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 53a5e3f9-0a83-49b3-b5c9-5c73d79e3c7e == \5\3\a\5\e\3\f\9\-\0\a\8\3\-\4\9\b\3\-\b\5\c\9\-\5\c\7\3\d\7\9\e\3\c\7\e ]] 00:14:07.047 14:27:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 3333170 00:14:07.047 14:27:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 3333170 ']' 00:14:07.047 14:27:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 3333170 00:14:07.047 14:27:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:14:07.047 14:27:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:07.047 14:27:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3333170 00:14:07.047 14:27:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:07.047 14:27:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:07.047 14:27:47 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3333170' 00:14:07.047 killing process with pid 3333170 00:14:07.047 14:27:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 3333170 00:14:07.047 14:27:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 3333170 00:14:07.308 14:27:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:07.569 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:14:07.569 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:14:07.569 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@514 -- # nvmfcleanup 00:14:07.569 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:14:07.569 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:07.569 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:14:07.569 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:07.569 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:07.569 rmmod nvme_tcp 00:14:07.569 rmmod nvme_fabrics 00:14:07.569 rmmod nvme_keyring 00:14:07.569 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:07.569 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:14:07.569 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:14:07.569 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@515 -- # '[' -n 
3330711 ']' 00:14:07.569 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # killprocess 3330711 00:14:07.569 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 3330711 ']' 00:14:07.569 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 3330711 00:14:07.569 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:14:07.569 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:07.569 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3330711 00:14:07.569 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:07.569 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:07.569 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3330711' 00:14:07.569 killing process with pid 3330711 00:14:07.569 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 3330711 00:14:07.569 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 3330711 00:14:07.830 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:14:07.830 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:14:07.830 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:14:07.830 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:14:07.830 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # iptables-save 00:14:07.830 14:27:48 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:14:07.830 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # iptables-restore 00:14:07.830 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:07.830 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:07.830 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:07.830 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:07.830 14:27:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:09.744 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:09.744 00:14:09.744 real 0m25.687s 00:14:09.744 user 0m25.888s 00:14:09.744 sys 0m7.809s 00:14:09.744 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:09.744 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:09.744 ************************************ 00:14:09.744 END TEST nvmf_ns_masking 00:14:09.744 ************************************ 00:14:09.744 14:27:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:14:09.744 14:27:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:09.744 14:27:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:09.744 14:27:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:09.744 14:27:50 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:14:10.006 ************************************ 00:14:10.006 START TEST nvmf_nvme_cli 00:14:10.006 ************************************ 00:14:10.006 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:10.006 * Looking for test storage... 00:14:10.006 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:10.006 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:10.006 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lcov --version 00:14:10.006 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:10.006 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:10.006 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:10.006 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:10.006 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:10.006 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:14:10.006 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:14:10.006 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:14:10.006 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:14:10.006 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:14:10.006 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:14:10.006 14:27:50 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:14:10.006 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:10.006 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:14:10.006 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:14:10.006 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:10.006 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:10.006 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:14:10.006 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:14:10.007 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:10.007 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:14:10.007 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:14:10.007 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:14:10.007 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:14:10.007 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:10.007 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:14:10.007 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:14:10.007 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:10.007 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:10.007 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
scripts/common.sh@368 -- # return 0 00:14:10.007 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:10.007 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:10.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:10.007 --rc genhtml_branch_coverage=1 00:14:10.007 --rc genhtml_function_coverage=1 00:14:10.007 --rc genhtml_legend=1 00:14:10.007 --rc geninfo_all_blocks=1 00:14:10.007 --rc geninfo_unexecuted_blocks=1 00:14:10.007 00:14:10.007 ' 00:14:10.007 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:10.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:10.007 --rc genhtml_branch_coverage=1 00:14:10.007 --rc genhtml_function_coverage=1 00:14:10.007 --rc genhtml_legend=1 00:14:10.007 --rc geninfo_all_blocks=1 00:14:10.007 --rc geninfo_unexecuted_blocks=1 00:14:10.007 00:14:10.007 ' 00:14:10.007 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:10.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:10.007 --rc genhtml_branch_coverage=1 00:14:10.007 --rc genhtml_function_coverage=1 00:14:10.007 --rc genhtml_legend=1 00:14:10.007 --rc geninfo_all_blocks=1 00:14:10.007 --rc geninfo_unexecuted_blocks=1 00:14:10.007 00:14:10.007 ' 00:14:10.007 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:10.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:10.007 --rc genhtml_branch_coverage=1 00:14:10.007 --rc genhtml_function_coverage=1 00:14:10.007 --rc genhtml_legend=1 00:14:10.007 --rc geninfo_all_blocks=1 00:14:10.007 --rc geninfo_unexecuted_blocks=1 00:14:10.007 00:14:10.007 ' 00:14:10.007 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:10.007 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:14:10.007 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:10.007 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:10.007 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:10.007 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:10.007 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:10.007 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:10.007 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:10.007 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:10.007 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:10.007 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:10.007 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:10.007 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:10.007 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:10.007 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:10.007 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:14:10.007 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:10.007 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:10.007 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:14:10.007 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:10.007 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:10.007 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:10.007 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.007 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.007 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.007 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:10.007 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.007 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:14:10.007 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:10.007 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:10.007 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:10.007 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:10.007 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:10.007 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:10.007 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:10.007 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:10.007 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:10.007 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:10.007 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:10.007 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:10.007 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:10.007 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:14:10.007 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:14:10.007 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:10.007 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # prepare_net_devs 00:14:10.007 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@436 -- # local -g is_hw=no 00:14:10.007 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # remove_spdk_ns 00:14:10.007 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:10.007 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:10.007 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:10.007 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:14:10.007 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:14:10.007 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:14:10.007 14:27:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:18.152 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:18.152 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:14:18.152 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:18.152 
14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:18.152 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:18.152 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:18.152 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:18.152 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:14:18.152 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:18.152 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:14:18.152 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:14:18.152 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:14:18.152 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:14:18.152 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:14:18.152 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:14:18.152 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:18.152 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:18.152 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:18.152 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:18.152 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:18.152 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:18.152 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:18.152 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:18.152 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:18.152 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:18.152 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:18.152 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:18.152 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:18.152 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:18.152 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:18.152 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:18.152 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:18.152 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:18.152 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:18.152 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:18.152 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:18.152 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:18.152 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:18.152 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:18.152 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:18.152 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:18.152 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:18.152 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:18.152 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:18.152 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:18.152 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:18.152 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:18.152 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:18.152 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:18.152 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:18.152 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:18.152 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:18.152 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:18.152 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:18.153 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:18.153 14:27:57 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:18.153 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:18.153 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:18.153 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:18.153 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:18.153 Found net devices under 0000:31:00.0: cvl_0_0 00:14:18.153 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:18.153 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:18.153 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:18.153 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:18.153 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:18.153 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:18.153 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:18.153 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:18.153 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:18.153 Found net devices under 0000:31:00.1: cvl_0_1 00:14:18.153 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:18.153 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@430 -- # (( 2 == 0 )) 00:14:18.153 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # is_hw=yes 00:14:18.153 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:14:18.153 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:14:18.153 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:14:18.153 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:18.153 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:18.153 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:18.153 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:18.153 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:18.153 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:18.153 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:18.153 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:18.153 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:18.153 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:18.153 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:18.153 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:18.153 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # 
ip -4 addr flush cvl_0_1 00:14:18.153 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:18.153 14:27:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:18.153 14:27:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:18.153 14:27:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:18.153 14:27:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:18.153 14:27:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:18.153 14:27:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:18.153 14:27:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:18.153 14:27:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:18.153 14:27:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:18.153 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:18.153 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.676 ms 00:14:18.153 00:14:18.153 --- 10.0.0.2 ping statistics --- 00:14:18.153 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:18.153 rtt min/avg/max/mdev = 0.676/0.676/0.676/0.000 ms 00:14:18.153 14:27:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:18.153 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:18.153 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:14:18.153 00:14:18.153 --- 10.0.0.1 ping statistics --- 00:14:18.153 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:18.153 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:14:18.153 14:27:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:18.153 14:27:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # return 0 00:14:18.153 14:27:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:14:18.153 14:27:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:18.153 14:27:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:14:18.153 14:27:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:14:18.153 14:27:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:18.153 14:27:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:14:18.153 14:27:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:14:18.153 14:27:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:18.153 14:27:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:18.153 14:27:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:18.153 14:27:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:18.153 14:27:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # nvmfpid=3338217 00:14:18.153 14:27:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # waitforlisten 3338217 00:14:18.153 14:27:58 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:18.153 14:27:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 3338217 ']' 00:14:18.153 14:27:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:18.153 14:27:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:18.153 14:27:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:18.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:18.153 14:27:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:18.153 14:27:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:18.153 [2024-10-14 14:27:58.345339] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:14:18.153 [2024-10-14 14:27:58.345406] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:18.153 [2024-10-14 14:27:58.419463] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:18.153 [2024-10-14 14:27:58.463893] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:18.153 [2024-10-14 14:27:58.463931] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:18.153 [2024-10-14 14:27:58.463939] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:18.153 [2024-10-14 14:27:58.463946] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:18.153 [2024-10-14 14:27:58.463952] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:18.153 [2024-10-14 14:27:58.465630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:18.153 [2024-10-14 14:27:58.465749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:18.153 [2024-10-14 14:27:58.465908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:18.153 [2024-10-14 14:27:58.465909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:18.724 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:18.724 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:14:18.724 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:18.724 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:18.724 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:18.724 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:18.724 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:18.724 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.724 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:18.724 [2024-10-14 14:27:59.200859] tcp.c: 738:nvmf_tcp_create: 
*NOTICE*: *** TCP Transport Init *** 00:14:18.724 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.724 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:18.724 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.724 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:18.724 Malloc0 00:14:18.724 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.724 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:18.724 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.724 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:18.724 Malloc1 00:14:18.724 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.724 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:18.724 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.724 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:18.724 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.724 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:18.724 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.724 14:27:59 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:18.724 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.724 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:18.724 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.724 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:18.724 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.724 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:18.724 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.724 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:18.724 [2024-10-14 14:27:59.307834] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:18.724 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.724 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:18.724 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.724 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:18.724 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.724 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:14:18.984 00:14:18.984 Discovery Log Number of Records 2, Generation counter 2 00:14:18.984 =====Discovery Log Entry 0====== 00:14:18.984 trtype: tcp 00:14:18.984 adrfam: ipv4 00:14:18.984 subtype: current discovery subsystem 00:14:18.984 treq: not required 00:14:18.984 portid: 0 00:14:18.985 trsvcid: 4420 00:14:18.985 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:18.985 traddr: 10.0.0.2 00:14:18.985 eflags: explicit discovery connections, duplicate discovery information 00:14:18.985 sectype: none 00:14:18.985 =====Discovery Log Entry 1====== 00:14:18.985 trtype: tcp 00:14:18.985 adrfam: ipv4 00:14:18.985 subtype: nvme subsystem 00:14:18.985 treq: not required 00:14:18.985 portid: 0 00:14:18.985 trsvcid: 4420 00:14:18.985 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:18.985 traddr: 10.0.0.2 00:14:18.985 eflags: none 00:14:18.985 sectype: none 00:14:18.985 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:18.985 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:18.985 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:14:18.985 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:18.985 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:14:18.985 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:14:18.985 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:18.985 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:14:18.985 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 
00:14:18.985 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:18.985 14:27:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:20.972 14:28:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:20.972 14:28:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:14:20.972 14:28:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:20.972 14:28:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:14:20.972 14:28:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:14:20.972 14:28:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:14:22.411 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:22.411 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:22.411 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:22.411 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:14:22.411 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:22.411 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:14:22.411 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 
00:14:22.411 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:14:22.672 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:22.672 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:14:22.672 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:14:22.672 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:22.673 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:14:22.673 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:22.673 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:22.673 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n1 00:14:22.673 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:22.673 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:22.673 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n2 00:14:22.673 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:22.673 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:14:22.673 /dev/nvme0n2 ]] 00:14:22.673 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:22.673 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:22.673 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:14:22.673 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev 
_ 00:14:22.673 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:14:22.673 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:14:22.673 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:22.673 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:14:22.673 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:22.673 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:22.673 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n1 00:14:22.673 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:22.673 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:22.673 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n2 00:14:22.673 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:22.673 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:22.673 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:22.673 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:22.673 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:22.673 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:14:22.673 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:22.673 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:22.673 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:22.673 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:22.673 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:14:22.673 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:22.673 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:22.673 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.673 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:22.673 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.673 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:22.673 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:22.673 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@514 -- # nvmfcleanup 00:14:22.673 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:14:22.673 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:22.673 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:14:22.673 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:22.673 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:22.673 rmmod nvme_tcp 00:14:22.673 rmmod nvme_fabrics 00:14:22.934 rmmod 
nvme_keyring 00:14:22.934 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:22.934 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:14:22.934 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:14:22.934 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@515 -- # '[' -n 3338217 ']' 00:14:22.934 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # killprocess 3338217 00:14:22.934 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 3338217 ']' 00:14:22.934 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 3338217 00:14:22.934 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:14:22.934 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:22.934 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3338217 00:14:22.934 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:22.934 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:22.934 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3338217' 00:14:22.934 killing process with pid 3338217 00:14:22.934 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 3338217 00:14:22.934 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 3338217 00:14:22.934 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:14:22.934 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:14:22.934 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:14:22.934 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:14:22.934 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # iptables-restore 00:14:22.934 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # iptables-save 00:14:22.934 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:14:23.195 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:23.195 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:23.195 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:23.195 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:23.195 14:28:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:25.107 14:28:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:25.107 00:14:25.107 real 0m15.257s 00:14:25.107 user 0m22.985s 00:14:25.107 sys 0m6.320s 00:14:25.107 14:28:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:25.107 14:28:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:25.107 ************************************ 00:14:25.107 END TEST nvmf_nvme_cli 00:14:25.107 ************************************ 00:14:25.107 14:28:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:14:25.107 14:28:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:25.107 14:28:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:25.107 14:28:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:25.107 14:28:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:25.107 ************************************ 00:14:25.107 START TEST nvmf_vfio_user 00:14:25.107 ************************************ 00:14:25.107 14:28:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:25.369 * Looking for test storage... 00:14:25.369 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:25.369 14:28:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:25.369 14:28:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lcov --version 00:14:25.369 14:28:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:25.369 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:25.369 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:25.369 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:25.369 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:25.369 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:14:25.369 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:14:25.369 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 
-- # IFS=.-: 00:14:25.369 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:14:25.369 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:14:25.369 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:14:25.369 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:14:25.369 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:25.369 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:14:25.369 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:14:25.369 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:25.369 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:25.369 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:14:25.369 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:14:25.369 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:25.369 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:14:25.369 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:14:25.369 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:14:25.369 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:14:25.369 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:25.369 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:14:25.369 14:28:06 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:14:25.369 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:25.369 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:25.369 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:14:25.370 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:25.370 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:25.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:25.370 --rc genhtml_branch_coverage=1 00:14:25.370 --rc genhtml_function_coverage=1 00:14:25.370 --rc genhtml_legend=1 00:14:25.370 --rc geninfo_all_blocks=1 00:14:25.370 --rc geninfo_unexecuted_blocks=1 00:14:25.370 00:14:25.370 ' 00:14:25.370 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:25.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:25.370 --rc genhtml_branch_coverage=1 00:14:25.370 --rc genhtml_function_coverage=1 00:14:25.370 --rc genhtml_legend=1 00:14:25.370 --rc geninfo_all_blocks=1 00:14:25.370 --rc geninfo_unexecuted_blocks=1 00:14:25.370 00:14:25.370 ' 00:14:25.370 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:25.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:25.370 --rc genhtml_branch_coverage=1 00:14:25.370 --rc genhtml_function_coverage=1 00:14:25.370 --rc genhtml_legend=1 00:14:25.370 --rc geninfo_all_blocks=1 00:14:25.370 --rc geninfo_unexecuted_blocks=1 00:14:25.370 00:14:25.370 ' 00:14:25.370 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # LCOV='lcov 
00:14:25.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:25.370 --rc genhtml_branch_coverage=1 00:14:25.370 --rc genhtml_function_coverage=1 00:14:25.370 --rc genhtml_legend=1 00:14:25.370 --rc geninfo_all_blocks=1 00:14:25.370 --rc geninfo_unexecuted_blocks=1 00:14:25.370 00:14:25.370 ' 00:14:25.370 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:25.370 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:14:25.370 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:25.370 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:25.370 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:25.370 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:25.370 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:25.370 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:25.370 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:25.370 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:25.370 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:25.370 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:25.370 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:25.370 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:25.370 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:25.370 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:25.370 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:25.370 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:25.370 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:25.370 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:14:25.370 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:25.370 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:25.370 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:25.370 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.370 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.370 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.370 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:25.370 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.370 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:14:25.370 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:25.370 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:25.370 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:25.370 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:25.370 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:25.370 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:25.370 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:25.370 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:25.370 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:25.370 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:25.370 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:25.370 14:28:06 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:25.370 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:25.370 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:25.370 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:25.370 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:25.370 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:25.370 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:25.370 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:25.370 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:25.370 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3340007 00:14:25.370 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3340007' 00:14:25.370 Process pid: 3340007 00:14:25.370 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:25.370 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3340007 00:14:25.370 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:25.370 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' 
-z 3340007 ']' 00:14:25.370 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:25.370 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:25.370 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:25.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:25.370 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:25.370 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:25.632 [2024-10-14 14:28:06.126058] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:14:25.632 [2024-10-14 14:28:06.126115] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:25.632 [2024-10-14 14:28:06.191290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:25.632 [2024-10-14 14:28:06.229157] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:25.632 [2024-10-14 14:28:06.229191] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:25.632 [2024-10-14 14:28:06.229199] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:25.632 [2024-10-14 14:28:06.229205] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:25.632 [2024-10-14 14:28:06.229212] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:25.632 [2024-10-14 14:28:06.230769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:25.632 [2024-10-14 14:28:06.230884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:25.632 [2024-10-14 14:28:06.231038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:25.632 [2024-10-14 14:28:06.231038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:26.203 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:26.203 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:14:26.203 14:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:27.590 14:28:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:27.590 14:28:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:27.590 14:28:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:27.590 14:28:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:27.590 14:28:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:27.590 14:28:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:27.590 Malloc1 00:14:27.850 14:28:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:27.850 14:28:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:28.111 14:28:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:28.371 14:28:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:28.371 14:28:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:28.371 14:28:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:28.371 Malloc2 00:14:28.371 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:28.632 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:28.892 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:28.892 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:28.892 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:28.892 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:14:28.892 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:28.892 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:28.892 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:29.155 [2024-10-14 14:28:09.641444] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:14:29.155 [2024-10-14 14:28:09.641484] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3340704 ] 00:14:29.156 [2024-10-14 14:28:09.672688] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:29.156 [2024-10-14 14:28:09.681334] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:29.156 [2024-10-14 14:28:09.681357] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7eff53525000 00:14:29.156 [2024-10-14 14:28:09.682333] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:29.156 [2024-10-14 14:28:09.683327] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:29.156 [2024-10-14 14:28:09.684326] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:29.156 [2024-10-14 14:28:09.685338] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:29.156 [2024-10-14 14:28:09.686341] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:29.156 [2024-10-14 14:28:09.687350] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:29.156 [2024-10-14 14:28:09.688353] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:29.156 [2024-10-14 14:28:09.689362] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:29.156 [2024-10-14 14:28:09.690376] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:29.156 [2024-10-14 14:28:09.690386] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7eff5351a000 00:14:29.156 [2024-10-14 14:28:09.691713] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:29.156 [2024-10-14 14:28:09.708631] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:29.156 [2024-10-14 14:28:09.708658] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:14:29.156 [2024-10-14 14:28:09.713503] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:29.156 
[2024-10-14 14:28:09.713553] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:29.156 [2024-10-14 14:28:09.713643] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:14:29.156 [2024-10-14 14:28:09.713662] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:14:29.156 [2024-10-14 14:28:09.713668] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:14:29.156 [2024-10-14 14:28:09.714505] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:29.156 [2024-10-14 14:28:09.714515] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:14:29.156 [2024-10-14 14:28:09.714523] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:14:29.156 [2024-10-14 14:28:09.715511] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:29.156 [2024-10-14 14:28:09.715519] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:14:29.156 [2024-10-14 14:28:09.715531] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:14:29.156 [2024-10-14 14:28:09.716511] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:29.156 [2024-10-14 14:28:09.716520] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:29.156 [2024-10-14 14:28:09.717517] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:14:29.156 [2024-10-14 14:28:09.717526] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:14:29.156 [2024-10-14 14:28:09.717531] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:14:29.156 [2024-10-14 14:28:09.717538] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:29.156 [2024-10-14 14:28:09.717644] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:14:29.156 [2024-10-14 14:28:09.717649] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:29.156 [2024-10-14 14:28:09.717654] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:29.156 [2024-10-14 14:28:09.718523] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:29.156 [2024-10-14 14:28:09.719522] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:29.156 [2024-10-14 14:28:09.720534] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:29.156 [2024-10-14 14:28:09.721529] vfio_user.c:2836:enable_ctrlr: 
*NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:29.156 [2024-10-14 14:28:09.721594] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:29.156 [2024-10-14 14:28:09.722547] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:29.156 [2024-10-14 14:28:09.722555] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:29.156 [2024-10-14 14:28:09.722560] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:14:29.156 [2024-10-14 14:28:09.722582] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:14:29.156 [2024-10-14 14:28:09.722590] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:14:29.156 [2024-10-14 14:28:09.722607] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:29.156 [2024-10-14 14:28:09.722612] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:29.156 [2024-10-14 14:28:09.722616] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:29.156 [2024-10-14 14:28:09.722630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:29.156 [2024-10-14 14:28:09.722667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:29.156 [2024-10-14 
14:28:09.722679] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:14:29.156 [2024-10-14 14:28:09.722685] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:14:29.156 [2024-10-14 14:28:09.722689] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:14:29.156 [2024-10-14 14:28:09.722694] nvme_ctrlr.c:2095:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:29.156 [2024-10-14 14:28:09.722699] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:14:29.156 [2024-10-14 14:28:09.722703] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:14:29.156 [2024-10-14 14:28:09.722708] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:14:29.156 [2024-10-14 14:28:09.722716] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:14:29.156 [2024-10-14 14:28:09.722726] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:29.156 [2024-10-14 14:28:09.722736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:29.156 [2024-10-14 14:28:09.722748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:29.156 [2024-10-14 14:28:09.722757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 
cdw10:00000000 cdw11:00000000 00:14:29.156 [2024-10-14 14:28:09.722765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:29.156 [2024-10-14 14:28:09.722774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:29.156 [2024-10-14 14:28:09.722779] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:29.156 [2024-10-14 14:28:09.722788] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:29.156 [2024-10-14 14:28:09.722798] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:29.156 [2024-10-14 14:28:09.722805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:29.156 [2024-10-14 14:28:09.722811] nvme_ctrlr.c:3034:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:14:29.156 [2024-10-14 14:28:09.722816] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:29.156 [2024-10-14 14:28:09.722823] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:14:29.156 [2024-10-14 14:28:09.722833] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:14:29.156 [2024-10-14 14:28:09.722842] nvme_qpair.c: 213:nvme_admin_qpair_print_command: 
*NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:29.156 [2024-10-14 14:28:09.722854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:29.156 [2024-10-14 14:28:09.722916] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:14:29.156 [2024-10-14 14:28:09.722926] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:14:29.156 [2024-10-14 14:28:09.722934] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:29.156 [2024-10-14 14:28:09.722938] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:29.156 [2024-10-14 14:28:09.722942] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:29.156 [2024-10-14 14:28:09.722948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:29.156 [2024-10-14 14:28:09.722958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:29.156 [2024-10-14 14:28:09.722967] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:14:29.156 [2024-10-14 14:28:09.722979] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:14:29.157 [2024-10-14 14:28:09.722987] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:14:29.157 [2024-10-14 14:28:09.722994] 
nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:29.157 [2024-10-14 14:28:09.722999] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:29.157 [2024-10-14 14:28:09.723002] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:29.157 [2024-10-14 14:28:09.723008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:29.157 [2024-10-14 14:28:09.723026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:29.157 [2024-10-14 14:28:09.723039] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:29.157 [2024-10-14 14:28:09.723047] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:29.157 [2024-10-14 14:28:09.723054] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:29.157 [2024-10-14 14:28:09.723059] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:29.157 [2024-10-14 14:28:09.723067] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:29.157 [2024-10-14 14:28:09.723073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:29.157 [2024-10-14 14:28:09.723082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:29.157 [2024-10-14 14:28:09.723091] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:29.157 [2024-10-14 14:28:09.723098] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:14:29.157 [2024-10-14 14:28:09.723106] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:14:29.157 [2024-10-14 14:28:09.723112] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:14:29.157 [2024-10-14 14:28:09.723117] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:29.157 [2024-10-14 14:28:09.723124] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:14:29.157 [2024-10-14 14:28:09.723130] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:14:29.157 [2024-10-14 14:28:09.723135] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:14:29.157 [2024-10-14 14:28:09.723140] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:14:29.157 [2024-10-14 14:28:09.723158] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:29.157 [2024-10-14 14:28:09.723165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:29.157 [2024-10-14 14:28:09.723177] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:29.157 [2024-10-14 14:28:09.723187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:29.157 [2024-10-14 14:28:09.723198] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:29.157 [2024-10-14 14:28:09.723211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:29.157 [2024-10-14 14:28:09.723222] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:29.157 [2024-10-14 14:28:09.723234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:29.157 [2024-10-14 14:28:09.723248] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:29.157 [2024-10-14 14:28:09.723253] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:29.157 [2024-10-14 14:28:09.723256] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:29.157 [2024-10-14 14:28:09.723260] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:29.157 [2024-10-14 14:28:09.723263] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:29.157 [2024-10-14 14:28:09.723270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:29.157 [2024-10-14 14:28:09.723277] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:29.157 [2024-10-14 
14:28:09.723282] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:29.157 [2024-10-14 14:28:09.723285] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:29.157 [2024-10-14 14:28:09.723291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:29.157 [2024-10-14 14:28:09.723298] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:29.157 [2024-10-14 14:28:09.723303] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:29.157 [2024-10-14 14:28:09.723306] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:29.157 [2024-10-14 14:28:09.723312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:29.157 [2024-10-14 14:28:09.723320] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:29.157 [2024-10-14 14:28:09.723325] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:29.157 [2024-10-14 14:28:09.723330] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:29.157 [2024-10-14 14:28:09.723336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:29.157 [2024-10-14 14:28:09.723343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:29.157 [2024-10-14 14:28:09.723355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 
00:14:29.157 [2024-10-14 14:28:09.723366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:29.157 [2024-10-14 14:28:09.723373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:29.157 ===================================================== 00:14:29.157 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:29.157 ===================================================== 00:14:29.157 Controller Capabilities/Features 00:14:29.157 ================================ 00:14:29.157 Vendor ID: 4e58 00:14:29.157 Subsystem Vendor ID: 4e58 00:14:29.157 Serial Number: SPDK1 00:14:29.157 Model Number: SPDK bdev Controller 00:14:29.157 Firmware Version: 25.01 00:14:29.157 Recommended Arb Burst: 6 00:14:29.157 IEEE OUI Identifier: 8d 6b 50 00:14:29.157 Multi-path I/O 00:14:29.157 May have multiple subsystem ports: Yes 00:14:29.157 May have multiple controllers: Yes 00:14:29.157 Associated with SR-IOV VF: No 00:14:29.157 Max Data Transfer Size: 131072 00:14:29.157 Max Number of Namespaces: 32 00:14:29.157 Max Number of I/O Queues: 127 00:14:29.157 NVMe Specification Version (VS): 1.3 00:14:29.157 NVMe Specification Version (Identify): 1.3 00:14:29.157 Maximum Queue Entries: 256 00:14:29.157 Contiguous Queues Required: Yes 00:14:29.157 Arbitration Mechanisms Supported 00:14:29.157 Weighted Round Robin: Not Supported 00:14:29.157 Vendor Specific: Not Supported 00:14:29.157 Reset Timeout: 15000 ms 00:14:29.157 Doorbell Stride: 4 bytes 00:14:29.157 NVM Subsystem Reset: Not Supported 00:14:29.157 Command Sets Supported 00:14:29.157 NVM Command Set: Supported 00:14:29.157 Boot Partition: Not Supported 00:14:29.157 Memory Page Size Minimum: 4096 bytes 00:14:29.157 Memory Page Size Maximum: 4096 bytes 00:14:29.157 Persistent Memory Region: Not Supported 00:14:29.157 Optional Asynchronous Events 
Supported 00:14:29.157 Namespace Attribute Notices: Supported 00:14:29.157 Firmware Activation Notices: Not Supported 00:14:29.157 ANA Change Notices: Not Supported 00:14:29.157 PLE Aggregate Log Change Notices: Not Supported 00:14:29.157 LBA Status Info Alert Notices: Not Supported 00:14:29.157 EGE Aggregate Log Change Notices: Not Supported 00:14:29.157 Normal NVM Subsystem Shutdown event: Not Supported 00:14:29.157 Zone Descriptor Change Notices: Not Supported 00:14:29.157 Discovery Log Change Notices: Not Supported 00:14:29.157 Controller Attributes 00:14:29.157 128-bit Host Identifier: Supported 00:14:29.157 Non-Operational Permissive Mode: Not Supported 00:14:29.157 NVM Sets: Not Supported 00:14:29.157 Read Recovery Levels: Not Supported 00:14:29.157 Endurance Groups: Not Supported 00:14:29.157 Predictable Latency Mode: Not Supported 00:14:29.157 Traffic Based Keep ALive: Not Supported 00:14:29.157 Namespace Granularity: Not Supported 00:14:29.157 SQ Associations: Not Supported 00:14:29.157 UUID List: Not Supported 00:14:29.157 Multi-Domain Subsystem: Not Supported 00:14:29.157 Fixed Capacity Management: Not Supported 00:14:29.157 Variable Capacity Management: Not Supported 00:14:29.157 Delete Endurance Group: Not Supported 00:14:29.157 Delete NVM Set: Not Supported 00:14:29.157 Extended LBA Formats Supported: Not Supported 00:14:29.157 Flexible Data Placement Supported: Not Supported 00:14:29.157 00:14:29.157 Controller Memory Buffer Support 00:14:29.157 ================================ 00:14:29.157 Supported: No 00:14:29.157 00:14:29.157 Persistent Memory Region Support 00:14:29.157 ================================ 00:14:29.157 Supported: No 00:14:29.157 00:14:29.157 Admin Command Set Attributes 00:14:29.157 ============================ 00:14:29.157 Security Send/Receive: Not Supported 00:14:29.157 Format NVM: Not Supported 00:14:29.157 Firmware Activate/Download: Not Supported 00:14:29.157 Namespace Management: Not Supported 00:14:29.157 Device Self-Test: 
Not Supported 00:14:29.157 Directives: Not Supported 00:14:29.157 NVMe-MI: Not Supported 00:14:29.157 Virtualization Management: Not Supported 00:14:29.157 Doorbell Buffer Config: Not Supported 00:14:29.157 Get LBA Status Capability: Not Supported 00:14:29.157 Command & Feature Lockdown Capability: Not Supported 00:14:29.158 Abort Command Limit: 4 00:14:29.158 Async Event Request Limit: 4 00:14:29.158 Number of Firmware Slots: N/A 00:14:29.158 Firmware Slot 1 Read-Only: N/A 00:14:29.158 Firmware Activation Without Reset: N/A 00:14:29.158 Multiple Update Detection Support: N/A 00:14:29.158 Firmware Update Granularity: No Information Provided 00:14:29.158 Per-Namespace SMART Log: No 00:14:29.158 Asymmetric Namespace Access Log Page: Not Supported 00:14:29.158 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:14:29.158 Command Effects Log Page: Supported 00:14:29.158 Get Log Page Extended Data: Supported 00:14:29.158 Telemetry Log Pages: Not Supported 00:14:29.158 Persistent Event Log Pages: Not Supported 00:14:29.158 Supported Log Pages Log Page: May Support 00:14:29.158 Commands Supported & Effects Log Page: Not Supported 00:14:29.158 Feature Identifiers & Effects Log Page:May Support 00:14:29.158 NVMe-MI Commands & Effects Log Page: May Support 00:14:29.158 Data Area 4 for Telemetry Log: Not Supported 00:14:29.158 Error Log Page Entries Supported: 128 00:14:29.158 Keep Alive: Supported 00:14:29.158 Keep Alive Granularity: 10000 ms 00:14:29.158 00:14:29.158 NVM Command Set Attributes 00:14:29.158 ========================== 00:14:29.158 Submission Queue Entry Size 00:14:29.158 Max: 64 00:14:29.158 Min: 64 00:14:29.158 Completion Queue Entry Size 00:14:29.158 Max: 16 00:14:29.158 Min: 16 00:14:29.158 Number of Namespaces: 32 00:14:29.158 Compare Command: Supported 00:14:29.158 Write Uncorrectable Command: Not Supported 00:14:29.158 Dataset Management Command: Supported 00:14:29.158 Write Zeroes Command: Supported 00:14:29.158 Set Features Save Field: Not Supported 
00:14:29.158 Reservations: Not Supported 00:14:29.158 Timestamp: Not Supported 00:14:29.158 Copy: Supported 00:14:29.158 Volatile Write Cache: Present 00:14:29.158 Atomic Write Unit (Normal): 1 00:14:29.158 Atomic Write Unit (PFail): 1 00:14:29.158 Atomic Compare & Write Unit: 1 00:14:29.158 Fused Compare & Write: Supported 00:14:29.158 Scatter-Gather List 00:14:29.158 SGL Command Set: Supported (Dword aligned) 00:14:29.158 SGL Keyed: Not Supported 00:14:29.158 SGL Bit Bucket Descriptor: Not Supported 00:14:29.158 SGL Metadata Pointer: Not Supported 00:14:29.158 Oversized SGL: Not Supported 00:14:29.158 SGL Metadata Address: Not Supported 00:14:29.158 SGL Offset: Not Supported 00:14:29.158 Transport SGL Data Block: Not Supported 00:14:29.158 Replay Protected Memory Block: Not Supported 00:14:29.158 00:14:29.158 Firmware Slot Information 00:14:29.158 ========================= 00:14:29.158 Active slot: 1 00:14:29.158 Slot 1 Firmware Revision: 25.01 00:14:29.158 00:14:29.158 00:14:29.158 Commands Supported and Effects 00:14:29.158 ============================== 00:14:29.158 Admin Commands 00:14:29.158 -------------- 00:14:29.158 Get Log Page (02h): Supported 00:14:29.158 Identify (06h): Supported 00:14:29.158 Abort (08h): Supported 00:14:29.158 Set Features (09h): Supported 00:14:29.158 Get Features (0Ah): Supported 00:14:29.158 Asynchronous Event Request (0Ch): Supported 00:14:29.158 Keep Alive (18h): Supported 00:14:29.158 I/O Commands 00:14:29.158 ------------ 00:14:29.158 Flush (00h): Supported LBA-Change 00:14:29.158 Write (01h): Supported LBA-Change 00:14:29.158 Read (02h): Supported 00:14:29.158 Compare (05h): Supported 00:14:29.158 Write Zeroes (08h): Supported LBA-Change 00:14:29.158 Dataset Management (09h): Supported LBA-Change 00:14:29.158 Copy (19h): Supported LBA-Change 00:14:29.158 00:14:29.158 Error Log 00:14:29.158 ========= 00:14:29.158 00:14:29.158 Arbitration 00:14:29.158 =========== 00:14:29.158 Arbitration Burst: 1 00:14:29.158 00:14:29.158 Power 
Management 00:14:29.158 ================ 00:14:29.158 Number of Power States: 1 00:14:29.158 Current Power State: Power State #0 00:14:29.158 Power State #0: 00:14:29.158 Max Power: 0.00 W 00:14:29.158 Non-Operational State: Operational 00:14:29.158 Entry Latency: Not Reported 00:14:29.158 Exit Latency: Not Reported 00:14:29.158 Relative Read Throughput: 0 00:14:29.158 Relative Read Latency: 0 00:14:29.158 Relative Write Throughput: 0 00:14:29.158 Relative Write Latency: 0 00:14:29.158 Idle Power: Not Reported 00:14:29.158 Active Power: Not Reported 00:14:29.158 Non-Operational Permissive Mode: Not Supported 00:14:29.158 00:14:29.158 Health Information 00:14:29.158 ================== 00:14:29.158 Critical Warnings: 00:14:29.158 Available Spare Space: OK 00:14:29.158 Temperature: OK 00:14:29.158 Device Reliability: OK 00:14:29.158 Read Only: No 00:14:29.158 Volatile Memory Backup: OK 00:14:29.158 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:29.158 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:29.158 Available Spare: 0% 00:14:29.158 [2024-10-14 14:28:09.723474] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:29.158 [2024-10-14 14:28:09.723483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:29.158 [2024-10-14 14:28:09.723512] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:14:29.158 [2024-10-14 14:28:09.723522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.158 [2024-10-14 14:28:09.723529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.158 [2024-10-14 14:28:09.723536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.158 [2024-10-14 14:28:09.723542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:29.158 [2024-10-14 14:28:09.727069] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:29.158 [2024-10-14 14:28:09.727081] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:14:29.158 [2024-10-14 14:28:09.727575] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:29.158 [2024-10-14 14:28:09.727616] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:14:29.158 [2024-10-14 14:28:09.727622] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:14:29.158 [2024-10-14 14:28:09.728586] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:14:29.158 [2024-10-14 14:28:09.728597] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:14:29.158 [2024-10-14 14:28:09.728658] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:14:29.158 [2024-10-14 14:28:09.730608] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:29.158 Available Spare Threshold: 0% 00:14:29.158 Life Percentage Used: 0% 00:14:29.158 Data Units Read: 0 00:14:29.158 Data Units Written: 0 00:14:29.158 Host Read Commands: 0 00:14:29.158 Host Write Commands: 0 00:14:29.158 Controller Busy Time: 0 minutes 
00:14:29.158 Power Cycles: 0 00:14:29.158 Power On Hours: 0 hours 00:14:29.158 Unsafe Shutdowns: 0 00:14:29.158 Unrecoverable Media Errors: 0 00:14:29.158 Lifetime Error Log Entries: 0 00:14:29.158 Warning Temperature Time: 0 minutes 00:14:29.158 Critical Temperature Time: 0 minutes 00:14:29.158 00:14:29.158 Number of Queues 00:14:29.158 ================ 00:14:29.158 Number of I/O Submission Queues: 127 00:14:29.158 Number of I/O Completion Queues: 127 00:14:29.158 00:14:29.158 Active Namespaces 00:14:29.158 ================= 00:14:29.158 Namespace ID:1 00:14:29.158 Error Recovery Timeout: Unlimited 00:14:29.158 Command Set Identifier: NVM (00h) 00:14:29.158 Deallocate: Supported 00:14:29.158 Deallocated/Unwritten Error: Not Supported 00:14:29.158 Deallocated Read Value: Unknown 00:14:29.158 Deallocate in Write Zeroes: Not Supported 00:14:29.158 Deallocated Guard Field: 0xFFFF 00:14:29.158 Flush: Supported 00:14:29.158 Reservation: Supported 00:14:29.158 Namespace Sharing Capabilities: Multiple Controllers 00:14:29.158 Size (in LBAs): 131072 (0GiB) 00:14:29.158 Capacity (in LBAs): 131072 (0GiB) 00:14:29.158 Utilization (in LBAs): 131072 (0GiB) 00:14:29.158 NGUID: C04E796C1FE24E97AFADDD5B9C0C717B 00:14:29.158 UUID: c04e796c-1fe2-4e97-afad-dd5b9c0c717b 00:14:29.158 Thin Provisioning: Not Supported 00:14:29.158 Per-NS Atomic Units: Yes 00:14:29.158 Atomic Boundary Size (Normal): 0 00:14:29.158 Atomic Boundary Size (PFail): 0 00:14:29.158 Atomic Boundary Offset: 0 00:14:29.158 Maximum Single Source Range Length: 65535 00:14:29.158 Maximum Copy Length: 65535 00:14:29.158 Maximum Source Range Count: 1 00:14:29.158 NGUID/EUI64 Never Reused: No 00:14:29.158 Namespace Write Protected: No 00:14:29.158 Number of LBA Formats: 1 00:14:29.158 Current LBA Format: LBA Format #00 00:14:29.158 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:29.158 00:14:29.158 14:28:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:29.419 [2024-10-14 14:28:09.913693] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:34.706 Initializing NVMe Controllers 00:14:34.706 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:34.706 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:34.706 Initialization complete. Launching workers. 00:14:34.706 ======================================================== 00:14:34.706 Latency(us) 00:14:34.706 Device Information : IOPS MiB/s Average min max 00:14:34.706 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39974.72 156.15 3201.69 853.53 6788.06 00:14:34.706 ======================================================== 00:14:34.706 Total : 39974.72 156.15 3201.69 853.53 6788.06 00:14:34.706 00:14:34.706 [2024-10-14 14:28:14.934018] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:34.706 14:28:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:34.706 [2024-10-14 14:28:15.113869] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:39.991 Initializing NVMe Controllers 00:14:39.991 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:39.991 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:39.991 
Initialization complete. Launching workers. 00:14:39.991 ======================================================== 00:14:39.991 Latency(us) 00:14:39.991 Device Information : IOPS MiB/s Average min max 00:14:39.991 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15976.35 62.41 8017.41 6491.24 15965.06 00:14:39.991 ======================================================== 00:14:39.991 Total : 15976.35 62.41 8017.41 6491.24 15965.06 00:14:39.991 00:14:39.991 [2024-10-14 14:28:20.154046] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:39.991 14:28:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:39.991 [2024-10-14 14:28:20.342904] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:45.291 [2024-10-14 14:28:25.453436] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:45.291 Initializing NVMe Controllers 00:14:45.291 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:45.291 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:45.291 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:14:45.291 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:14:45.291 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:14:45.291 Initialization complete. Launching workers. 
00:14:45.291 Starting thread on core 2 00:14:45.291 Starting thread on core 3 00:14:45.291 Starting thread on core 1 00:14:45.291 14:28:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:14:45.291 [2024-10-14 14:28:25.716439] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:48.591 [2024-10-14 14:28:28.773578] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:48.592 Initializing NVMe Controllers 00:14:48.592 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:48.592 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:48.592 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:14:48.592 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:14:48.592 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:14:48.592 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:14:48.592 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:48.592 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:48.592 Initialization complete. Launching workers. 
00:14:48.592 Starting thread on core 1 with urgent priority queue 00:14:48.592 Starting thread on core 2 with urgent priority queue 00:14:48.592 Starting thread on core 3 with urgent priority queue 00:14:48.592 Starting thread on core 0 with urgent priority queue 00:14:48.592 SPDK bdev Controller (SPDK1 ) core 0: 13059.00 IO/s 7.66 secs/100000 ios 00:14:48.592 SPDK bdev Controller (SPDK1 ) core 1: 12053.67 IO/s 8.30 secs/100000 ios 00:14:48.592 SPDK bdev Controller (SPDK1 ) core 2: 8744.67 IO/s 11.44 secs/100000 ios 00:14:48.592 SPDK bdev Controller (SPDK1 ) core 3: 13175.00 IO/s 7.59 secs/100000 ios 00:14:48.592 ======================================================== 00:14:48.592 00:14:48.592 14:28:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:48.592 [2024-10-14 14:28:29.045578] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:48.592 Initializing NVMe Controllers 00:14:48.592 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:48.592 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:48.592 Namespace ID: 1 size: 0GB 00:14:48.592 Initialization complete. 00:14:48.592 INFO: using host memory buffer for IO 00:14:48.592 Hello world! 
00:14:48.592 [2024-10-14 14:28:29.079768] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:48.592 14:28:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:48.852 [2024-10-14 14:28:29.347421] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:49.794 Initializing NVMe Controllers 00:14:49.794 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:49.794 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:49.794 Initialization complete. Launching workers. 00:14:49.794 submit (in ns) avg, min, max = 8116.0, 3933.3, 7992409.2 00:14:49.794 complete (in ns) avg, min, max = 18581.0, 2381.7, 7988945.0 00:14:49.794 00:14:49.794 Submit histogram 00:14:49.794 ================ 00:14:49.794 Range in us Cumulative Count 00:14:49.794 3.920 - 3.947: 0.2595% ( 49) 00:14:49.794 3.947 - 3.973: 3.2521% ( 565) 00:14:49.794 3.973 - 4.000: 10.7574% ( 1417) 00:14:49.794 4.000 - 4.027: 21.6684% ( 2060) 00:14:49.794 4.027 - 4.053: 33.2892% ( 2194) 00:14:49.794 4.053 - 4.080: 46.4407% ( 2483) 00:14:49.795 4.080 - 4.107: 64.4386% ( 3398) 00:14:49.795 4.107 - 4.133: 80.0900% ( 2955) 00:14:49.795 4.133 - 4.160: 91.4301% ( 2141) 00:14:49.795 4.160 - 4.187: 96.4725% ( 952) 00:14:49.795 4.187 - 4.213: 98.4534% ( 374) 00:14:49.795 4.213 - 4.240: 99.2214% ( 145) 00:14:49.795 4.240 - 4.267: 99.4068% ( 35) 00:14:49.795 4.267 - 4.293: 99.4333% ( 5) 00:14:49.795 4.293 - 4.320: 99.4492% ( 3) 00:14:49.795 4.320 - 4.347: 99.4597% ( 2) 00:14:49.795 4.347 - 4.373: 99.4756% ( 3) 00:14:49.795 4.373 - 4.400: 99.4862% ( 2) 00:14:49.795 4.453 - 4.480: 99.4968% ( 2) 00:14:49.795 4.533 - 4.560: 99.5021% ( 1) 00:14:49.795 4.560 - 4.587: 99.5074% ( 1) 
00:14:49.795 4.800 - 4.827: 99.5180% ( 2) 00:14:49.795 4.987 - 5.013: 99.5233% ( 1) 00:14:49.795 5.040 - 5.067: 99.5286% ( 1) 00:14:49.795 5.120 - 5.147: 99.5339% ( 1) 00:14:49.795 5.253 - 5.280: 99.5392% ( 1) 00:14:49.795 5.280 - 5.307: 99.5445% ( 1) 00:14:49.795 5.387 - 5.413: 99.5498% ( 1) 00:14:49.795 5.547 - 5.573: 99.5551% ( 1) 00:14:49.795 5.680 - 5.707: 99.5604% ( 1) 00:14:49.795 5.733 - 5.760: 99.5657% ( 1) 00:14:49.795 5.840 - 5.867: 99.5710% ( 1) 00:14:49.795 5.893 - 5.920: 99.5763% ( 1) 00:14:49.795 5.920 - 5.947: 99.5816% ( 1) 00:14:49.795 6.000 - 6.027: 99.5869% ( 1) 00:14:49.795 6.053 - 6.080: 99.5975% ( 2) 00:14:49.795 6.080 - 6.107: 99.6028% ( 1) 00:14:49.795 6.107 - 6.133: 99.6081% ( 1) 00:14:49.795 6.160 - 6.187: 99.6133% ( 1) 00:14:49.795 6.560 - 6.587: 99.6186% ( 1) 00:14:49.795 6.747 - 6.773: 99.6239% ( 1) 00:14:49.795 6.827 - 6.880: 99.6292% ( 1) 00:14:49.795 7.040 - 7.093: 99.6451% ( 3) 00:14:49.795 7.093 - 7.147: 99.6504% ( 1) 00:14:49.795 7.200 - 7.253: 99.6557% ( 1) 00:14:49.795 7.253 - 7.307: 99.6610% ( 1) 00:14:49.795 7.360 - 7.413: 99.6663% ( 1) 00:14:49.795 7.413 - 7.467: 99.6716% ( 1) 00:14:49.795 7.467 - 7.520: 99.6822% ( 2) 00:14:49.795 7.573 - 7.627: 99.6928% ( 2) 00:14:49.795 7.627 - 7.680: 99.6981% ( 1) 00:14:49.795 7.733 - 7.787: 99.7034% ( 1) 00:14:49.795 7.787 - 7.840: 99.7087% ( 1) 00:14:49.795 7.893 - 7.947: 99.7193% ( 2) 00:14:49.795 8.107 - 8.160: 99.7352% ( 3) 00:14:49.795 8.160 - 8.213: 99.7511% ( 3) 00:14:49.795 8.213 - 8.267: 99.7617% ( 2) 00:14:49.795 8.373 - 8.427: 99.7722% ( 2) 00:14:49.795 8.480 - 8.533: 99.7775% ( 1) 00:14:49.795 8.533 - 8.587: 99.7828% ( 1) 00:14:49.795 8.640 - 8.693: 99.7934% ( 2) 00:14:49.795 8.693 - 8.747: 99.7987% ( 1) 00:14:49.795 8.800 - 8.853: 99.8040% ( 1) 00:14:49.795 8.853 - 8.907: 99.8199% ( 3) 00:14:49.795 8.907 - 8.960: 99.8305% ( 2) 00:14:49.795 8.960 - 9.013: 99.8464% ( 3) 00:14:49.795 9.173 - 9.227: 99.8517% ( 1) 00:14:49.795 9.333 - 9.387: 99.8570% ( 1) 00:14:49.795 9.387 - 
9.440: 99.8623% ( 1) 00:14:49.795 9.653 - 9.707: 99.8676% ( 1) 00:14:49.795 9.707 - 9.760: 99.8782% ( 2) 00:14:49.795 10.027 - 10.080: 99.8835% ( 1) 00:14:49.795 10.187 - 10.240: 99.8888% ( 1) 00:14:49.795 11.787 - 11.840: 99.8941% ( 1) 00:14:49.795 14.507 - 14.613: 99.8994% ( 1) 00:14:49.795 15.253 - 15.360: 99.9047% ( 1) 00:14:49.795 3986.773 - 4014.080: 99.9947% ( 17) 00:14:49.795 7973.547 - 8028.160: 100.0000% ( 1) 00:14:49.795 00:14:49.795 Complete histogram 00:14:49.795 ================== 00:14:49.795 [2024-10-14 14:28:30.362876] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:49.795 Range in us Cumulative Count 00:14:49.795 2.373 - 2.387: 0.0053% ( 1) 00:14:49.795 2.387 - 2.400: 0.7097% ( 133) 00:14:49.795 2.400 - 2.413: 0.8951% ( 35) 00:14:49.795 2.413 - 2.427: 0.9905% ( 18) 00:14:49.795 2.427 - 2.440: 1.0752% ( 16) 00:14:49.795 2.440 - 2.453: 38.3422% ( 7036) 00:14:49.795 2.453 - 2.467: 58.2627% ( 3761) 00:14:49.795 2.467 - 2.480: 70.4449% ( 2300) 00:14:49.795 2.480 - 2.493: 77.6960% ( 1369) 00:14:49.795 2.493 - 2.507: 80.4078% ( 512) 00:14:49.795 2.507 - 2.520: 83.2415% ( 535) 00:14:49.795 2.520 - 2.533: 89.4227% ( 1167) 00:14:49.795 2.533 - 2.547: 94.4333% ( 946) 00:14:49.795 2.547 - 2.560: 96.9915% ( 483) 00:14:49.795 2.560 - 2.573: 98.4746% ( 280) 00:14:49.795 2.573 - 2.587: 99.0678% ( 112) 00:14:49.795 2.587 - 2.600: 99.2214% ( 29) 00:14:49.795 2.600 - 2.613: 99.2744% ( 10) 00:14:49.795 2.613 - 2.627: 99.2903% ( 3) 00:14:49.795 2.627 - 2.640: 99.2956% ( 1) 00:14:49.795 2.653 - 2.667: 99.3008% ( 1) 00:14:49.795 2.720 - 2.733: 99.3061% ( 1) 00:14:49.795 2.840 - 2.853: 99.3114% ( 1) 00:14:49.795 2.880 - 2.893: 99.3167% ( 1) 00:14:49.795 3.040 - 3.053: 99.3220% ( 1) 00:14:49.795 5.467 - 5.493: 99.3273% ( 1) 00:14:49.795 5.547 - 5.573: 99.3326% ( 1) 00:14:49.795 5.573 - 5.600: 99.3379% ( 1) 00:14:49.795 5.600 - 5.627: 99.3432% ( 1) 00:14:49.795 5.787 - 5.813: 99.3485% ( 1) 00:14:49.795 5.813 -
5.840: 99.3591% ( 2) 00:14:49.795 5.867 - 5.893: 99.3750% ( 3) 00:14:49.795 5.920 - 5.947: 99.3803% ( 1) 00:14:49.795 5.947 - 5.973: 99.3856% ( 1) 00:14:49.795 6.027 - 6.053: 99.3962% ( 2) 00:14:49.795 6.080 - 6.107: 99.4015% ( 1) 00:14:49.795 6.107 - 6.133: 99.4121% ( 2) 00:14:49.795 6.187 - 6.213: 99.4174% ( 1) 00:14:49.795 6.293 - 6.320: 99.4227% ( 1) 00:14:49.795 6.320 - 6.347: 99.4280% ( 1) 00:14:49.795 6.373 - 6.400: 99.4333% ( 1) 00:14:49.795 6.400 - 6.427: 99.4386% ( 1) 00:14:49.795 6.427 - 6.453: 99.4492% ( 2) 00:14:49.795 6.453 - 6.480: 99.4544% ( 1) 00:14:49.795 6.480 - 6.507: 99.4597% ( 1) 00:14:49.795 6.560 - 6.587: 99.4650% ( 1) 00:14:49.795 6.640 - 6.667: 99.4703% ( 1) 00:14:49.795 6.720 - 6.747: 99.4756% ( 1) 00:14:49.795 6.747 - 6.773: 99.4862% ( 2) 00:14:49.795 6.773 - 6.800: 99.4968% ( 2) 00:14:49.795 6.827 - 6.880: 99.5021% ( 1) 00:14:49.795 6.880 - 6.933: 99.5074% ( 1) 00:14:49.795 7.040 - 7.093: 99.5127% ( 1) 00:14:49.795 7.147 - 7.200: 99.5180% ( 1) 00:14:49.795 7.253 - 7.307: 99.5233% ( 1) 00:14:49.795 7.307 - 7.360: 99.5286% ( 1) 00:14:49.795 7.413 - 7.467: 99.5339% ( 1) 00:14:49.795 7.520 - 7.573: 99.5392% ( 1) 00:14:49.795 7.680 - 7.733: 99.5445% ( 1) 00:14:49.795 7.733 - 7.787: 99.5498% ( 1) 00:14:49.795 7.893 - 7.947: 99.5551% ( 1) 00:14:49.795 8.160 - 8.213: 99.5657% ( 2) 00:14:49.795 8.427 - 8.480: 99.5710% ( 1) 00:14:49.795 8.480 - 8.533: 99.5763% ( 1) 00:14:49.795 14.080 - 14.187: 99.5816% ( 1) 00:14:49.795 14.293 - 14.400: 99.5869% ( 1) 00:14:49.795 14.720 - 14.827: 99.5922% ( 1) 00:14:49.795 15.573 - 15.680: 99.5975% ( 1) 00:14:49.795 162.133 - 162.987: 99.6028% ( 1) 00:14:49.795 3986.773 - 4014.080: 99.9947% ( 74) 00:14:49.795 7973.547 - 8028.160: 100.0000% ( 1) 00:14:49.795 00:14:49.795 14:28:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:14:49.795 14:28:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:49.795 14:28:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:14:49.795 14:28:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:14:49.795 14:28:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:50.057 [ 00:14:50.057 { 00:14:50.057 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:50.057 "subtype": "Discovery", 00:14:50.057 "listen_addresses": [], 00:14:50.057 "allow_any_host": true, 00:14:50.057 "hosts": [] 00:14:50.057 }, 00:14:50.057 { 00:14:50.057 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:50.057 "subtype": "NVMe", 00:14:50.057 "listen_addresses": [ 00:14:50.057 { 00:14:50.057 "trtype": "VFIOUSER", 00:14:50.057 "adrfam": "IPv4", 00:14:50.057 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:50.057 "trsvcid": "0" 00:14:50.057 } 00:14:50.057 ], 00:14:50.057 "allow_any_host": true, 00:14:50.057 "hosts": [], 00:14:50.057 "serial_number": "SPDK1", 00:14:50.057 "model_number": "SPDK bdev Controller", 00:14:50.057 "max_namespaces": 32, 00:14:50.057 "min_cntlid": 1, 00:14:50.057 "max_cntlid": 65519, 00:14:50.057 "namespaces": [ 00:14:50.057 { 00:14:50.057 "nsid": 1, 00:14:50.057 "bdev_name": "Malloc1", 00:14:50.057 "name": "Malloc1", 00:14:50.057 "nguid": "C04E796C1FE24E97AFADDD5B9C0C717B", 00:14:50.057 "uuid": "c04e796c-1fe2-4e97-afad-dd5b9c0c717b" 00:14:50.057 } 00:14:50.057 ] 00:14:50.057 }, 00:14:50.057 { 00:14:50.057 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:50.057 "subtype": "NVMe", 00:14:50.057 "listen_addresses": [ 00:14:50.057 { 00:14:50.057 "trtype": "VFIOUSER", 00:14:50.057 "adrfam": "IPv4", 00:14:50.057 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:50.057 "trsvcid": "0" 00:14:50.057 } 
00:14:50.057 ], 00:14:50.057 "allow_any_host": true, 00:14:50.057 "hosts": [], 00:14:50.057 "serial_number": "SPDK2", 00:14:50.057 "model_number": "SPDK bdev Controller", 00:14:50.057 "max_namespaces": 32, 00:14:50.057 "min_cntlid": 1, 00:14:50.057 "max_cntlid": 65519, 00:14:50.057 "namespaces": [ 00:14:50.057 { 00:14:50.057 "nsid": 1, 00:14:50.057 "bdev_name": "Malloc2", 00:14:50.057 "name": "Malloc2", 00:14:50.057 "nguid": "C0A6196F966A49A6A112EEB76BC9AE4A", 00:14:50.057 "uuid": "c0a6196f-966a-49a6-a112-eeb76bc9ae4a" 00:14:50.057 } 00:14:50.057 ] 00:14:50.057 } 00:14:50.057 ] 00:14:50.057 14:28:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:50.057 14:28:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:14:50.057 14:28:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3344729 00:14:50.057 14:28:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:50.057 14:28:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:14:50.057 14:28:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:50.057 14:28:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:14:50.057 14:28:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:14:50.057 14:28:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:50.057 14:28:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:14:50.057 [2024-10-14 14:28:30.770470] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:50.057 Malloc3 00:14:50.318 14:28:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:14:50.318 [2024-10-14 14:28:30.948670] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:50.318 14:28:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:50.318 Asynchronous Event Request test 00:14:50.318 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:50.318 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:50.318 Registering asynchronous event callbacks... 00:14:50.318 Starting namespace attribute notice tests for all controllers... 00:14:50.318 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:50.318 aer_cb - Changed Namespace 00:14:50.318 Cleaning up... 
00:14:50.580 [ 00:14:50.580 { 00:14:50.580 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:50.580 "subtype": "Discovery", 00:14:50.580 "listen_addresses": [], 00:14:50.580 "allow_any_host": true, 00:14:50.580 "hosts": [] 00:14:50.580 }, 00:14:50.580 { 00:14:50.580 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:50.580 "subtype": "NVMe", 00:14:50.580 "listen_addresses": [ 00:14:50.580 { 00:14:50.580 "trtype": "VFIOUSER", 00:14:50.580 "adrfam": "IPv4", 00:14:50.580 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:50.580 "trsvcid": "0" 00:14:50.580 } 00:14:50.580 ], 00:14:50.580 "allow_any_host": true, 00:14:50.580 "hosts": [], 00:14:50.580 "serial_number": "SPDK1", 00:14:50.580 "model_number": "SPDK bdev Controller", 00:14:50.580 "max_namespaces": 32, 00:14:50.580 "min_cntlid": 1, 00:14:50.580 "max_cntlid": 65519, 00:14:50.580 "namespaces": [ 00:14:50.580 { 00:14:50.580 "nsid": 1, 00:14:50.580 "bdev_name": "Malloc1", 00:14:50.580 "name": "Malloc1", 00:14:50.580 "nguid": "C04E796C1FE24E97AFADDD5B9C0C717B", 00:14:50.580 "uuid": "c04e796c-1fe2-4e97-afad-dd5b9c0c717b" 00:14:50.580 }, 00:14:50.580 { 00:14:50.580 "nsid": 2, 00:14:50.580 "bdev_name": "Malloc3", 00:14:50.580 "name": "Malloc3", 00:14:50.580 "nguid": "429F6E3CF4C64CCC9D170F667D3FB7D4", 00:14:50.580 "uuid": "429f6e3c-f4c6-4ccc-9d17-0f667d3fb7d4" 00:14:50.580 } 00:14:50.580 ] 00:14:50.580 }, 00:14:50.580 { 00:14:50.580 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:50.580 "subtype": "NVMe", 00:14:50.580 "listen_addresses": [ 00:14:50.580 { 00:14:50.580 "trtype": "VFIOUSER", 00:14:50.580 "adrfam": "IPv4", 00:14:50.580 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:50.580 "trsvcid": "0" 00:14:50.580 } 00:14:50.580 ], 00:14:50.580 "allow_any_host": true, 00:14:50.580 "hosts": [], 00:14:50.580 "serial_number": "SPDK2", 00:14:50.580 "model_number": "SPDK bdev Controller", 00:14:50.580 "max_namespaces": 32, 00:14:50.580 "min_cntlid": 1, 00:14:50.580 "max_cntlid": 65519, 00:14:50.580 "namespaces": [ 
00:14:50.580 { 00:14:50.580 "nsid": 1, 00:14:50.580 "bdev_name": "Malloc2", 00:14:50.580 "name": "Malloc2", 00:14:50.580 "nguid": "C0A6196F966A49A6A112EEB76BC9AE4A", 00:14:50.580 "uuid": "c0a6196f-966a-49a6-a112-eeb76bc9ae4a" 00:14:50.580 } 00:14:50.580 ] 00:14:50.580 } 00:14:50.580 ] 00:14:50.580 14:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3344729 00:14:50.580 14:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:50.580 14:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:50.580 14:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:14:50.580 14:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:50.580 [2024-10-14 14:28:31.178964] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 
00:14:50.580 [2024-10-14 14:28:31.179007] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3344897 ] 00:14:50.580 [2024-10-14 14:28:31.210647] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:14:50.580 [2024-10-14 14:28:31.219285] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:50.580 [2024-10-14 14:28:31.219308] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f88d8b8c000 00:14:50.580 [2024-10-14 14:28:31.220279] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:50.580 [2024-10-14 14:28:31.221284] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:50.580 [2024-10-14 14:28:31.222289] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:50.580 [2024-10-14 14:28:31.223294] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:50.580 [2024-10-14 14:28:31.224300] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:50.580 [2024-10-14 14:28:31.225308] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:50.580 [2024-10-14 14:28:31.226314] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:50.580 
[2024-10-14 14:28:31.227323] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:50.580 [2024-10-14 14:28:31.228335] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:50.580 [2024-10-14 14:28:31.228350] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f88d8b81000 00:14:50.580 [2024-10-14 14:28:31.229684] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:50.580 [2024-10-14 14:28:31.245909] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:14:50.580 [2024-10-14 14:28:31.245939] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:14:50.580 [2024-10-14 14:28:31.251017] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:50.580 [2024-10-14 14:28:31.251066] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:50.580 [2024-10-14 14:28:31.251149] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:14:50.580 [2024-10-14 14:28:31.251165] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:14:50.580 [2024-10-14 14:28:31.251171] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:14:50.580 [2024-10-14 14:28:31.252022] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:14:50.580 [2024-10-14 14:28:31.252032] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:14:50.580 [2024-10-14 14:28:31.252039] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:14:50.580 [2024-10-14 14:28:31.253031] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:50.581 [2024-10-14 14:28:31.253040] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:14:50.581 [2024-10-14 14:28:31.253047] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:14:50.581 [2024-10-14 14:28:31.254039] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:14:50.581 [2024-10-14 14:28:31.254050] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:50.581 [2024-10-14 14:28:31.255046] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:14:50.581 [2024-10-14 14:28:31.255055] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:14:50.581 [2024-10-14 14:28:31.255068] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:14:50.581 [2024-10-14 14:28:31.255075] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:50.581 [2024-10-14 14:28:31.255181] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:14:50.581 [2024-10-14 14:28:31.255186] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:50.581 [2024-10-14 14:28:31.255191] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:14:50.581 [2024-10-14 14:28:31.256057] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:14:50.581 [2024-10-14 14:28:31.257065] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:14:50.581 [2024-10-14 14:28:31.258075] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:50.581 [2024-10-14 14:28:31.259077] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:50.581 [2024-10-14 14:28:31.259119] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:50.581 [2024-10-14 14:28:31.260089] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:14:50.581 [2024-10-14 14:28:31.260098] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:50.581 [2024-10-14 14:28:31.260103] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:14:50.581 [2024-10-14 14:28:31.260125] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:14:50.581 [2024-10-14 14:28:31.260132] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:14:50.581 [2024-10-14 14:28:31.260146] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:50.581 [2024-10-14 14:28:31.260151] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:50.581 [2024-10-14 14:28:31.260155] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:50.581 [2024-10-14 14:28:31.260167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:50.581 [2024-10-14 14:28:31.269074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:50.581 [2024-10-14 14:28:31.269086] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:14:50.581 [2024-10-14 14:28:31.269091] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:14:50.581 [2024-10-14 14:28:31.269096] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:14:50.581 [2024-10-14 14:28:31.269100] nvme_ctrlr.c:2095:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:50.581 [2024-10-14 14:28:31.269105] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:14:50.581 [2024-10-14 14:28:31.269110] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:14:50.581 [2024-10-14 14:28:31.269117] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:14:50.581 [2024-10-14 14:28:31.269125] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:14:50.581 [2024-10-14 14:28:31.269136] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:50.581 [2024-10-14 14:28:31.277070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:50.581 [2024-10-14 14:28:31.277083] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:50.581 [2024-10-14 14:28:31.277092] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:50.581 [2024-10-14 14:28:31.277100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:50.581 [2024-10-14 14:28:31.277108] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:50.581 [2024-10-14 14:28:31.277113] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:14:50.581 [2024-10-14 14:28:31.277123] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait 
for set keep alive timeout (timeout 30000 ms) 00:14:50.581 [2024-10-14 14:28:31.277132] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:50.581 [2024-10-14 14:28:31.285070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:50.581 [2024-10-14 14:28:31.285078] nvme_ctrlr.c:3034:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:14:50.581 [2024-10-14 14:28:31.285083] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:50.581 [2024-10-14 14:28:31.285090] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:14:50.581 [2024-10-14 14:28:31.285098] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:14:50.581 [2024-10-14 14:28:31.285108] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:50.581 [2024-10-14 14:28:31.293070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:50.581 [2024-10-14 14:28:31.293136] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:14:50.581 [2024-10-14 14:28:31.293144] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:14:50.581 [2024-10-14 14:28:31.293152] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: 
prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:50.581 [2024-10-14 14:28:31.293157] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:50.581 [2024-10-14 14:28:31.293160] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:50.581 [2024-10-14 14:28:31.293166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:50.581 [2024-10-14 14:28:31.301071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:50.581 [2024-10-14 14:28:31.301085] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:14:50.581 [2024-10-14 14:28:31.301094] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:14:50.581 [2024-10-14 14:28:31.301101] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:14:50.581 [2024-10-14 14:28:31.301108] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:50.581 [2024-10-14 14:28:31.301113] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:50.581 [2024-10-14 14:28:31.301116] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:50.581 [2024-10-14 14:28:31.301123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:50.581 [2024-10-14 14:28:31.309070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:50.581 [2024-10-14 14:28:31.309085] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:50.581 [2024-10-14 14:28:31.309093] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:50.581 [2024-10-14 14:28:31.309101] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:50.581 [2024-10-14 14:28:31.309106] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:50.581 [2024-10-14 14:28:31.309109] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:50.581 [2024-10-14 14:28:31.309115] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:50.843 [2024-10-14 14:28:31.317069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:50.843 [2024-10-14 14:28:31.317080] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:50.843 [2024-10-14 14:28:31.317087] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:14:50.843 [2024-10-14 14:28:31.317097] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:14:50.843 [2024-10-14 14:28:31.317103] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:14:50.843 [2024-10-14 14:28:31.317108] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:50.843 [2024-10-14 14:28:31.317113] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:14:50.843 [2024-10-14 14:28:31.317118] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:14:50.843 [2024-10-14 14:28:31.317123] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:14:50.843 [2024-10-14 14:28:31.317128] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:14:50.843 [2024-10-14 14:28:31.317145] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:50.843 [2024-10-14 14:28:31.325070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:50.843 [2024-10-14 14:28:31.325084] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:50.843 [2024-10-14 14:28:31.333069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:50.843 [2024-10-14 14:28:31.333083] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:50.843 [2024-10-14 14:28:31.341071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:50.843 [2024-10-14 14:28:31.341085] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF 
QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:50.843 [2024-10-14 14:28:31.349071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:50.844 [2024-10-14 14:28:31.349090] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:50.844 [2024-10-14 14:28:31.349095] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:50.844 [2024-10-14 14:28:31.349099] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:50.844 [2024-10-14 14:28:31.349103] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:50.844 [2024-10-14 14:28:31.349106] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:50.844 [2024-10-14 14:28:31.349112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:50.844 [2024-10-14 14:28:31.349120] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:50.844 [2024-10-14 14:28:31.349125] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:50.844 [2024-10-14 14:28:31.349128] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:50.844 [2024-10-14 14:28:31.349134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:50.844 [2024-10-14 14:28:31.349142] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:50.844 [2024-10-14 14:28:31.349146] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:50.844 
[2024-10-14 14:28:31.349149] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:50.844 [2024-10-14 14:28:31.349155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:50.844 [2024-10-14 14:28:31.349163] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:50.844 [2024-10-14 14:28:31.349168] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:50.844 [2024-10-14 14:28:31.349171] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:50.844 [2024-10-14 14:28:31.349177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:50.844 [2024-10-14 14:28:31.357072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:50.844 [2024-10-14 14:28:31.357088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:50.844 [2024-10-14 14:28:31.357098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:50.844 [2024-10-14 14:28:31.357106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:50.844 ===================================================== 00:14:50.844 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:50.844 ===================================================== 00:14:50.844 Controller Capabilities/Features 00:14:50.844 ================================ 00:14:50.844 Vendor ID: 4e58 00:14:50.844 Subsystem Vendor ID: 4e58 
00:14:50.844 Serial Number: SPDK2 00:14:50.844 Model Number: SPDK bdev Controller 00:14:50.844 Firmware Version: 25.01 00:14:50.844 Recommended Arb Burst: 6 00:14:50.844 IEEE OUI Identifier: 8d 6b 50 00:14:50.844 Multi-path I/O 00:14:50.844 May have multiple subsystem ports: Yes 00:14:50.844 May have multiple controllers: Yes 00:14:50.844 Associated with SR-IOV VF: No 00:14:50.844 Max Data Transfer Size: 131072 00:14:50.844 Max Number of Namespaces: 32 00:14:50.844 Max Number of I/O Queues: 127 00:14:50.844 NVMe Specification Version (VS): 1.3 00:14:50.844 NVMe Specification Version (Identify): 1.3 00:14:50.844 Maximum Queue Entries: 256 00:14:50.844 Contiguous Queues Required: Yes 00:14:50.844 Arbitration Mechanisms Supported 00:14:50.844 Weighted Round Robin: Not Supported 00:14:50.844 Vendor Specific: Not Supported 00:14:50.844 Reset Timeout: 15000 ms 00:14:50.844 Doorbell Stride: 4 bytes 00:14:50.844 NVM Subsystem Reset: Not Supported 00:14:50.844 Command Sets Supported 00:14:50.844 NVM Command Set: Supported 00:14:50.844 Boot Partition: Not Supported 00:14:50.844 Memory Page Size Minimum: 4096 bytes 00:14:50.844 Memory Page Size Maximum: 4096 bytes 00:14:50.844 Persistent Memory Region: Not Supported 00:14:50.844 Optional Asynchronous Events Supported 00:14:50.844 Namespace Attribute Notices: Supported 00:14:50.844 Firmware Activation Notices: Not Supported 00:14:50.844 ANA Change Notices: Not Supported 00:14:50.844 PLE Aggregate Log Change Notices: Not Supported 00:14:50.844 LBA Status Info Alert Notices: Not Supported 00:14:50.844 EGE Aggregate Log Change Notices: Not Supported 00:14:50.844 Normal NVM Subsystem Shutdown event: Not Supported 00:14:50.844 Zone Descriptor Change Notices: Not Supported 00:14:50.844 Discovery Log Change Notices: Not Supported 00:14:50.844 Controller Attributes 00:14:50.844 128-bit Host Identifier: Supported 00:14:50.844 Non-Operational Permissive Mode: Not Supported 00:14:50.844 NVM Sets: Not Supported 00:14:50.844 Read Recovery 
Levels: Not Supported 00:14:50.844 Endurance Groups: Not Supported 00:14:50.844 Predictable Latency Mode: Not Supported 00:14:50.844 Traffic Based Keep ALive: Not Supported 00:14:50.844 Namespace Granularity: Not Supported 00:14:50.844 SQ Associations: Not Supported 00:14:50.844 UUID List: Not Supported 00:14:50.844 Multi-Domain Subsystem: Not Supported 00:14:50.844 Fixed Capacity Management: Not Supported 00:14:50.844 Variable Capacity Management: Not Supported 00:14:50.844 Delete Endurance Group: Not Supported 00:14:50.844 Delete NVM Set: Not Supported 00:14:50.844 Extended LBA Formats Supported: Not Supported 00:14:50.844 Flexible Data Placement Supported: Not Supported 00:14:50.844 00:14:50.844 Controller Memory Buffer Support 00:14:50.844 ================================ 00:14:50.844 Supported: No 00:14:50.844 00:14:50.844 Persistent Memory Region Support 00:14:50.844 ================================ 00:14:50.844 Supported: No 00:14:50.844 00:14:50.844 Admin Command Set Attributes 00:14:50.844 ============================ 00:14:50.844 Security Send/Receive: Not Supported 00:14:50.844 Format NVM: Not Supported 00:14:50.844 Firmware Activate/Download: Not Supported 00:14:50.844 Namespace Management: Not Supported 00:14:50.844 Device Self-Test: Not Supported 00:14:50.844 Directives: Not Supported 00:14:50.844 NVMe-MI: Not Supported 00:14:50.844 Virtualization Management: Not Supported 00:14:50.844 Doorbell Buffer Config: Not Supported 00:14:50.844 Get LBA Status Capability: Not Supported 00:14:50.844 Command & Feature Lockdown Capability: Not Supported 00:14:50.844 Abort Command Limit: 4 00:14:50.844 Async Event Request Limit: 4 00:14:50.844 Number of Firmware Slots: N/A 00:14:50.844 Firmware Slot 1 Read-Only: N/A 00:14:50.844 Firmware Activation Without Reset: N/A 00:14:50.844 Multiple Update Detection Support: N/A 00:14:50.844 Firmware Update Granularity: No Information Provided 00:14:50.844 Per-Namespace SMART Log: No 00:14:50.844 Asymmetric Namespace Access 
Log Page: Not Supported 00:14:50.844 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:14:50.844 Command Effects Log Page: Supported 00:14:50.844 Get Log Page Extended Data: Supported 00:14:50.844 Telemetry Log Pages: Not Supported 00:14:50.844 Persistent Event Log Pages: Not Supported 00:14:50.844 Supported Log Pages Log Page: May Support 00:14:50.844 Commands Supported & Effects Log Page: Not Supported 00:14:50.844 Feature Identifiers & Effects Log Page:May Support 00:14:50.844 NVMe-MI Commands & Effects Log Page: May Support 00:14:50.844 Data Area 4 for Telemetry Log: Not Supported 00:14:50.844 Error Log Page Entries Supported: 128 00:14:50.844 Keep Alive: Supported 00:14:50.844 Keep Alive Granularity: 10000 ms 00:14:50.844 00:14:50.844 NVM Command Set Attributes 00:14:50.844 ========================== 00:14:50.844 Submission Queue Entry Size 00:14:50.844 Max: 64 00:14:50.844 Min: 64 00:14:50.844 Completion Queue Entry Size 00:14:50.844 Max: 16 00:14:50.844 Min: 16 00:14:50.844 Number of Namespaces: 32 00:14:50.844 Compare Command: Supported 00:14:50.844 Write Uncorrectable Command: Not Supported 00:14:50.844 Dataset Management Command: Supported 00:14:50.844 Write Zeroes Command: Supported 00:14:50.844 Set Features Save Field: Not Supported 00:14:50.844 Reservations: Not Supported 00:14:50.844 Timestamp: Not Supported 00:14:50.844 Copy: Supported 00:14:50.844 Volatile Write Cache: Present 00:14:50.844 Atomic Write Unit (Normal): 1 00:14:50.844 Atomic Write Unit (PFail): 1 00:14:50.844 Atomic Compare & Write Unit: 1 00:14:50.844 Fused Compare & Write: Supported 00:14:50.844 Scatter-Gather List 00:14:50.844 SGL Command Set: Supported (Dword aligned) 00:14:50.844 SGL Keyed: Not Supported 00:14:50.844 SGL Bit Bucket Descriptor: Not Supported 00:14:50.844 SGL Metadata Pointer: Not Supported 00:14:50.844 Oversized SGL: Not Supported 00:14:50.844 SGL Metadata Address: Not Supported 00:14:50.844 SGL Offset: Not Supported 00:14:50.844 Transport SGL Data Block: Not Supported 
00:14:50.844 Replay Protected Memory Block: Not Supported 00:14:50.844 00:14:50.844 Firmware Slot Information 00:14:50.844 ========================= 00:14:50.844 Active slot: 1 00:14:50.844 Slot 1 Firmware Revision: 25.01 00:14:50.844 00:14:50.844 00:14:50.844 Commands Supported and Effects 00:14:50.844 ============================== 00:14:50.844 Admin Commands 00:14:50.844 -------------- 00:14:50.844 Get Log Page (02h): Supported 00:14:50.844 Identify (06h): Supported 00:14:50.844 Abort (08h): Supported 00:14:50.844 Set Features (09h): Supported 00:14:50.844 Get Features (0Ah): Supported 00:14:50.844 Asynchronous Event Request (0Ch): Supported 00:14:50.845 Keep Alive (18h): Supported 00:14:50.845 I/O Commands 00:14:50.845 ------------ 00:14:50.845 Flush (00h): Supported LBA-Change 00:14:50.845 Write (01h): Supported LBA-Change 00:14:50.845 Read (02h): Supported 00:14:50.845 Compare (05h): Supported 00:14:50.845 Write Zeroes (08h): Supported LBA-Change 00:14:50.845 Dataset Management (09h): Supported LBA-Change 00:14:50.845 Copy (19h): Supported LBA-Change 00:14:50.845 00:14:50.845 Error Log 00:14:50.845 ========= 00:14:50.845 00:14:50.845 Arbitration 00:14:50.845 =========== 00:14:50.845 Arbitration Burst: 1 00:14:50.845 00:14:50.845 Power Management 00:14:50.845 ================ 00:14:50.845 Number of Power States: 1 00:14:50.845 Current Power State: Power State #0 00:14:50.845 Power State #0: 00:14:50.845 Max Power: 0.00 W 00:14:50.845 Non-Operational State: Operational 00:14:50.845 Entry Latency: Not Reported 00:14:50.845 Exit Latency: Not Reported 00:14:50.845 Relative Read Throughput: 0 00:14:50.845 Relative Read Latency: 0 00:14:50.845 Relative Write Throughput: 0 00:14:50.845 Relative Write Latency: 0 00:14:50.845 Idle Power: Not Reported 00:14:50.845 Active Power: Not Reported 00:14:50.845 Non-Operational Permissive Mode: Not Supported 00:14:50.845 00:14:50.845 Health Information 00:14:50.845 ================== 00:14:50.845 Critical Warnings: 00:14:50.845 
Available Spare Space: OK 00:14:50.845 Temperature: OK 00:14:50.845 Device Reliability: OK 00:14:50.845 Read Only: No 00:14:50.845 Volatile Memory Backup: OK 00:14:50.845 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:50.845 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:50.845 Available Spare: 0% 00:14:50.845 [2024-10-14 14:28:31.357205] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:50.845 [2024-10-14 14:28:31.365071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:50.845 [2024-10-14 14:28:31.365105] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:14:50.845 [2024-10-14 14:28:31.365115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.845 [2024-10-14 14:28:31.365122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.845 [2024-10-14 14:28:31.365128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.845 [2024-10-14 14:28:31.365135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.845 [2024-10-14 14:28:31.365177] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:50.845 [2024-10-14 14:28:31.365188] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:14:50.845 [2024-10-14 14:28:31.366184] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller
00:14:50.845 [2024-10-14 14:28:31.366233] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:14:50.845 [2024-10-14 14:28:31.366240] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:14:50.845 [2024-10-14 14:28:31.367186] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:14:50.845 [2024-10-14 14:28:31.367198] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:14:50.845 [2024-10-14 14:28:31.367252] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:14:50.845 [2024-10-14 14:28:31.368630] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:50.845 Available Spare Threshold: 0% 00:14:50.845 Life Percentage Used: 0% 00:14:50.845 Data Units Read: 0 00:14:50.845 Data Units Written: 0 00:14:50.845 Host Read Commands: 0 00:14:50.845 Host Write Commands: 0 00:14:50.845 Controller Busy Time: 0 minutes 00:14:50.845 Power Cycles: 0 00:14:50.845 Power On Hours: 0 hours 00:14:50.845 Unsafe Shutdowns: 0 00:14:50.845 Unrecoverable Media Errors: 0 00:14:50.845 Lifetime Error Log Entries: 0 00:14:50.845 Warning Temperature Time: 0 minutes 00:14:50.845 Critical Temperature Time: 0 minutes 00:14:50.845 00:14:50.845 Number of Queues 00:14:50.845 ================ 00:14:50.845 Number of I/O Submission Queues: 127 00:14:50.845 Number of I/O Completion Queues: 127 00:14:50.845 00:14:50.845 Active Namespaces 00:14:50.845 ================= 00:14:50.845 Namespace ID:1 00:14:50.845 Error Recovery Timeout: Unlimited 00:14:50.845 Command Set Identifier: NVM (00h) 00:14:50.845 Deallocate: Supported 00:14:50.845 Deallocated/Unwritten Error: Not Supported
00:14:50.845 Deallocated Read Value: Unknown 00:14:50.845 Deallocate in Write Zeroes: Not Supported 00:14:50.845 Deallocated Guard Field: 0xFFFF 00:14:50.845 Flush: Supported 00:14:50.845 Reservation: Supported 00:14:50.845 Namespace Sharing Capabilities: Multiple Controllers 00:14:50.845 Size (in LBAs): 131072 (0GiB) 00:14:50.845 Capacity (in LBAs): 131072 (0GiB) 00:14:50.845 Utilization (in LBAs): 131072 (0GiB) 00:14:50.845 NGUID: C0A6196F966A49A6A112EEB76BC9AE4A 00:14:50.845 UUID: c0a6196f-966a-49a6-a112-eeb76bc9ae4a 00:14:50.845 Thin Provisioning: Not Supported 00:14:50.845 Per-NS Atomic Units: Yes 00:14:50.845 Atomic Boundary Size (Normal): 0 00:14:50.845 Atomic Boundary Size (PFail): 0 00:14:50.845 Atomic Boundary Offset: 0 00:14:50.845 Maximum Single Source Range Length: 65535 00:14:50.845 Maximum Copy Length: 65535 00:14:50.845 Maximum Source Range Count: 1 00:14:50.845 NGUID/EUI64 Never Reused: No 00:14:50.845 Namespace Write Protected: No 00:14:50.845 Number of LBA Formats: 1 00:14:50.845 Current LBA Format: LBA Format #00 00:14:50.845 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:50.845 00:14:50.845 14:28:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:50.845 [2024-10-14 14:28:31.565159] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:56.131 Initializing NVMe Controllers 00:14:56.131 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:56.131 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:56.131 Initialization complete. Launching workers. 
00:14:56.131 ======================================================== 00:14:56.131 Latency(us) 00:14:56.131 Device Information : IOPS MiB/s Average min max 00:14:56.131 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39950.80 156.06 3206.33 845.19 10777.94 00:14:56.131 ======================================================== 00:14:56.131 Total : 39950.80 156.06 3206.33 845.19 10777.94 00:14:56.131 00:14:56.131 [2024-10-14 14:28:36.672267] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:56.131 14:28:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:56.131 [2024-10-14 14:28:36.853989] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:01.418 Initializing NVMe Controllers 00:15:01.418 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:01.418 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:01.418 Initialization complete. Launching workers. 
00:15:01.418 ======================================================== 00:15:01.418 Latency(us) 00:15:01.418 Device Information : IOPS MiB/s Average min max 00:15:01.418 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 34884.98 136.27 3668.66 1107.00 9302.36 00:15:01.418 ======================================================== 00:15:01.418 Total : 34884.98 136.27 3668.66 1107.00 9302.36 00:15:01.418 00:15:01.418 [2024-10-14 14:28:41.875022] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:01.418 14:28:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:01.418 [2024-10-14 14:28:42.068459] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:06.705 [2024-10-14 14:28:47.202146] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:06.705 Initializing NVMe Controllers 00:15:06.705 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:06.705 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:06.705 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:06.705 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:06.705 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:06.705 Initialization complete. Launching workers. 
00:15:06.705 Starting thread on core 2 00:15:06.705 Starting thread on core 3 00:15:06.705 Starting thread on core 1 00:15:06.705 14:28:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:06.966 [2024-10-14 14:28:47.468476] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:10.267 [2024-10-14 14:28:50.610846] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:10.267 Initializing NVMe Controllers 00:15:10.267 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:10.267 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:10.267 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:10.267 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:10.267 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:10.267 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:10.267 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:10.267 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:10.267 Initialization complete. Launching workers. 
00:15:10.267 Starting thread on core 1 with urgent priority queue 00:15:10.267 Starting thread on core 2 with urgent priority queue 00:15:10.267 Starting thread on core 3 with urgent priority queue 00:15:10.267 Starting thread on core 0 with urgent priority queue 00:15:10.267 SPDK bdev Controller (SPDK2 ) core 0: 13226.00 IO/s 7.56 secs/100000 ios 00:15:10.267 SPDK bdev Controller (SPDK2 ) core 1: 10065.67 IO/s 9.93 secs/100000 ios 00:15:10.267 SPDK bdev Controller (SPDK2 ) core 2: 9538.33 IO/s 10.48 secs/100000 ios 00:15:10.267 SPDK bdev Controller (SPDK2 ) core 3: 9734.67 IO/s 10.27 secs/100000 ios 00:15:10.267 ======================================================== 00:15:10.267 00:15:10.267 14:28:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:10.267 [2024-10-14 14:28:50.881501] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:10.267 Initializing NVMe Controllers 00:15:10.267 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:10.267 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:10.267 Namespace ID: 1 size: 0GB 00:15:10.267 Initialization complete. 00:15:10.267 INFO: using host memory buffer for IO 00:15:10.267 Hello world! 
00:15:10.267 [2024-10-14 14:28:50.891571] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:10.267 14:28:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:10.527 [2024-10-14 14:28:51.152351] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:11.913 Initializing NVMe Controllers 00:15:11.913 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:11.913 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:11.913 Initialization complete. Launching workers. 00:15:11.913 submit (in ns) avg, min, max = 8298.9, 3932.5, 4001288.3 00:15:11.913 complete (in ns) avg, min, max = 17806.8, 2392.5, 4000889.2 00:15:11.913 00:15:11.913 Submit histogram 00:15:11.913 ================ 00:15:11.913 Range in us Cumulative Count 00:15:11.913 3.920 - 3.947: 0.1525% ( 29) 00:15:11.913 3.947 - 3.973: 4.2022% ( 770) 00:15:11.913 3.973 - 4.000: 11.3916% ( 1367) 00:15:11.913 4.000 - 4.027: 21.4631% ( 1915) 00:15:11.913 4.027 - 4.053: 32.8021% ( 2156) 00:15:11.913 4.053 - 4.080: 45.4349% ( 2402) 00:15:11.913 4.080 - 4.107: 61.3653% ( 3029) 00:15:11.913 4.107 - 4.133: 78.2055% ( 3202) 00:15:11.913 4.133 - 4.160: 90.1231% ( 2266) 00:15:11.913 4.160 - 4.187: 95.7189% ( 1064) 00:15:11.913 4.187 - 4.213: 98.1961% ( 471) 00:15:11.913 4.213 - 4.240: 99.1427% ( 180) 00:15:11.913 4.240 - 4.267: 99.3794% ( 45) 00:15:11.913 4.267 - 4.293: 99.4320% ( 10) 00:15:11.913 4.293 - 4.320: 99.4530% ( 4) 00:15:11.913 4.400 - 4.427: 99.4583% ( 1) 00:15:11.913 4.533 - 4.560: 99.4636% ( 1) 00:15:11.913 4.640 - 4.667: 99.4688% ( 1) 00:15:11.913 4.667 - 4.693: 99.4741% ( 1) 00:15:11.913 4.800 - 4.827: 99.4846% ( 2) 00:15:11.913 4.827 - 4.853: 99.4951% ( 2) 
00:15:11.913 4.880 - 4.907: 99.5004% ( 1) 00:15:11.913 4.960 - 4.987: 99.5056% ( 1) 00:15:11.913 4.987 - 5.013: 99.5109% ( 1) 00:15:11.913 5.093 - 5.120: 99.5161% ( 1) 00:15:11.913 5.173 - 5.200: 99.5214% ( 1) 00:15:11.913 5.307 - 5.333: 99.5372% ( 3) 00:15:11.913 5.333 - 5.360: 99.5424% ( 1) 00:15:11.913 5.440 - 5.467: 99.5477% ( 1) 00:15:11.913 5.760 - 5.787: 99.5530% ( 1) 00:15:11.913 5.813 - 5.840: 99.5582% ( 1) 00:15:11.913 5.893 - 5.920: 99.5635% ( 1) 00:15:11.913 6.000 - 6.027: 99.5687% ( 1) 00:15:11.913 6.080 - 6.107: 99.5845% ( 3) 00:15:11.913 6.160 - 6.187: 99.5898% ( 1) 00:15:11.913 6.213 - 6.240: 99.5950% ( 1) 00:15:11.913 6.240 - 6.267: 99.6003% ( 1) 00:15:11.913 6.267 - 6.293: 99.6056% ( 1) 00:15:11.913 6.293 - 6.320: 99.6108% ( 1) 00:15:11.913 6.400 - 6.427: 99.6161% ( 1) 00:15:11.913 6.560 - 6.587: 99.6213% ( 1) 00:15:11.913 6.800 - 6.827: 99.6266% ( 1) 00:15:11.913 6.827 - 6.880: 99.6319% ( 1) 00:15:11.913 6.933 - 6.987: 99.6371% ( 1) 00:15:11.913 7.040 - 7.093: 99.6424% ( 1) 00:15:11.913 7.147 - 7.200: 99.6476% ( 1) 00:15:11.913 7.200 - 7.253: 99.6581% ( 2) 00:15:11.913 7.307 - 7.360: 99.6687% ( 2) 00:15:11.913 7.413 - 7.467: 99.6792% ( 2) 00:15:11.913 7.520 - 7.573: 99.6950% ( 3) 00:15:11.913 7.573 - 7.627: 99.7107% ( 3) 00:15:11.913 7.627 - 7.680: 99.7160% ( 1) 00:15:11.913 7.680 - 7.733: 99.7265% ( 2) 00:15:11.913 7.787 - 7.840: 99.7318% ( 1) 00:15:11.913 7.840 - 7.893: 99.7370% ( 1) 00:15:11.913 7.893 - 7.947: 99.7423% ( 1) 00:15:11.913 8.053 - 8.107: 99.7581% ( 3) 00:15:11.913 8.160 - 8.213: 99.7686% ( 2) 00:15:11.913 8.213 - 8.267: 99.7739% ( 1) 00:15:11.913 8.320 - 8.373: 99.7791% ( 1) 00:15:11.913 8.373 - 8.427: 99.7949% ( 3) 00:15:11.913 8.427 - 8.480: 99.8001% ( 1) 00:15:11.913 8.480 - 8.533: 99.8054% ( 1) 00:15:11.913 8.587 - 8.640: 99.8212% ( 3) 00:15:11.913 8.640 - 8.693: 99.8264% ( 1) 00:15:11.913 8.747 - 8.800: 99.8317% ( 1) 00:15:11.913 8.960 - 9.013: 99.8422% ( 2) 00:15:11.913 9.013 - 9.067: 99.8475% ( 1) 00:15:11.913 9.067 - 
9.120: 99.8580% ( 2) 00:15:11.913 9.173 - 9.227: 99.8633% ( 1) 00:15:11.913 9.227 - 9.280: 99.8685% ( 1) 00:15:11.913 9.333 - 9.387: 99.8738% ( 1) 00:15:11.913 9.493 - 9.547: 99.8790% ( 1) 00:15:11.913 10.240 - 10.293: 99.8843% ( 1) 00:15:11.913 11.733 - 11.787: 99.8896% ( 1) 00:15:11.913 16.107 - 16.213: 99.8948% ( 1) 00:15:11.913 3986.773 - 4014.080: 100.0000% ( 20) 00:15:11.913 00:15:11.913 Complete histogram 00:15:11.913 ================== 00:15:11.913 Range in us Cumulative Count 00:15:11.913 2.387 - [2024-10-14 14:28:52.247763] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:11.913 2.400: 0.0053% ( 1) 00:15:11.913 2.400 - 2.413: 0.2209% ( 41) 00:15:11.913 2.413 - 2.427: 1.0150% ( 151) 00:15:11.913 2.427 - 2.440: 1.1150% ( 19) 00:15:11.913 2.440 - 2.453: 1.2044% ( 17) 00:15:11.913 2.453 - 2.467: 43.5206% ( 8046) 00:15:11.913 2.467 - 2.480: 58.7094% ( 2888) 00:15:11.913 2.480 - 2.493: 72.3151% ( 2587) 00:15:11.913 2.493 - 2.507: 78.2476% ( 1128) 00:15:11.913 2.507 - 2.520: 80.9035% ( 505) 00:15:11.913 2.520 - 2.533: 83.4964% ( 493) 00:15:11.913 2.533 - 2.547: 88.9345% ( 1034) 00:15:11.913 2.547 - 2.560: 94.3936% ( 1038) 00:15:11.913 2.560 - 2.573: 97.0601% ( 507) 00:15:11.913 2.573 - 2.587: 98.3959% ( 254) 00:15:11.913 2.587 - 2.600: 99.0428% ( 123) 00:15:11.913 2.600 - 2.613: 99.2111% ( 32) 00:15:11.913 2.613 - 2.627: 99.2795% ( 13) 00:15:11.913 2.627 - 2.640: 99.2953% ( 3) 00:15:11.913 2.680 - 2.693: 99.3005% ( 1) 00:15:11.913 3.080 - 3.093: 99.3058% ( 1) 00:15:11.913 3.093 - 3.107: 99.3110% ( 1) 00:15:11.913 4.507 - 4.533: 99.3163% ( 1) 00:15:11.913 4.613 - 4.640: 99.3216% ( 1) 00:15:11.913 4.667 - 4.693: 99.3268% ( 1) 00:15:11.913 4.747 - 4.773: 99.3321% ( 1) 00:15:11.913 4.880 - 4.907: 99.3373% ( 1) 00:15:11.913 4.907 - 4.933: 99.3426% ( 1) 00:15:11.913 5.013 - 5.040: 99.3478% ( 1) 00:15:11.913 5.067 - 5.093: 99.3531% ( 1) 00:15:11.913 5.520 - 5.547: 99.3584% ( 1) 00:15:11.913 5.600 - 5.627: 
99.3636% ( 1) 00:15:11.913 5.627 - 5.653: 99.3689% ( 1) 00:15:11.913 5.680 - 5.707: 99.3741% ( 1) 00:15:11.913 5.787 - 5.813: 99.3794% ( 1) 00:15:11.913 5.813 - 5.840: 99.3847% ( 1) 00:15:11.913 5.947 - 5.973: 99.3899% ( 1) 00:15:11.913 6.000 - 6.027: 99.4004% ( 2) 00:15:11.913 6.027 - 6.053: 99.4057% ( 1) 00:15:11.913 6.080 - 6.107: 99.4162% ( 2) 00:15:11.913 6.107 - 6.133: 99.4215% ( 1) 00:15:11.913 6.160 - 6.187: 99.4267% ( 1) 00:15:11.913 6.240 - 6.267: 99.4320% ( 1) 00:15:11.913 6.293 - 6.320: 99.4425% ( 2) 00:15:11.913 6.320 - 6.347: 99.4478% ( 1) 00:15:11.913 6.400 - 6.427: 99.4530% ( 1) 00:15:11.913 6.427 - 6.453: 99.4688% ( 3) 00:15:11.913 6.507 - 6.533: 99.4793% ( 2) 00:15:11.913 6.533 - 6.560: 99.4846% ( 1) 00:15:11.913 6.613 - 6.640: 99.4898% ( 1) 00:15:11.913 6.667 - 6.693: 99.4951% ( 1) 00:15:11.913 6.773 - 6.800: 99.5056% ( 2) 00:15:11.913 6.880 - 6.933: 99.5214% ( 3) 00:15:11.913 7.040 - 7.093: 99.5319% ( 2) 00:15:11.913 7.200 - 7.253: 99.5424% ( 2) 00:15:11.913 7.307 - 7.360: 99.5477% ( 1) 00:15:11.913 7.413 - 7.467: 99.5530% ( 1) 00:15:11.913 7.467 - 7.520: 99.5582% ( 1) 00:15:11.913 7.733 - 7.787: 99.5635% ( 1) 00:15:11.913 7.787 - 7.840: 99.5687% ( 1) 00:15:11.913 7.893 - 7.947: 99.5740% ( 1) 00:15:11.913 8.213 - 8.267: 99.5793% ( 1) 00:15:11.914 8.267 - 8.320: 99.5845% ( 1) 00:15:11.914 9.013 - 9.067: 99.5898% ( 1) 00:15:11.914 13.493 - 13.547: 99.5950% ( 1) 00:15:11.914 13.600 - 13.653: 99.6003% ( 1) 00:15:11.914 13.973 - 14.080: 99.6056% ( 1) 00:15:11.914 14.293 - 14.400: 99.6108% ( 1) 00:15:11.914 14.933 - 15.040: 99.6161% ( 1) 00:15:11.914 3358.720 - 3372.373: 99.6213% ( 1) 00:15:11.914 3986.773 - 4014.080: 100.0000% ( 72) 00:15:11.914 00:15:11.914 14:28:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:11.914 14:28:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local 
traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:11.914 14:28:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:11.914 14:28:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:11.914 14:28:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:11.914 [ 00:15:11.914 { 00:15:11.914 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:11.914 "subtype": "Discovery", 00:15:11.914 "listen_addresses": [], 00:15:11.914 "allow_any_host": true, 00:15:11.914 "hosts": [] 00:15:11.914 }, 00:15:11.914 { 00:15:11.914 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:11.914 "subtype": "NVMe", 00:15:11.914 "listen_addresses": [ 00:15:11.914 { 00:15:11.914 "trtype": "VFIOUSER", 00:15:11.914 "adrfam": "IPv4", 00:15:11.914 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:11.914 "trsvcid": "0" 00:15:11.914 } 00:15:11.914 ], 00:15:11.914 "allow_any_host": true, 00:15:11.914 "hosts": [], 00:15:11.914 "serial_number": "SPDK1", 00:15:11.914 "model_number": "SPDK bdev Controller", 00:15:11.914 "max_namespaces": 32, 00:15:11.914 "min_cntlid": 1, 00:15:11.914 "max_cntlid": 65519, 00:15:11.914 "namespaces": [ 00:15:11.914 { 00:15:11.914 "nsid": 1, 00:15:11.914 "bdev_name": "Malloc1", 00:15:11.914 "name": "Malloc1", 00:15:11.914 "nguid": "C04E796C1FE24E97AFADDD5B9C0C717B", 00:15:11.914 "uuid": "c04e796c-1fe2-4e97-afad-dd5b9c0c717b" 00:15:11.914 }, 00:15:11.914 { 00:15:11.914 "nsid": 2, 00:15:11.914 "bdev_name": "Malloc3", 00:15:11.914 "name": "Malloc3", 00:15:11.914 "nguid": "429F6E3CF4C64CCC9D170F667D3FB7D4", 00:15:11.914 "uuid": "429f6e3c-f4c6-4ccc-9d17-0f667d3fb7d4" 00:15:11.914 } 00:15:11.914 ] 00:15:11.914 }, 00:15:11.914 { 00:15:11.914 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:11.914 "subtype": "NVMe", 00:15:11.914 
"listen_addresses": [ 00:15:11.914 { 00:15:11.914 "trtype": "VFIOUSER", 00:15:11.914 "adrfam": "IPv4", 00:15:11.914 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:11.914 "trsvcid": "0" 00:15:11.914 } 00:15:11.914 ], 00:15:11.914 "allow_any_host": true, 00:15:11.914 "hosts": [], 00:15:11.914 "serial_number": "SPDK2", 00:15:11.914 "model_number": "SPDK bdev Controller", 00:15:11.914 "max_namespaces": 32, 00:15:11.914 "min_cntlid": 1, 00:15:11.914 "max_cntlid": 65519, 00:15:11.914 "namespaces": [ 00:15:11.914 { 00:15:11.914 "nsid": 1, 00:15:11.914 "bdev_name": "Malloc2", 00:15:11.914 "name": "Malloc2", 00:15:11.914 "nguid": "C0A6196F966A49A6A112EEB76BC9AE4A", 00:15:11.914 "uuid": "c0a6196f-966a-49a6-a112-eeb76bc9ae4a" 00:15:11.914 } 00:15:11.914 ] 00:15:11.914 } 00:15:11.914 ] 00:15:11.914 14:28:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:11.914 14:28:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:11.914 14:28:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3349088 00:15:11.914 14:28:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:11.914 14:28:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:15:11.914 14:28:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:11.914 14:28:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:11.914 14:28:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:15:11.914 14:28:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:11.914 14:28:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:11.914 [2024-10-14 14:28:52.642471] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:12.174 Malloc4 00:15:12.175 14:28:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:12.175 [2024-10-14 14:28:52.828738] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:12.175 14:28:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:12.175 Asynchronous Event Request test 00:15:12.175 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:12.175 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:12.175 Registering asynchronous event callbacks... 00:15:12.175 Starting namespace attribute notice tests for all controllers... 00:15:12.175 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:12.175 aer_cb - Changed Namespace 00:15:12.175 Cleaning up... 
00:15:12.436 [ 00:15:12.436 { 00:15:12.436 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:12.436 "subtype": "Discovery", 00:15:12.436 "listen_addresses": [], 00:15:12.436 "allow_any_host": true, 00:15:12.436 "hosts": [] 00:15:12.436 }, 00:15:12.436 { 00:15:12.436 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:12.436 "subtype": "NVMe", 00:15:12.436 "listen_addresses": [ 00:15:12.436 { 00:15:12.436 "trtype": "VFIOUSER", 00:15:12.436 "adrfam": "IPv4", 00:15:12.437 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:12.437 "trsvcid": "0" 00:15:12.437 } 00:15:12.437 ], 00:15:12.437 "allow_any_host": true, 00:15:12.437 "hosts": [], 00:15:12.437 "serial_number": "SPDK1", 00:15:12.437 "model_number": "SPDK bdev Controller", 00:15:12.437 "max_namespaces": 32, 00:15:12.437 "min_cntlid": 1, 00:15:12.437 "max_cntlid": 65519, 00:15:12.437 "namespaces": [ 00:15:12.437 { 00:15:12.437 "nsid": 1, 00:15:12.437 "bdev_name": "Malloc1", 00:15:12.437 "name": "Malloc1", 00:15:12.437 "nguid": "C04E796C1FE24E97AFADDD5B9C0C717B", 00:15:12.437 "uuid": "c04e796c-1fe2-4e97-afad-dd5b9c0c717b" 00:15:12.437 }, 00:15:12.437 { 00:15:12.437 "nsid": 2, 00:15:12.437 "bdev_name": "Malloc3", 00:15:12.437 "name": "Malloc3", 00:15:12.437 "nguid": "429F6E3CF4C64CCC9D170F667D3FB7D4", 00:15:12.437 "uuid": "429f6e3c-f4c6-4ccc-9d17-0f667d3fb7d4" 00:15:12.437 } 00:15:12.437 ] 00:15:12.437 }, 00:15:12.437 { 00:15:12.437 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:12.437 "subtype": "NVMe", 00:15:12.437 "listen_addresses": [ 00:15:12.437 { 00:15:12.437 "trtype": "VFIOUSER", 00:15:12.437 "adrfam": "IPv4", 00:15:12.437 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:12.437 "trsvcid": "0" 00:15:12.437 } 00:15:12.437 ], 00:15:12.437 "allow_any_host": true, 00:15:12.437 "hosts": [], 00:15:12.437 "serial_number": "SPDK2", 00:15:12.437 "model_number": "SPDK bdev Controller", 00:15:12.437 "max_namespaces": 32, 00:15:12.437 "min_cntlid": 1, 00:15:12.437 "max_cntlid": 65519, 00:15:12.437 "namespaces": [ 
00:15:12.437 { 00:15:12.437 "nsid": 1, 00:15:12.437 "bdev_name": "Malloc2", 00:15:12.437 "name": "Malloc2", 00:15:12.437 "nguid": "C0A6196F966A49A6A112EEB76BC9AE4A", 00:15:12.437 "uuid": "c0a6196f-966a-49a6-a112-eeb76bc9ae4a" 00:15:12.437 }, 00:15:12.437 { 00:15:12.437 "nsid": 2, 00:15:12.437 "bdev_name": "Malloc4", 00:15:12.437 "name": "Malloc4", 00:15:12.437 "nguid": "27FBB0AF3B124F2DBF474C2033695549", 00:15:12.437 "uuid": "27fbb0af-3b12-4f2d-bf47-4c2033695549" 00:15:12.437 } 00:15:12.437 ] 00:15:12.437 } 00:15:12.437 ] 00:15:12.437 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3349088 00:15:12.437 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:12.437 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3340007 00:15:12.437 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 3340007 ']' 00:15:12.437 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 3340007 00:15:12.437 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:15:12.437 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:12.437 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3340007 00:15:12.437 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:12.437 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:12.437 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3340007' 00:15:12.437 killing process with pid 3340007 00:15:12.437 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@969 -- # kill 3340007 00:15:12.437 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 3340007 00:15:12.698 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:12.698 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:12.698 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:12.698 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:12.698 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:12.698 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3349234 00:15:12.698 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3349234' 00:15:12.698 Process pid: 3349234 00:15:12.698 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:12.698 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:12.698 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3349234 00:15:12.698 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 3349234 ']' 00:15:12.698 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:12.698 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:12.698 
14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:12.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:12.698 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:12.698 14:28:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:12.698 [2024-10-14 14:28:53.331360] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:12.698 [2024-10-14 14:28:53.332354] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:15:12.698 [2024-10-14 14:28:53.332403] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:12.698 [2024-10-14 14:28:53.397632] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:12.960 [2024-10-14 14:28:53.435215] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:12.960 [2024-10-14 14:28:53.435249] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:12.960 [2024-10-14 14:28:53.435257] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:12.960 [2024-10-14 14:28:53.435264] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:12.960 [2024-10-14 14:28:53.435270] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:12.960 [2024-10-14 14:28:53.437081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:12.960 [2024-10-14 14:28:53.437317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:12.960 [2024-10-14 14:28:53.437318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:12.960 [2024-10-14 14:28:53.437096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:12.960 [2024-10-14 14:28:53.492219] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:12.960 [2024-10-14 14:28:53.492438] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:12.960 [2024-10-14 14:28:53.493537] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:15:12.960 [2024-10-14 14:28:53.493881] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:12.960 [2024-10-14 14:28:53.493975] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:15:13.551 14:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:13.551 14:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:15:13.551 14:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:14.492 14:28:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:14.753 14:28:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:14.753 14:28:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:14.753 14:28:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:14.753 14:28:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:14.753 14:28:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:15.014 Malloc1 00:15:15.014 14:28:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:15.014 14:28:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:15.274 14:28:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:15:15.535 14:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:15.535 14:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:15.535 14:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:15.796 Malloc2 00:15:15.796 14:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:15.796 14:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:16.057 14:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:16.318 14:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:16.318 14:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3349234 00:15:16.318 14:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 3349234 ']' 00:15:16.318 14:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 3349234 00:15:16.318 14:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:15:16.318 14:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:16.318 14:28:56 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3349234 00:15:16.318 14:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:16.318 14:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:16.318 14:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3349234' 00:15:16.318 killing process with pid 3349234 00:15:16.318 14:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 3349234 00:15:16.318 14:28:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 3349234 00:15:16.579 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:16.579 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:16.579 00:15:16.579 real 0m51.229s 00:15:16.579 user 3m16.191s 00:15:16.579 sys 0m2.808s 00:15:16.579 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:16.579 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:16.579 ************************************ 00:15:16.579 END TEST nvmf_vfio_user 00:15:16.579 ************************************ 00:15:16.579 14:28:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:16.579 14:28:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:16.579 14:28:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:16.579 14:28:57 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:15:16.579 ************************************ 00:15:16.579 START TEST nvmf_vfio_user_nvme_compliance 00:15:16.579 ************************************ 00:15:16.579 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:16.579 * Looking for test storage... 00:15:16.579 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:16.579 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:16.579 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lcov --version 00:15:16.579 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:16.579 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:16.579 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:16.579 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:16.841 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:16.841 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:15:16.841 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:15:16.841 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:15:16.841 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:15:16.841 14:28:57 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:15:16.841 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:15:16.841 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:15:16.841 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:16.841 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:15:16.841 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:15:16.841 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:16.841 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:16.841 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:15:16.841 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:15:16.841 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:16.841 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:15:16.841 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:15:16.842 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:15:16.842 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:15:16.842 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:16.842 14:28:57 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:15:16.842 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:15:16.842 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:16.842 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:16.842 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:15:16.842 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:16.842 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:16.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:16.842 --rc genhtml_branch_coverage=1 00:15:16.842 --rc genhtml_function_coverage=1 00:15:16.842 --rc genhtml_legend=1 00:15:16.842 --rc geninfo_all_blocks=1 00:15:16.842 --rc geninfo_unexecuted_blocks=1 00:15:16.842 00:15:16.842 ' 00:15:16.842 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:16.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:16.842 --rc genhtml_branch_coverage=1 00:15:16.842 --rc genhtml_function_coverage=1 00:15:16.842 --rc genhtml_legend=1 00:15:16.842 --rc geninfo_all_blocks=1 00:15:16.842 --rc geninfo_unexecuted_blocks=1 00:15:16.842 00:15:16.842 ' 00:15:16.842 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:16.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:16.842 --rc genhtml_branch_coverage=1 00:15:16.842 --rc genhtml_function_coverage=1 00:15:16.842 --rc 
genhtml_legend=1 00:15:16.842 --rc geninfo_all_blocks=1 00:15:16.842 --rc geninfo_unexecuted_blocks=1 00:15:16.842 00:15:16.842 ' 00:15:16.842 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:16.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:16.842 --rc genhtml_branch_coverage=1 00:15:16.842 --rc genhtml_function_coverage=1 00:15:16.842 --rc genhtml_legend=1 00:15:16.842 --rc geninfo_all_blocks=1 00:15:16.842 --rc geninfo_unexecuted_blocks=1 00:15:16.842 00:15:16.842 ' 00:15:16.842 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:16.842 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:16.842 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:16.842 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:16.842 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:16.842 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:16.842 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:16.842 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:16.842 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:16.842 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:16.842 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:16.842 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:16.842 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:16.842 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:16.842 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:16.842 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:16.842 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:16.842 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:16.842 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:16.842 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:15:16.842 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:16.842 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:16.842 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:16.842 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.842 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.842 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.842 14:28:57 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:16.842 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.842 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:15:16.842 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:16.842 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:16.842 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:16.842 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:16.842 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:16.842 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:16.842 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:16.842 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:16.842 14:28:57 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:16.842 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:16.842 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:16.842 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:16.842 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:16.842 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:16.842 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:16.842 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=3350182 00:15:16.842 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 3350182' 00:15:16.842 Process pid: 3350182 00:15:16.842 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:16.842 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:16.842 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 3350182 00:15:16.842 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 3350182 ']' 00:15:16.842 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:16.842 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:16.842 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:16.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:16.842 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:16.842 14:28:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:16.842 [2024-10-14 14:28:57.428011] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:15:16.842 [2024-10-14 14:28:57.428118] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:16.842 [2024-10-14 14:28:57.494672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:16.842 [2024-10-14 14:28:57.537194] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:16.843 [2024-10-14 14:28:57.537245] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:16.843 [2024-10-14 14:28:57.537257] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:16.843 [2024-10-14 14:28:57.537264] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:16.843 [2024-10-14 14:28:57.537270] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:16.843 [2024-10-14 14:28:57.538895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:16.843 [2024-10-14 14:28:57.539034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:16.843 [2024-10-14 14:28:57.539036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:17.783 14:28:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:17.783 14:28:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0 00:15:17.783 14:28:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:18.725 14:28:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:18.725 14:28:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:18.725 14:28:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:18.725 14:28:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.725 14:28:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:18.725 14:28:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.725 14:28:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:18.726 14:28:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:18.726 14:28:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.726 14:28:59 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:18.726 malloc0 00:15:18.726 14:28:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.726 14:28:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:15:18.726 14:28:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.726 14:28:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:18.726 14:28:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.726 14:28:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:18.726 14:28:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.726 14:28:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:18.726 14:28:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.726 14:28:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:18.726 14:28:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.726 14:28:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:18.726 14:28:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:15:18.726 14:28:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:18.726 00:15:18.726 00:15:18.726 CUnit - A unit testing framework for C - Version 2.1-3 00:15:18.726 http://cunit.sourceforge.net/ 00:15:18.726 00:15:18.726 00:15:18.726 Suite: nvme_compliance 00:15:18.987 Test: admin_identify_ctrlr_verify_dptr ...[2024-10-14 14:28:59.488508] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:18.987 [2024-10-14 14:28:59.489864] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:18.987 [2024-10-14 14:28:59.489875] vfio_user.c:5507:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:18.987 [2024-10-14 14:28:59.489879] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:18.987 [2024-10-14 14:28:59.491527] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:18.987 passed 00:15:18.987 Test: admin_identify_ctrlr_verify_fused ...[2024-10-14 14:28:59.589131] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:18.987 [2024-10-14 14:28:59.593152] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:18.987 passed 00:15:18.987 Test: admin_identify_ns ...[2024-10-14 14:28:59.692676] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:19.248 [2024-10-14 14:28:59.751152] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:19.248 [2024-10-14 14:28:59.760074] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:19.248 [2024-10-14 14:28:59.781190] vfio_user.c:2798:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:15:19.248 passed 00:15:19.248 Test: admin_get_features_mandatory_features ...[2024-10-14 14:28:59.873808] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:19.248 [2024-10-14 14:28:59.876830] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:19.248 passed 00:15:19.248 Test: admin_get_features_optional_features ...[2024-10-14 14:28:59.970396] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:19.248 [2024-10-14 14:28:59.973415] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:19.509 passed 00:15:19.509 Test: admin_set_features_number_of_queues ...[2024-10-14 14:29:00.069654] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:19.509 [2024-10-14 14:29:00.174196] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:19.509 passed 00:15:19.770 Test: admin_get_log_page_mandatory_logs ...[2024-10-14 14:29:00.268186] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:19.770 [2024-10-14 14:29:00.271208] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:19.770 passed 00:15:19.770 Test: admin_get_log_page_with_lpo ...[2024-10-14 14:29:00.365698] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:19.770 [2024-10-14 14:29:00.433072] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:19.770 [2024-10-14 14:29:00.446118] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:19.770 passed 00:15:20.031 Test: fabric_property_get ...[2024-10-14 14:29:00.543214] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:20.031 [2024-10-14 14:29:00.544453] vfio_user.c:5600:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:15:20.031 [2024-10-14 14:29:00.546233] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:20.031 passed 00:15:20.031 Test: admin_delete_io_sq_use_admin_qid ...[2024-10-14 14:29:00.641020] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:20.031 [2024-10-14 14:29:00.642280] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:20.031 [2024-10-14 14:29:00.644043] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:20.031 passed 00:15:20.031 Test: admin_delete_io_sq_delete_sq_twice ...[2024-10-14 14:29:00.737155] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:20.292 [2024-10-14 14:29:00.823071] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:20.292 [2024-10-14 14:29:00.839068] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:20.292 [2024-10-14 14:29:00.844171] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:20.292 passed 00:15:20.292 Test: admin_delete_io_cq_use_admin_qid ...[2024-10-14 14:29:00.936775] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:20.292 [2024-10-14 14:29:00.938019] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:15:20.292 [2024-10-14 14:29:00.939794] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:20.292 passed 00:15:20.553 Test: admin_delete_io_cq_delete_cq_first ...[2024-10-14 14:29:01.031317] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:20.553 [2024-10-14 14:29:01.106072] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:20.553 [2024-10-14 
14:29:01.130070] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:20.553 [2024-10-14 14:29:01.135146] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:20.553 passed 00:15:20.553 Test: admin_create_io_cq_verify_iv_pc ...[2024-10-14 14:29:01.231195] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:20.553 [2024-10-14 14:29:01.232446] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:20.553 [2024-10-14 14:29:01.232466] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:20.553 [2024-10-14 14:29:01.234212] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:20.553 passed 00:15:20.813 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-10-14 14:29:01.326307] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:20.813 [2024-10-14 14:29:01.418068] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:15:20.813 [2024-10-14 14:29:01.426070] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:20.813 [2024-10-14 14:29:01.434072] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:20.813 [2024-10-14 14:29:01.442074] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:20.813 [2024-10-14 14:29:01.474174] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:20.813 passed 00:15:21.073 Test: admin_create_io_sq_verify_pc ...[2024-10-14 14:29:01.563774] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:21.073 [2024-10-14 14:29:01.579076] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:21.073 [2024-10-14 14:29:01.596930] 
vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:21.073 passed 00:15:21.073 Test: admin_create_io_qp_max_qps ...[2024-10-14 14:29:01.690459] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:22.458 [2024-10-14 14:29:02.810073] nvme_ctrlr.c:5504:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:15:22.719 [2024-10-14 14:29:03.193430] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:22.719 passed 00:15:22.719 Test: admin_create_io_sq_shared_cq ...[2024-10-14 14:29:03.291627] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:22.719 [2024-10-14 14:29:03.423069] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:22.980 [2024-10-14 14:29:03.460147] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:22.980 passed 00:15:22.980 00:15:22.980 Run Summary: Type Total Ran Passed Failed Inactive 00:15:22.980 suites 1 1 n/a 0 0 00:15:22.980 tests 18 18 18 0 0 00:15:22.980 asserts 360 360 360 0 n/a 00:15:22.980 00:15:22.980 Elapsed time = 1.671 seconds 00:15:22.980 14:29:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 3350182 00:15:22.980 14:29:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 3350182 ']' 00:15:22.980 14:29:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 3350182 00:15:22.980 14:29:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname 00:15:22.980 14:29:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:22.980 14:29:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3350182 00:15:22.980 14:29:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:22.980 14:29:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:22.980 14:29:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3350182' 00:15:22.980 killing process with pid 3350182 00:15:22.980 14:29:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 3350182 00:15:22.980 14:29:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 3350182 00:15:22.980 14:29:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:23.242 14:29:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:23.242 00:15:23.242 real 0m6.582s 00:15:23.242 user 0m18.697s 00:15:23.242 sys 0m0.554s 00:15:23.242 14:29:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:23.242 14:29:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:23.242 ************************************ 00:15:23.242 END TEST nvmf_vfio_user_nvme_compliance 00:15:23.242 ************************************ 00:15:23.242 14:29:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:23.242 14:29:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:23.242 14:29:03 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:15:23.242 14:29:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:23.242 ************************************ 00:15:23.242 START TEST nvmf_vfio_user_fuzz 00:15:23.242 ************************************ 00:15:23.242 14:29:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:23.242 * Looking for test storage... 00:15:23.242 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:23.242 14:29:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:23.242 14:29:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:15:23.242 14:29:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:23.242 14:29:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:23.503 14:29:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:23.503 14:29:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:23.503 14:29:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:23.503 14:29:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:15:23.503 14:29:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:15:23.503 14:29:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:15:23.503 14:29:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:15:23.503 14:29:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:15:23.503 14:29:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:15:23.503 14:29:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:15:23.503 14:29:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:23.503 14:29:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:15:23.503 14:29:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:15:23.503 14:29:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:23.503 14:29:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:23.503 14:29:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:15:23.503 14:29:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:15:23.503 14:29:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:23.503 14:29:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:15:23.503 14:29:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:15:23.503 14:29:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:15:23.503 14:29:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:15:23.503 14:29:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:23.503 14:29:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:15:23.503 14:29:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:15:23.503 14:29:03 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:23.503 14:29:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:23.503 14:29:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:15:23.503 14:29:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:23.503 14:29:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:23.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.503 --rc genhtml_branch_coverage=1 00:15:23.503 --rc genhtml_function_coverage=1 00:15:23.503 --rc genhtml_legend=1 00:15:23.503 --rc geninfo_all_blocks=1 00:15:23.503 --rc geninfo_unexecuted_blocks=1 00:15:23.503 00:15:23.503 ' 00:15:23.503 14:29:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:23.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.503 --rc genhtml_branch_coverage=1 00:15:23.503 --rc genhtml_function_coverage=1 00:15:23.503 --rc genhtml_legend=1 00:15:23.503 --rc geninfo_all_blocks=1 00:15:23.503 --rc geninfo_unexecuted_blocks=1 00:15:23.503 00:15:23.503 ' 00:15:23.503 14:29:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:23.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.503 --rc genhtml_branch_coverage=1 00:15:23.503 --rc genhtml_function_coverage=1 00:15:23.503 --rc genhtml_legend=1 00:15:23.503 --rc geninfo_all_blocks=1 00:15:23.503 --rc geninfo_unexecuted_blocks=1 00:15:23.503 00:15:23.503 ' 00:15:23.503 14:29:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:23.503 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:15:23.503 --rc genhtml_branch_coverage=1 00:15:23.503 --rc genhtml_function_coverage=1 00:15:23.503 --rc genhtml_legend=1 00:15:23.503 --rc geninfo_all_blocks=1 00:15:23.503 --rc geninfo_unexecuted_blocks=1 00:15:23.503 00:15:23.503 ' 00:15:23.503 14:29:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:23.503 14:29:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:23.503 14:29:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:23.503 14:29:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:23.503 14:29:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:23.504 14:29:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:23.504 14:29:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:23.504 14:29:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:23.504 14:29:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:23.504 14:29:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:23.504 14:29:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:23.504 14:29:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:23.504 14:29:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:23.504 14:29:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:23.504 14:29:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:23.504 14:29:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:23.504 14:29:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:23.504 14:29:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:23.504 14:29:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:23.504 14:29:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:15:23.504 14:29:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:23.504 14:29:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:23.504 14:29:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:23.504 14:29:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.504 14:29:04 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.504 14:29:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.504 14:29:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:15:23.504 14:29:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.504 14:29:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:15:23.504 14:29:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:23.504 14:29:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:23.504 14:29:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:23.504 14:29:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:23.504 14:29:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:23.504 14:29:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:23.504 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:23.504 14:29:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:23.504 14:29:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:23.504 14:29:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:23.504 14:29:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:15:23.504 14:29:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:23.504 14:29:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:23.504 14:29:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:23.504 14:29:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:23.504 14:29:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:23.504 14:29:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:23.504 14:29:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3351583 00:15:23.504 14:29:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3351583' 00:15:23.504 Process pid: 3351583 00:15:23.504 14:29:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:23.504 14:29:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:23.504 14:29:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3351583 00:15:23.504 14:29:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 3351583 ']' 00:15:23.504 14:29:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:23.504 14:29:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:23.504 14:29:04 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:23.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:23.504 14:29:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:23.504 14:29:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:23.764 14:29:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:23.764 14:29:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0 00:15:23.764 14:29:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:24.705 14:29:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:24.705 14:29:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.705 14:29:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:24.705 14:29:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.705 14:29:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:24.705 14:29:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:24.705 14:29:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.705 14:29:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:24.705 malloc0 00:15:24.705 14:29:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.705 14:29:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:24.705 14:29:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.705 14:29:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:24.705 14:29:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.705 14:29:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:24.705 14:29:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.705 14:29:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:24.705 14:29:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.705 14:29:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:24.705 14:29:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.705 14:29:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:24.705 14:29:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.705 14:29:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:15:24.705 14:29:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:15:56.840 Fuzzing completed. Shutting down the fuzz application 00:15:56.840 00:15:56.841 Dumping successful admin opcodes: 00:15:56.841 8, 9, 10, 24, 00:15:56.841 Dumping successful io opcodes: 00:15:56.841 0, 00:15:56.841 NS: 0x20000081ef00 I/O qp, Total commands completed: 1178224, total successful commands: 4628, random_seed: 368385600 00:15:56.841 NS: 0x20000081ef00 admin qp, Total commands completed: 148136, total successful commands: 1194, random_seed: 727341248 00:15:56.841 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:15:56.841 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.841 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:56.841 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.841 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 3351583 00:15:56.841 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 3351583 ']' 00:15:56.841 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 3351583 00:15:56.841 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname 00:15:56.841 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:56.841 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3351583 00:15:56.841 14:29:35 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:56.841 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:56.841 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3351583' 00:15:56.841 killing process with pid 3351583 00:15:56.841 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 3351583 00:15:56.841 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 3351583 00:15:56.841 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:15:56.841 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:15:56.841 00:15:56.841 real 0m32.134s 00:15:56.841 user 0m37.071s 00:15:56.841 sys 0m23.903s 00:15:56.841 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:56.841 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:56.841 ************************************ 00:15:56.841 END TEST nvmf_vfio_user_fuzz 00:15:56.841 ************************************ 00:15:56.841 14:29:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:56.841 14:29:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:56.841 14:29:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 
00:15:56.841 14:29:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:56.841 ************************************ 00:15:56.841 START TEST nvmf_auth_target 00:15:56.841 ************************************ 00:15:56.841 14:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:56.841 * Looking for test storage... 00:15:56.841 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:56.841 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:56.841 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lcov --version 00:15:56.841 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:56.841 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:56.841 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:56.841 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:56.841 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:56.841 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:15:56.841 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:15:56.841 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:15:56.841 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:15:56.841 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:15:56.841 14:29:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:15:56.841 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:15:56.841 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:56.841 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:15:56.841 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:15:56.841 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:56.841 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:56.841 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:15:56.841 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:15:56.841 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:56.841 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:15:56.841 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:15:56.841 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:15:56.841 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:15:56.841 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:56.841 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:15:56.841 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:15:56.841 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:56.841 14:29:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:56.841 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:15:56.841 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:56.841 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:56.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:56.841 --rc genhtml_branch_coverage=1 00:15:56.841 --rc genhtml_function_coverage=1 00:15:56.841 --rc genhtml_legend=1 00:15:56.841 --rc geninfo_all_blocks=1 00:15:56.841 --rc geninfo_unexecuted_blocks=1 00:15:56.841 00:15:56.841 ' 00:15:56.841 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:56.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:56.841 --rc genhtml_branch_coverage=1 00:15:56.841 --rc genhtml_function_coverage=1 00:15:56.841 --rc genhtml_legend=1 00:15:56.841 --rc geninfo_all_blocks=1 00:15:56.841 --rc geninfo_unexecuted_blocks=1 00:15:56.841 00:15:56.841 ' 00:15:56.841 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:56.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:56.841 --rc genhtml_branch_coverage=1 00:15:56.841 --rc genhtml_function_coverage=1 00:15:56.841 --rc genhtml_legend=1 00:15:56.841 --rc geninfo_all_blocks=1 00:15:56.841 --rc geninfo_unexecuted_blocks=1 00:15:56.841 00:15:56.841 ' 00:15:56.841 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:56.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:56.841 --rc genhtml_branch_coverage=1 00:15:56.841 --rc genhtml_function_coverage=1 00:15:56.841 --rc genhtml_legend=1 00:15:56.841 
--rc geninfo_all_blocks=1 00:15:56.841 --rc geninfo_unexecuted_blocks=1 00:15:56.841 00:15:56.841 ' 00:15:56.841 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:56.841 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:15:56.841 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:56.841 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:56.841 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:56.841 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:56.841 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:56.841 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:56.841 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:56.841 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:56.841 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:56.841 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:56.841 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:56.841 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:56.841 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:56.841 
14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:56.841 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:56.841 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:56.841 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:56.841 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:15:56.841 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:56.841 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:56.841 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:56.842 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.842 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.842 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.842 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:15:56.842 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.842 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:15:56.842 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:56.842 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:56.842 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:56.842 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:56.842 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:56.842 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:56.842 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:56.842 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:56.842 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:56.842 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:56.842 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:15:56.842 14:29:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:15:56.842 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:15:56.842 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:56.842 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:15:56.842 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:15:56.842 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:15:56.842 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:15:56.842 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:15:56.842 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:56.842 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:15:56.842 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:15:56.842 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:15:56.842 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:56.842 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:56.842 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:56.842 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:15:56.842 14:29:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:15:56.842 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:15:56.842 14:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.435 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:03.435 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:16:03.435 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:03.435 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:03.435 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:03.435 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:03.435 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:03.435 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:16:03.435 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:03.435 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:16:03.435 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:16:03.435 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:16:03.435 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:16:03.435 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:16:03.435 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:16:03.435 14:29:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:03.435 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:03.435 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:03.435 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:03.435 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:03.435 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:03.435 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:03.435 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:03.435 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:03.435 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:03.435 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:03.435 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:03.435 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:03.435 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:03.435 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:03.435 14:29:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:03.435 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:03.435 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:03.435 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:03.435 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:03.435 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:03.435 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:03.435 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:03.435 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:03.435 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:03.435 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:03.435 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:03.435 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:03.435 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:03.435 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:03.435 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:03.435 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:03.435 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:03.435 
14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:03.435 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:03.435 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:03.435 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:03.435 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:03.435 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:03.435 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:16:03.435 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:03.435 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:16:03.435 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:03.435 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:03.435 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:03.435 Found net devices under 0000:31:00.0: cvl_0_0 00:16:03.435 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:16:03.435 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:03.435 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:03.435 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:16:03.435 
14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:03.435 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:16:03.435 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:03.436 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:03.436 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:03.436 Found net devices under 0000:31:00.1: cvl_0_1 00:16:03.436 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:16:03.436 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:16:03.436 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # is_hw=yes 00:16:03.436 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:16:03.436 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:16:03.436 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:16:03.436 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:03.436 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:03.436 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:03.436 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:03.436 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:03.436 14:29:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:03.436 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:03.436 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:03.436 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:03.436 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:03.436 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:03.436 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:03.436 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:03.436 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:03.436 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:03.436 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:03.436 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:03.436 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:03.436 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:03.436 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:03.436 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:03.436 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:03.436 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:03.436 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:03.436 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.565 ms 00:16:03.436 00:16:03.436 --- 10.0.0.2 ping statistics --- 00:16:03.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:03.436 rtt min/avg/max/mdev = 0.565/0.565/0.565/0.000 ms 00:16:03.436 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:03.436 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:03.436 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.312 ms 00:16:03.436 00:16:03.436 --- 10.0.0.1 ping statistics --- 00:16:03.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:03.436 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:16:03.436 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:03.436 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # return 0 00:16:03.436 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:16:03.436 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:03.436 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:16:03.436 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:16:03.436 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:03.436 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:16:03.436 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:16:03.436 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:16:03.436 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:16:03.436 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:03.436 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.436 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=3361489 00:16:03.436 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 3361489 00:16:03.436 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:16:03.436 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3361489 ']' 00:16:03.436 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:03.436 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:03.436 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:03.436 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:03.436 14:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.006 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:04.006 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:04.006 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:16:04.006 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:04.006 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.006 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:04.006 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=3361661 00:16:04.006 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:04.006 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:16:04.006 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:16:04.006 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:16:04.006 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:04.006 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:04.006 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@752 -- # digest=null 00:16:04.006 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:16:04.006 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:04.006 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=cf92019c67a07fee049862823eaf461e69f7da492a34e34a 00:16:04.006 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:16:04.006 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.gO9 00:16:04.006 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key cf92019c67a07fee049862823eaf461e69f7da492a34e34a 0 00:16:04.006 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 cf92019c67a07fee049862823eaf461e69f7da492a34e34a 0 00:16:04.006 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:04.006 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:04.006 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=cf92019c67a07fee049862823eaf461e69f7da492a34e34a 00:16:04.006 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=0 00:16:04.006 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:16:04.006 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.gO9 00:16:04.006 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.gO9 00:16:04.006 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.gO9 00:16:04.267 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:16:04.267 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:16:04.267 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:04.267 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:04.267 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:16:04.267 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:16:04.267 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:04.267 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=392ab2d3483e1da641c5b6a393f15c64717ac91aafaef8a90ab73faab7cd9fcf 00:16:04.267 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:16:04.267 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.RaW 00:16:04.267 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 392ab2d3483e1da641c5b6a393f15c64717ac91aafaef8a90ab73faab7cd9fcf 3 00:16:04.267 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 392ab2d3483e1da641c5b6a393f15c64717ac91aafaef8a90ab73faab7cd9fcf 3 00:16:04.267 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:04.267 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:04.267 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=392ab2d3483e1da641c5b6a393f15c64717ac91aafaef8a90ab73faab7cd9fcf 00:16:04.267 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@730 -- # digest=3 00:16:04.267 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:16:04.267 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.RaW 00:16:04.267 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.RaW 00:16:04.267 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.RaW 00:16:04.267 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:16:04.267 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:16:04.267 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:04.267 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:04.267 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 00:16:04.267 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:16:04.267 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:04.267 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=0cf21c30bea57ada8848b2ef5559af65 00:16:04.268 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:16:04.268 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.iyo 00:16:04.268 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 0cf21c30bea57ada8848b2ef5559af65 1 00:16:04.268 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 
0cf21c30bea57ada8848b2ef5559af65 1 00:16:04.268 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:04.268 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:04.268 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=0cf21c30bea57ada8848b2ef5559af65 00:16:04.268 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 00:16:04.268 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:16:04.268 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.iyo 00:16:04.268 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.iyo 00:16:04.268 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.iyo 00:16:04.268 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:16:04.268 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:16:04.268 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:04.268 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:04.268 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:16:04.268 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:16:04.268 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:04.268 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=9ed70bf1941868a5620ead59edfd92d3e4344521bf906b63 00:16:04.268 14:29:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:16:04.268 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.E7i 00:16:04.268 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 9ed70bf1941868a5620ead59edfd92d3e4344521bf906b63 2 00:16:04.268 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 9ed70bf1941868a5620ead59edfd92d3e4344521bf906b63 2 00:16:04.268 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:04.268 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:04.268 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=9ed70bf1941868a5620ead59edfd92d3e4344521bf906b63 00:16:04.268 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:16:04.268 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:16:04.268 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.E7i 00:16:04.268 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.E7i 00:16:04.268 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.E7i 00:16:04.268 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:16:04.268 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:16:04.268 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:04.268 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A 
digests 00:16:04.268 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:16:04.268 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:16:04.268 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:04.268 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=c6f57dcebea88efab03035bc44b35fd9c54640d00dfd6faa 00:16:04.268 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:16:04.268 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.uWk 00:16:04.268 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key c6f57dcebea88efab03035bc44b35fd9c54640d00dfd6faa 2 00:16:04.268 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 c6f57dcebea88efab03035bc44b35fd9c54640d00dfd6faa 2 00:16:04.268 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:04.268 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:04.268 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=c6f57dcebea88efab03035bc44b35fd9c54640d00dfd6faa 00:16:04.268 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:16:04.268 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:16:04.268 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.uWk 00:16:04.268 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.uWk 00:16:04.268 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.uWk 00:16:04.268 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:16:04.268 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:16:04.268 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:04.268 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:04.268 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 00:16:04.268 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:16:04.268 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:04.268 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=204c075cc0aec0530262c77781bc3250 00:16:04.268 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:16:04.531 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.7i4 00:16:04.531 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 204c075cc0aec0530262c77781bc3250 1 00:16:04.532 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 204c075cc0aec0530262c77781bc3250 1 00:16:04.532 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:04.532 14:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:04.532 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=204c075cc0aec0530262c77781bc3250 00:16:04.532 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 
00:16:04.532 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:16:04.532 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.7i4 00:16:04.532 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.7i4 00:16:04.532 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.7i4 00:16:04.532 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:16:04.532 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:16:04.532 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:04.532 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:04.532 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:16:04.532 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:16:04.532 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:04.532 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=0c59cb5a9a4d21530bf7cf7bd507e0d28d9485b6798c73f80b034dd2b4405229 00:16:04.532 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:16:04.532 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.6ZA 00:16:04.532 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 0c59cb5a9a4d21530bf7cf7bd507e0d28d9485b6798c73f80b034dd2b4405229 3 00:16:04.532 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # 
format_key DHHC-1 0c59cb5a9a4d21530bf7cf7bd507e0d28d9485b6798c73f80b034dd2b4405229 3 00:16:04.532 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:04.532 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:04.532 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=0c59cb5a9a4d21530bf7cf7bd507e0d28d9485b6798c73f80b034dd2b4405229 00:16:04.532 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=3 00:16:04.532 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:16:04.532 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.6ZA 00:16:04.532 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.6ZA 00:16:04.532 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.6ZA 00:16:04.532 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:16:04.532 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 3361489 00:16:04.532 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3361489 ']' 00:16:04.532 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:04.532 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:04.532 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:04.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
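The key-generation entries traced above (gen_dhchap_key via `xxd -p -c0 -l N /dev/urandom`, then format_dhchap_key / format_key with the `DHHC-1` prefix and an inline `python -` step) can be sketched as follows. This is a reconstruction inferred from the trace, not SPDK's actual nvmf/common.sh source: the assumption, consistent with the DHHC-1 secrets visible in this log (e.g. the `DHHC-1:01:MjA0...` value, whose base64 body decodes back to the ASCII hex key), is that the hex string itself is the secret and a little-endian CRC-32 of it is appended before base64 encoding, per the NVMe DH-HMAC-CHAP secret representation.

```python
import base64
import secrets
import zlib

def gen_dhchap_key(nbytes: int) -> str:
    # Equivalent of the log's `xxd -p -c0 -l <nbytes> /dev/urandom`:
    # a lowercase hex string twice as long as the random byte count.
    return secrets.token_hex(nbytes)

def format_dhchap_key(key: str, digest: int, prefix: str = "DHHC-1") -> str:
    # Reconstruction (an assumption) of the inline `python -` step traced
    # at nvmf/common.sh@731: the ASCII hex string is the secret; a
    # little-endian CRC-32 of it is appended before base64 encoding.
    data = key.encode("ascii")
    crc = zlib.crc32(data).to_bytes(4, "little")
    return "{}:{:02x}:{}:".format(prefix, digest, base64.b64encode(data + crc).decode())
```

The digest index matches the associative array in the trace (`null`=0, `sha256`=1, `sha384`=2, `sha512`=3), so a 48-hex-character key formatted with digest 2 yields a `DHHC-1:02:...:` secret like the `/tmp/spdk.key-sha384.*` files written above.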
00:16:04.532 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:04.532 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.793 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:04.793 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:04.793 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 3361661 /var/tmp/host.sock 00:16:04.793 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3361661 ']' 00:16:04.793 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:16:04.793 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:04.793 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:04.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
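The connect_authenticate steps further down in this run verify the negotiated session by piping `nvmf_subsystem_get_qpairs` output through jq (`.[0].auth.digest`, `.[0].auth.dhgroup`, `.[0].auth.state`). A minimal Python sketch of the same check, using a qpair record trimmed to the fields the test inspects (shape taken from the JSON printed later in this log):

```python
import json

# Qpair record shaped like the nvmf_subsystem_get_qpairs output in this
# log, trimmed to the fields the auth checks actually read.
qpairs_json = """
[
  {
    "cntlid": 1,
    "qid": 0,
    "state": "enabled",
    "auth": {
      "state": "completed",
      "digest": "sha256",
      "dhgroup": "null"
    }
  }
]
"""

def check_auth(qpairs: str, digest: str, dhgroup: str) -> bool:
    # Mirrors the jq checks: digest and dhgroup must match the values the
    # host was configured with, and the auth state must be "completed".
    auth = json.loads(qpairs)[0]["auth"]
    return (
        auth["digest"] == digest
        and auth["dhgroup"] == dhgroup
        and auth["state"] == "completed"
    )
```

For the first sha256/null iteration below, `check_auth(qpairs_json, "sha256", "null")` holds; any mismatch in digest, dhgroup, or an incomplete auth state fails the check, which is what makes each loop iteration a pass/fail probe of one digest/dhgroup/key combination.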
00:16:04.793 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:04.793 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.793 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:04.793 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:04.793 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:16:04.793 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.793 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.793 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.793 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:04.793 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.gO9 00:16:04.793 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.793 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.793 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.793 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.gO9 00:16:04.793 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.gO9 00:16:05.053 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.RaW ]] 00:16:05.053 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.RaW 00:16:05.053 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.053 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.053 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.053 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.RaW 00:16:05.053 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.RaW 00:16:05.315 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:05.315 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.iyo 00:16:05.315 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.315 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.315 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.315 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.iyo 00:16:05.315 14:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.iyo 00:16:05.315 14:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.E7i ]] 00:16:05.315 14:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.E7i 00:16:05.315 14:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.315 14:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.575 14:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.575 14:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.E7i 00:16:05.575 14:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.E7i 00:16:05.575 14:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:05.575 14:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.uWk 00:16:05.575 14:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.575 14:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.575 14:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.575 14:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.uWk 00:16:05.575 14:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.uWk 00:16:05.836 14:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.7i4 ]] 00:16:05.836 14:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.7i4 00:16:05.836 14:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.836 14:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.836 14:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.836 14:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.7i4 00:16:05.836 14:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.7i4 00:16:05.836 14:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:05.836 14:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.6ZA 00:16:05.836 14:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.836 14:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.836 14:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.836 14:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.6ZA 00:16:05.836 14:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.6ZA 00:16:06.097 14:29:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:16:06.097 14:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:06.097 14:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:06.097 14:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:06.097 14:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:06.097 14:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:06.357 14:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:16:06.357 14:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:06.357 14:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:06.357 14:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:06.357 14:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:06.357 14:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:06.357 14:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:06.357 14:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.357 14:29:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.357 14:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.357 14:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:06.357 14:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:06.357 14:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:06.618 00:16:06.618 14:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:06.618 14:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:06.618 14:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:06.618 14:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:06.618 14:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:06.618 14:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.618 14:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:06.618 14:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.618 14:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:06.618 { 00:16:06.618 "cntlid": 1, 00:16:06.618 "qid": 0, 00:16:06.618 "state": "enabled", 00:16:06.618 "thread": "nvmf_tgt_poll_group_000", 00:16:06.618 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:06.618 "listen_address": { 00:16:06.618 "trtype": "TCP", 00:16:06.618 "adrfam": "IPv4", 00:16:06.618 "traddr": "10.0.0.2", 00:16:06.618 "trsvcid": "4420" 00:16:06.618 }, 00:16:06.618 "peer_address": { 00:16:06.618 "trtype": "TCP", 00:16:06.618 "adrfam": "IPv4", 00:16:06.618 "traddr": "10.0.0.1", 00:16:06.618 "trsvcid": "38840" 00:16:06.618 }, 00:16:06.618 "auth": { 00:16:06.618 "state": "completed", 00:16:06.618 "digest": "sha256", 00:16:06.618 "dhgroup": "null" 00:16:06.618 } 00:16:06.618 } 00:16:06.618 ]' 00:16:06.618 14:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:06.878 14:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:06.878 14:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:06.878 14:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:06.878 14:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:06.878 14:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:06.878 14:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:06.878 14:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:07.139 14:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2Y5MjAxOWM2N2EwN2ZlZTA0OTg2MjgyM2VhZjQ2MWU2OWY3ZGE0OTJhMzRlMzRh/iBJUQ==: --dhchap-ctrl-secret DHHC-1:03:MzkyYWIyZDM0ODNlMWRhNjQxYzViNmEzOTNmMTVjNjQ3MTdhYzkxYWFmYWVmOGE5MGFiNzNmYWFiN2NkOWZjZr3RYkI=: 00:16:07.139 14:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:Y2Y5MjAxOWM2N2EwN2ZlZTA0OTg2MjgyM2VhZjQ2MWU2OWY3ZGE0OTJhMzRlMzRh/iBJUQ==: --dhchap-ctrl-secret DHHC-1:03:MzkyYWIyZDM0ODNlMWRhNjQxYzViNmEzOTNmMTVjNjQ3MTdhYzkxYWFmYWVmOGE5MGFiNzNmYWFiN2NkOWZjZr3RYkI=: 00:16:07.709 14:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:07.709 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:07.709 14:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:07.709 14:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.709 14:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.709 14:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.709 14:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:07.709 14:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:16:07.709 14:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:07.970 14:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:16:07.970 14:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:07.970 14:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:07.970 14:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:07.970 14:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:07.970 14:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:07.970 14:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:07.970 14:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.970 14:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.970 14:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.970 14:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:07.970 14:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:07.970 14:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:08.230 00:16:08.230 14:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:08.230 14:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:08.230 14:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:08.490 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:08.490 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:08.490 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.490 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.490 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.490 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:08.490 { 00:16:08.490 "cntlid": 3, 00:16:08.490 "qid": 0, 00:16:08.490 "state": "enabled", 00:16:08.490 "thread": "nvmf_tgt_poll_group_000", 00:16:08.491 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:08.491 "listen_address": { 00:16:08.491 "trtype": "TCP", 00:16:08.491 "adrfam": "IPv4", 00:16:08.491 
"traddr": "10.0.0.2", 00:16:08.491 "trsvcid": "4420" 00:16:08.491 }, 00:16:08.491 "peer_address": { 00:16:08.491 "trtype": "TCP", 00:16:08.491 "adrfam": "IPv4", 00:16:08.491 "traddr": "10.0.0.1", 00:16:08.491 "trsvcid": "38856" 00:16:08.491 }, 00:16:08.491 "auth": { 00:16:08.491 "state": "completed", 00:16:08.491 "digest": "sha256", 00:16:08.491 "dhgroup": "null" 00:16:08.491 } 00:16:08.491 } 00:16:08.491 ]' 00:16:08.491 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:08.491 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:08.491 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:08.491 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:08.491 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:08.491 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:08.491 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:08.491 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:08.750 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGNmMjFjMzBiZWE1N2FkYTg4NDhiMmVmNTU1OWFmNjV3h9ZG: --dhchap-ctrl-secret DHHC-1:02:OWVkNzBiZjE5NDE4NjhhNTYyMGVhZDU5ZWRmZDkyZDNlNDM0NDUyMWJmOTA2YjYzsHc8qg==: 00:16:08.750 14:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 
--hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MGNmMjFjMzBiZWE1N2FkYTg4NDhiMmVmNTU1OWFmNjV3h9ZG: --dhchap-ctrl-secret DHHC-1:02:OWVkNzBiZjE5NDE4NjhhNTYyMGVhZDU5ZWRmZDkyZDNlNDM0NDUyMWJmOTA2YjYzsHc8qg==: 00:16:09.690 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:09.690 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:09.690 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:09.690 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.690 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.690 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.690 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:09.690 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:09.690 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:09.690 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:16:09.690 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:09.690 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:09.690 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:16:09.690 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:09.690 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:09.690 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:09.690 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.690 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.690 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.690 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:09.690 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:09.690 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:09.950 00:16:09.950 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:09.950 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:09.950 
14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:10.210 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.210 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:10.210 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.210 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.210 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.211 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:10.211 { 00:16:10.211 "cntlid": 5, 00:16:10.211 "qid": 0, 00:16:10.211 "state": "enabled", 00:16:10.211 "thread": "nvmf_tgt_poll_group_000", 00:16:10.211 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:10.211 "listen_address": { 00:16:10.211 "trtype": "TCP", 00:16:10.211 "adrfam": "IPv4", 00:16:10.211 "traddr": "10.0.0.2", 00:16:10.211 "trsvcid": "4420" 00:16:10.211 }, 00:16:10.211 "peer_address": { 00:16:10.211 "trtype": "TCP", 00:16:10.211 "adrfam": "IPv4", 00:16:10.211 "traddr": "10.0.0.1", 00:16:10.211 "trsvcid": "38880" 00:16:10.211 }, 00:16:10.211 "auth": { 00:16:10.211 "state": "completed", 00:16:10.211 "digest": "sha256", 00:16:10.211 "dhgroup": "null" 00:16:10.211 } 00:16:10.211 } 00:16:10.211 ]' 00:16:10.211 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:10.211 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:10.211 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 
-- # jq -r '.[0].auth.dhgroup' 00:16:10.211 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:10.211 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:10.211 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:10.211 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:10.211 14:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:10.471 14:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzZmNTdkY2ViZWE4OGVmYWIwMzAzNWJjNDRiMzVmZDljNTQ2NDBkMDBkZmQ2ZmFhX3APDw==: --dhchap-ctrl-secret DHHC-1:01:MjA0YzA3NWNjMGFlYzA1MzAyNjJjNzc3ODFiYzMyNTCUFf2M: 00:16:10.471 14:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:YzZmNTdkY2ViZWE4OGVmYWIwMzAzNWJjNDRiMzVmZDljNTQ2NDBkMDBkZmQ2ZmFhX3APDw==: --dhchap-ctrl-secret DHHC-1:01:MjA0YzA3NWNjMGFlYzA1MzAyNjJjNzc3ODFiYzMyNTCUFf2M: 00:16:11.441 14:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:11.441 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:11.441 14:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:11.441 14:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.441 14:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.441 14:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.441 14:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:11.441 14:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:11.441 14:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:11.441 14:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:16:11.441 14:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:11.441 14:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:11.442 14:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:11.442 14:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:11.442 14:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:11.442 14:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:16:11.442 14:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.442 14:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:16:11.442 14:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.442 14:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:11.442 14:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:11.442 14:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:11.760 00:16:11.760 14:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:11.760 14:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:11.760 14:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:11.760 14:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.760 14:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:11.760 14:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.760 14:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.053 14:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.053 
14:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:12.053 { 00:16:12.053 "cntlid": 7, 00:16:12.053 "qid": 0, 00:16:12.053 "state": "enabled", 00:16:12.053 "thread": "nvmf_tgt_poll_group_000", 00:16:12.053 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:12.053 "listen_address": { 00:16:12.053 "trtype": "TCP", 00:16:12.053 "adrfam": "IPv4", 00:16:12.053 "traddr": "10.0.0.2", 00:16:12.053 "trsvcid": "4420" 00:16:12.053 }, 00:16:12.053 "peer_address": { 00:16:12.053 "trtype": "TCP", 00:16:12.053 "adrfam": "IPv4", 00:16:12.053 "traddr": "10.0.0.1", 00:16:12.053 "trsvcid": "52686" 00:16:12.053 }, 00:16:12.053 "auth": { 00:16:12.053 "state": "completed", 00:16:12.053 "digest": "sha256", 00:16:12.053 "dhgroup": "null" 00:16:12.053 } 00:16:12.053 } 00:16:12.053 ]' 00:16:12.053 14:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:12.053 14:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:12.053 14:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:12.053 14:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:12.053 14:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:12.053 14:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.053 14:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.053 14:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:12.334 14:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGM1OWNiNWE5YTRkMjE1MzBiZjdjZjdiZDUwN2UwZDI4ZDk0ODViNjc5OGM3M2Y4MGIwMzRkZDJiNDQwNTIyOSElVmY=: 00:16:12.334 14:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MGM1OWNiNWE5YTRkMjE1MzBiZjdjZjdiZDUwN2UwZDI4ZDk0ODViNjc5OGM3M2Y4MGIwMzRkZDJiNDQwNTIyOSElVmY=: 00:16:12.903 14:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:12.903 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:12.903 14:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:12.903 14:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.903 14:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.903 14:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.903 14:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:12.903 14:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:12.903 14:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:12.903 14:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:16:13.164 14:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:16:13.164 14:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:13.164 14:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:13.164 14:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:13.164 14:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:13.164 14:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:13.164 14:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.164 14:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.164 14:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.164 14:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.164 14:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.164 14:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.164 14:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.424 00:16:13.425 14:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:13.425 14:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:13.425 14:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:13.685 14:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.685 14:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:13.685 14:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.685 14:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.685 14:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.685 14:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:13.685 { 00:16:13.685 "cntlid": 9, 00:16:13.685 "qid": 0, 00:16:13.685 "state": "enabled", 00:16:13.685 "thread": "nvmf_tgt_poll_group_000", 00:16:13.685 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:13.685 "listen_address": { 00:16:13.685 "trtype": "TCP", 00:16:13.685 "adrfam": "IPv4", 00:16:13.685 "traddr": "10.0.0.2", 00:16:13.685 "trsvcid": "4420" 00:16:13.685 }, 00:16:13.685 "peer_address": { 00:16:13.685 "trtype": "TCP", 00:16:13.685 "adrfam": "IPv4", 00:16:13.685 "traddr": "10.0.0.1", 00:16:13.685 "trsvcid": "52716" 00:16:13.685 
}, 00:16:13.685 "auth": { 00:16:13.685 "state": "completed", 00:16:13.685 "digest": "sha256", 00:16:13.685 "dhgroup": "ffdhe2048" 00:16:13.685 } 00:16:13.685 } 00:16:13.685 ]' 00:16:13.685 14:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:13.685 14:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:13.685 14:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:13.685 14:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:13.685 14:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:13.685 14:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:13.685 14:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:13.685 14:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:13.946 14:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2Y5MjAxOWM2N2EwN2ZlZTA0OTg2MjgyM2VhZjQ2MWU2OWY3ZGE0OTJhMzRlMzRh/iBJUQ==: --dhchap-ctrl-secret DHHC-1:03:MzkyYWIyZDM0ODNlMWRhNjQxYzViNmEzOTNmMTVjNjQ3MTdhYzkxYWFmYWVmOGE5MGFiNzNmYWFiN2NkOWZjZr3RYkI=: 00:16:13.946 14:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:Y2Y5MjAxOWM2N2EwN2ZlZTA0OTg2MjgyM2VhZjQ2MWU2OWY3ZGE0OTJhMzRlMzRh/iBJUQ==: --dhchap-ctrl-secret 
DHHC-1:03:MzkyYWIyZDM0ODNlMWRhNjQxYzViNmEzOTNmMTVjNjQ3MTdhYzkxYWFmYWVmOGE5MGFiNzNmYWFiN2NkOWZjZr3RYkI=: 00:16:14.517 14:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:14.777 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:14.777 14:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:14.777 14:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.777 14:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.777 14:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.777 14:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:14.777 14:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:14.777 14:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:14.777 14:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:16:14.777 14:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:14.777 14:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:14.777 14:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:14.777 14:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:16:14.777 14:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:14.777 14:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:14.777 14:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.777 14:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.777 14:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.777 14:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:14.777 14:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:14.777 14:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:15.038 00:16:15.038 14:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:15.038 14:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:15.038 14:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:15.298 14:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.298 14:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:15.298 14:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.298 14:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.298 14:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.298 14:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:15.298 { 00:16:15.298 "cntlid": 11, 00:16:15.298 "qid": 0, 00:16:15.298 "state": "enabled", 00:16:15.298 "thread": "nvmf_tgt_poll_group_000", 00:16:15.298 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:15.298 "listen_address": { 00:16:15.298 "trtype": "TCP", 00:16:15.298 "adrfam": "IPv4", 00:16:15.298 "traddr": "10.0.0.2", 00:16:15.298 "trsvcid": "4420" 00:16:15.298 }, 00:16:15.298 "peer_address": { 00:16:15.298 "trtype": "TCP", 00:16:15.298 "adrfam": "IPv4", 00:16:15.298 "traddr": "10.0.0.1", 00:16:15.298 "trsvcid": "52742" 00:16:15.298 }, 00:16:15.298 "auth": { 00:16:15.298 "state": "completed", 00:16:15.298 "digest": "sha256", 00:16:15.298 "dhgroup": "ffdhe2048" 00:16:15.298 } 00:16:15.298 } 00:16:15.298 ]' 00:16:15.298 14:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:15.298 14:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:15.298 14:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:15.298 14:29:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:15.298 14:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:15.298 14:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:15.298 14:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:15.298 14:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:15.558 14:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGNmMjFjMzBiZWE1N2FkYTg4NDhiMmVmNTU1OWFmNjV3h9ZG: --dhchap-ctrl-secret DHHC-1:02:OWVkNzBiZjE5NDE4NjhhNTYyMGVhZDU5ZWRmZDkyZDNlNDM0NDUyMWJmOTA2YjYzsHc8qg==: 00:16:15.558 14:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MGNmMjFjMzBiZWE1N2FkYTg4NDhiMmVmNTU1OWFmNjV3h9ZG: --dhchap-ctrl-secret DHHC-1:02:OWVkNzBiZjE5NDE4NjhhNTYyMGVhZDU5ZWRmZDkyZDNlNDM0NDUyMWJmOTA2YjYzsHc8qg==: 00:16:16.498 14:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.498 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:16.499 14:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:16.499 14:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:16.499 14:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.499 14:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.499 14:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:16.499 14:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:16.499 14:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:16.499 14:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:16:16.499 14:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:16.499 14:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:16.499 14:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:16.499 14:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:16.499 14:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:16.499 14:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:16.499 14:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.499 14:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:16:16.499 14:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.499 14:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:16.499 14:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:16.499 14:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:16.759 00:16:16.759 14:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:16.759 14:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:16.759 14:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:17.021 14:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.021 14:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:17.021 14:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.021 14:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.021 14:29:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.021 14:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:17.021 { 00:16:17.021 "cntlid": 13, 00:16:17.021 "qid": 0, 00:16:17.021 "state": "enabled", 00:16:17.021 "thread": "nvmf_tgt_poll_group_000", 00:16:17.021 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:17.021 "listen_address": { 00:16:17.021 "trtype": "TCP", 00:16:17.021 "adrfam": "IPv4", 00:16:17.021 "traddr": "10.0.0.2", 00:16:17.021 "trsvcid": "4420" 00:16:17.021 }, 00:16:17.021 "peer_address": { 00:16:17.021 "trtype": "TCP", 00:16:17.021 "adrfam": "IPv4", 00:16:17.021 "traddr": "10.0.0.1", 00:16:17.021 "trsvcid": "52760" 00:16:17.021 }, 00:16:17.021 "auth": { 00:16:17.021 "state": "completed", 00:16:17.021 "digest": "sha256", 00:16:17.021 "dhgroup": "ffdhe2048" 00:16:17.021 } 00:16:17.021 } 00:16:17.021 ]' 00:16:17.022 14:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:17.022 14:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:17.022 14:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:17.022 14:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:17.022 14:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:17.022 14:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:17.022 14:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:17.022 14:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.282 14:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzZmNTdkY2ViZWE4OGVmYWIwMzAzNWJjNDRiMzVmZDljNTQ2NDBkMDBkZmQ2ZmFhX3APDw==: --dhchap-ctrl-secret DHHC-1:01:MjA0YzA3NWNjMGFlYzA1MzAyNjJjNzc3ODFiYzMyNTCUFf2M: 00:16:17.282 14:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:YzZmNTdkY2ViZWE4OGVmYWIwMzAzNWJjNDRiMzVmZDljNTQ2NDBkMDBkZmQ2ZmFhX3APDw==: --dhchap-ctrl-secret DHHC-1:01:MjA0YzA3NWNjMGFlYzA1MzAyNjJjNzc3ODFiYzMyNTCUFf2M: 00:16:18.225 14:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:18.225 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:18.225 14:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:18.225 14:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.225 14:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.225 14:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.225 14:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:18.225 14:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:18.225 14:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:18.225 14:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:16:18.225 14:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:18.225 14:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:18.225 14:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:18.225 14:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:18.225 14:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:18.225 14:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:16:18.225 14:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.225 14:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.225 14:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.225 14:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:18.225 14:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:18.225 14:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:18.486 00:16:18.486 14:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:18.486 14:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:18.486 14:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.746 14:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.746 14:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:18.746 14:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.746 14:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.746 14:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.746 14:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:18.746 { 00:16:18.746 "cntlid": 15, 00:16:18.746 "qid": 0, 00:16:18.746 "state": "enabled", 00:16:18.746 "thread": "nvmf_tgt_poll_group_000", 00:16:18.746 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:18.746 "listen_address": { 00:16:18.746 "trtype": "TCP", 00:16:18.746 "adrfam": "IPv4", 00:16:18.746 "traddr": "10.0.0.2", 00:16:18.746 "trsvcid": "4420" 00:16:18.746 }, 00:16:18.746 "peer_address": { 00:16:18.746 "trtype": "TCP", 00:16:18.746 "adrfam": "IPv4", 00:16:18.746 "traddr": "10.0.0.1", 
00:16:18.746 "trsvcid": "52780" 00:16:18.746 }, 00:16:18.746 "auth": { 00:16:18.746 "state": "completed", 00:16:18.746 "digest": "sha256", 00:16:18.746 "dhgroup": "ffdhe2048" 00:16:18.746 } 00:16:18.746 } 00:16:18.746 ]' 00:16:18.746 14:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:18.746 14:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:18.746 14:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:18.746 14:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:18.746 14:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:18.746 14:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:18.746 14:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:18.746 14:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:19.007 14:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGM1OWNiNWE5YTRkMjE1MzBiZjdjZjdiZDUwN2UwZDI4ZDk0ODViNjc5OGM3M2Y4MGIwMzRkZDJiNDQwNTIyOSElVmY=: 00:16:19.007 14:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MGM1OWNiNWE5YTRkMjE1MzBiZjdjZjdiZDUwN2UwZDI4ZDk0ODViNjc5OGM3M2Y4MGIwMzRkZDJiNDQwNTIyOSElVmY=: 00:16:19.947 14:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:19.947 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:19.947 14:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:19.947 14:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.947 14:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.947 14:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.947 14:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:19.947 14:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:19.947 14:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:19.947 14:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:19.947 14:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:16:19.947 14:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:19.947 14:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:19.947 14:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:19.947 14:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:19.947 14:30:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:19.947 14:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:19.947 14:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.947 14:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.947 14:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.947 14:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:19.947 14:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:19.947 14:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:20.208 00:16:20.208 14:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:20.208 14:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:20.208 14:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.469 14:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.469 14:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:20.469 14:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.469 14:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.469 14:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.469 14:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:20.470 { 00:16:20.470 "cntlid": 17, 00:16:20.470 "qid": 0, 00:16:20.470 "state": "enabled", 00:16:20.470 "thread": "nvmf_tgt_poll_group_000", 00:16:20.470 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:20.470 "listen_address": { 00:16:20.470 "trtype": "TCP", 00:16:20.470 "adrfam": "IPv4", 00:16:20.470 "traddr": "10.0.0.2", 00:16:20.470 "trsvcid": "4420" 00:16:20.470 }, 00:16:20.470 "peer_address": { 00:16:20.470 "trtype": "TCP", 00:16:20.470 "adrfam": "IPv4", 00:16:20.470 "traddr": "10.0.0.1", 00:16:20.470 "trsvcid": "50518" 00:16:20.470 }, 00:16:20.470 "auth": { 00:16:20.470 "state": "completed", 00:16:20.470 "digest": "sha256", 00:16:20.470 "dhgroup": "ffdhe3072" 00:16:20.470 } 00:16:20.470 } 00:16:20.470 ]' 00:16:20.470 14:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:20.470 14:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:20.470 14:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:20.470 14:30:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:20.470 14:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:20.470 14:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:20.470 14:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:20.470 14:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.730 14:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2Y5MjAxOWM2N2EwN2ZlZTA0OTg2MjgyM2VhZjQ2MWU2OWY3ZGE0OTJhMzRlMzRh/iBJUQ==: --dhchap-ctrl-secret DHHC-1:03:MzkyYWIyZDM0ODNlMWRhNjQxYzViNmEzOTNmMTVjNjQ3MTdhYzkxYWFmYWVmOGE5MGFiNzNmYWFiN2NkOWZjZr3RYkI=: 00:16:20.730 14:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:Y2Y5MjAxOWM2N2EwN2ZlZTA0OTg2MjgyM2VhZjQ2MWU2OWY3ZGE0OTJhMzRlMzRh/iBJUQ==: --dhchap-ctrl-secret DHHC-1:03:MzkyYWIyZDM0ODNlMWRhNjQxYzViNmEzOTNmMTVjNjQ3MTdhYzkxYWFmYWVmOGE5MGFiNzNmYWFiN2NkOWZjZr3RYkI=: 00:16:21.670 14:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.670 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.670 14:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:21.670 14:30:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.670 14:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.670 14:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.670 14:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:21.670 14:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:21.670 14:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:21.670 14:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:16:21.670 14:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:21.670 14:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:21.670 14:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:21.670 14:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:21.670 14:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:21.670 14:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.670 14:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.670 14:30:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.670 14:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.670 14:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.670 14:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.670 14:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.930 00:16:21.930 14:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:21.930 14:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:21.931 14:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:22.191 14:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.191 14:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:22.191 14:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.191 14:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:22.191 14:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.191 14:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:22.191 { 00:16:22.191 "cntlid": 19, 00:16:22.191 "qid": 0, 00:16:22.191 "state": "enabled", 00:16:22.191 "thread": "nvmf_tgt_poll_group_000", 00:16:22.191 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:22.191 "listen_address": { 00:16:22.191 "trtype": "TCP", 00:16:22.191 "adrfam": "IPv4", 00:16:22.191 "traddr": "10.0.0.2", 00:16:22.191 "trsvcid": "4420" 00:16:22.191 }, 00:16:22.191 "peer_address": { 00:16:22.191 "trtype": "TCP", 00:16:22.191 "adrfam": "IPv4", 00:16:22.191 "traddr": "10.0.0.1", 00:16:22.191 "trsvcid": "50538" 00:16:22.191 }, 00:16:22.191 "auth": { 00:16:22.191 "state": "completed", 00:16:22.191 "digest": "sha256", 00:16:22.191 "dhgroup": "ffdhe3072" 00:16:22.191 } 00:16:22.191 } 00:16:22.191 ]' 00:16:22.191 14:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:22.191 14:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:22.191 14:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:22.191 14:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:22.191 14:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:22.191 14:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:22.191 14:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:22.191 14:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.451 14:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGNmMjFjMzBiZWE1N2FkYTg4NDhiMmVmNTU1OWFmNjV3h9ZG: --dhchap-ctrl-secret DHHC-1:02:OWVkNzBiZjE5NDE4NjhhNTYyMGVhZDU5ZWRmZDkyZDNlNDM0NDUyMWJmOTA2YjYzsHc8qg==: 00:16:22.451 14:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MGNmMjFjMzBiZWE1N2FkYTg4NDhiMmVmNTU1OWFmNjV3h9ZG: --dhchap-ctrl-secret DHHC-1:02:OWVkNzBiZjE5NDE4NjhhNTYyMGVhZDU5ZWRmZDkyZDNlNDM0NDUyMWJmOTA2YjYzsHc8qg==: 00:16:23.393 14:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:23.393 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:23.393 14:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:23.393 14:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.393 14:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.393 14:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.393 14:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:23.393 14:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:23.393 14:30:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:23.393 14:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:16:23.393 14:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:23.393 14:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:23.393 14:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:23.393 14:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:23.393 14:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:23.393 14:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.393 14:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.393 14:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.393 14:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.393 14:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.393 14:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.393 14:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.653 00:16:23.653 14:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:23.653 14:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:23.653 14:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.913 14:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.913 14:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.914 14:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.914 14:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.914 14:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.914 14:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:23.914 { 00:16:23.914 "cntlid": 21, 00:16:23.914 "qid": 0, 00:16:23.914 "state": "enabled", 00:16:23.914 "thread": "nvmf_tgt_poll_group_000", 00:16:23.914 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:23.914 "listen_address": { 00:16:23.914 "trtype": "TCP", 00:16:23.914 "adrfam": "IPv4", 00:16:23.914 "traddr": "10.0.0.2", 00:16:23.914 
"trsvcid": "4420" 00:16:23.914 }, 00:16:23.914 "peer_address": { 00:16:23.914 "trtype": "TCP", 00:16:23.914 "adrfam": "IPv4", 00:16:23.914 "traddr": "10.0.0.1", 00:16:23.914 "trsvcid": "50564" 00:16:23.914 }, 00:16:23.914 "auth": { 00:16:23.914 "state": "completed", 00:16:23.914 "digest": "sha256", 00:16:23.914 "dhgroup": "ffdhe3072" 00:16:23.914 } 00:16:23.914 } 00:16:23.914 ]' 00:16:23.914 14:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:23.914 14:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:23.914 14:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:23.914 14:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:23.914 14:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:23.914 14:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.914 14:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.914 14:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.173 14:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzZmNTdkY2ViZWE4OGVmYWIwMzAzNWJjNDRiMzVmZDljNTQ2NDBkMDBkZmQ2ZmFhX3APDw==: --dhchap-ctrl-secret DHHC-1:01:MjA0YzA3NWNjMGFlYzA1MzAyNjJjNzc3ODFiYzMyNTCUFf2M: 00:16:24.173 14:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 
00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:YzZmNTdkY2ViZWE4OGVmYWIwMzAzNWJjNDRiMzVmZDljNTQ2NDBkMDBkZmQ2ZmFhX3APDw==: --dhchap-ctrl-secret DHHC-1:01:MjA0YzA3NWNjMGFlYzA1MzAyNjJjNzc3ODFiYzMyNTCUFf2M: 00:16:25.112 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:25.112 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:25.112 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:25.112 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.112 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.112 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.112 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:25.112 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:25.112 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:25.112 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:16:25.112 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:25.112 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:25.112 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:25.112 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:25.112 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.112 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:16:25.112 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.112 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.112 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.112 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:25.112 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:25.112 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:25.372 00:16:25.372 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:25.372 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:25.372 14:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.633 14:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.633 14:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.633 14:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.633 14:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.633 14:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.633 14:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:25.633 { 00:16:25.633 "cntlid": 23, 00:16:25.633 "qid": 0, 00:16:25.633 "state": "enabled", 00:16:25.633 "thread": "nvmf_tgt_poll_group_000", 00:16:25.633 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:25.633 "listen_address": { 00:16:25.633 "trtype": "TCP", 00:16:25.633 "adrfam": "IPv4", 00:16:25.633 "traddr": "10.0.0.2", 00:16:25.633 "trsvcid": "4420" 00:16:25.633 }, 00:16:25.633 "peer_address": { 00:16:25.633 "trtype": "TCP", 00:16:25.633 "adrfam": "IPv4", 00:16:25.633 "traddr": "10.0.0.1", 00:16:25.633 "trsvcid": "50596" 00:16:25.633 }, 00:16:25.633 "auth": { 00:16:25.633 "state": "completed", 00:16:25.633 "digest": "sha256", 00:16:25.633 "dhgroup": "ffdhe3072" 00:16:25.633 } 00:16:25.633 } 00:16:25.633 ]' 00:16:25.633 14:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:25.633 14:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:25.633 14:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:25.633 14:30:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:25.633 14:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:25.633 14:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.633 14:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.633 14:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.894 14:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGM1OWNiNWE5YTRkMjE1MzBiZjdjZjdiZDUwN2UwZDI4ZDk0ODViNjc5OGM3M2Y4MGIwMzRkZDJiNDQwNTIyOSElVmY=: 00:16:25.894 14:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MGM1OWNiNWE5YTRkMjE1MzBiZjdjZjdiZDUwN2UwZDI4ZDk0ODViNjc5OGM3M2Y4MGIwMzRkZDJiNDQwNTIyOSElVmY=: 00:16:26.834 14:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.834 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.834 14:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:26.834 14:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.834 14:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:16:26.834 14:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.834 14:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:26.834 14:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:26.834 14:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:26.834 14:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:26.834 14:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:16:26.834 14:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:26.834 14:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:26.834 14:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:26.834 14:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:26.834 14:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.834 14:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.834 14:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.834 14:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:16:26.834 14:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.834 14:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.834 14:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.834 14:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:27.094 00:16:27.094 14:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:27.094 14:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:27.094 14:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:27.355 14:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.355 14:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:27.355 14:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.355 14:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.355 14:30:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.355 14:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:27.355 { 00:16:27.355 "cntlid": 25, 00:16:27.355 "qid": 0, 00:16:27.355 "state": "enabled", 00:16:27.355 "thread": "nvmf_tgt_poll_group_000", 00:16:27.355 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:27.355 "listen_address": { 00:16:27.355 "trtype": "TCP", 00:16:27.355 "adrfam": "IPv4", 00:16:27.355 "traddr": "10.0.0.2", 00:16:27.355 "trsvcid": "4420" 00:16:27.355 }, 00:16:27.355 "peer_address": { 00:16:27.355 "trtype": "TCP", 00:16:27.355 "adrfam": "IPv4", 00:16:27.355 "traddr": "10.0.0.1", 00:16:27.355 "trsvcid": "50622" 00:16:27.355 }, 00:16:27.355 "auth": { 00:16:27.355 "state": "completed", 00:16:27.355 "digest": "sha256", 00:16:27.355 "dhgroup": "ffdhe4096" 00:16:27.355 } 00:16:27.355 } 00:16:27.355 ]' 00:16:27.355 14:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:27.355 14:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:27.355 14:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:27.355 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:27.355 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:27.355 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:27.355 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:27.355 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.616 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2Y5MjAxOWM2N2EwN2ZlZTA0OTg2MjgyM2VhZjQ2MWU2OWY3ZGE0OTJhMzRlMzRh/iBJUQ==: --dhchap-ctrl-secret DHHC-1:03:MzkyYWIyZDM0ODNlMWRhNjQxYzViNmEzOTNmMTVjNjQ3MTdhYzkxYWFmYWVmOGE5MGFiNzNmYWFiN2NkOWZjZr3RYkI=: 00:16:27.616 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:Y2Y5MjAxOWM2N2EwN2ZlZTA0OTg2MjgyM2VhZjQ2MWU2OWY3ZGE0OTJhMzRlMzRh/iBJUQ==: --dhchap-ctrl-secret DHHC-1:03:MzkyYWIyZDM0ODNlMWRhNjQxYzViNmEzOTNmMTVjNjQ3MTdhYzkxYWFmYWVmOGE5MGFiNzNmYWFiN2NkOWZjZr3RYkI=: 00:16:28.556 14:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:28.556 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:28.556 14:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:28.556 14:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.556 14:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.556 14:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.556 14:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:28.556 14:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:28.556 14:30:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:28.556 14:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:16:28.556 14:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:28.556 14:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:28.556 14:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:28.556 14:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:28.556 14:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:28.556 14:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.556 14:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.556 14:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.556 14:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.556 14:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.556 14:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.556 14:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.817 00:16:28.817 14:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:28.817 14:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.817 14:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:29.078 14:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.078 14:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:29.078 14:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.078 14:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.078 14:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.078 14:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:29.078 { 00:16:29.078 "cntlid": 27, 00:16:29.078 "qid": 0, 00:16:29.078 "state": "enabled", 00:16:29.078 "thread": "nvmf_tgt_poll_group_000", 00:16:29.078 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:29.078 "listen_address": { 00:16:29.078 "trtype": "TCP", 00:16:29.078 "adrfam": "IPv4", 00:16:29.078 "traddr": "10.0.0.2", 00:16:29.078 
"trsvcid": "4420" 00:16:29.078 }, 00:16:29.078 "peer_address": { 00:16:29.078 "trtype": "TCP", 00:16:29.078 "adrfam": "IPv4", 00:16:29.078 "traddr": "10.0.0.1", 00:16:29.078 "trsvcid": "50644" 00:16:29.078 }, 00:16:29.078 "auth": { 00:16:29.078 "state": "completed", 00:16:29.078 "digest": "sha256", 00:16:29.078 "dhgroup": "ffdhe4096" 00:16:29.078 } 00:16:29.078 } 00:16:29.078 ]' 00:16:29.078 14:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:29.078 14:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:29.078 14:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:29.078 14:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:29.078 14:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:29.078 14:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:29.078 14:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:29.078 14:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:29.339 14:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGNmMjFjMzBiZWE1N2FkYTg4NDhiMmVmNTU1OWFmNjV3h9ZG: --dhchap-ctrl-secret DHHC-1:02:OWVkNzBiZjE5NDE4NjhhNTYyMGVhZDU5ZWRmZDkyZDNlNDM0NDUyMWJmOTA2YjYzsHc8qg==: 00:16:29.339 14:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 
00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MGNmMjFjMzBiZWE1N2FkYTg4NDhiMmVmNTU1OWFmNjV3h9ZG: --dhchap-ctrl-secret DHHC-1:02:OWVkNzBiZjE5NDE4NjhhNTYyMGVhZDU5ZWRmZDkyZDNlNDM0NDUyMWJmOTA2YjYzsHc8qg==: 00:16:30.279 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:30.279 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:30.279 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:30.279 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.279 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.279 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.279 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:30.279 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:30.279 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:30.279 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:16:30.279 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:30.279 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:30.279 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:30.279 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:30.279 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:30.279 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:30.279 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.279 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.279 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.279 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:30.279 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:30.279 14:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:30.540 00:16:30.540 14:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:30.540 14:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:16:30.540 14:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.800 14:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.800 14:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.800 14:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.800 14:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.800 14:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.800 14:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:30.800 { 00:16:30.800 "cntlid": 29, 00:16:30.800 "qid": 0, 00:16:30.800 "state": "enabled", 00:16:30.800 "thread": "nvmf_tgt_poll_group_000", 00:16:30.800 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:30.800 "listen_address": { 00:16:30.800 "trtype": "TCP", 00:16:30.800 "adrfam": "IPv4", 00:16:30.800 "traddr": "10.0.0.2", 00:16:30.800 "trsvcid": "4420" 00:16:30.800 }, 00:16:30.800 "peer_address": { 00:16:30.800 "trtype": "TCP", 00:16:30.800 "adrfam": "IPv4", 00:16:30.800 "traddr": "10.0.0.1", 00:16:30.800 "trsvcid": "39738" 00:16:30.800 }, 00:16:30.800 "auth": { 00:16:30.800 "state": "completed", 00:16:30.800 "digest": "sha256", 00:16:30.800 "dhgroup": "ffdhe4096" 00:16:30.800 } 00:16:30.800 } 00:16:30.800 ]' 00:16:30.800 14:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:30.800 14:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:30.800 14:30:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:30.800 14:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:30.800 14:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:31.061 14:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.061 14:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:31.061 14:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:31.061 14:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzZmNTdkY2ViZWE4OGVmYWIwMzAzNWJjNDRiMzVmZDljNTQ2NDBkMDBkZmQ2ZmFhX3APDw==: --dhchap-ctrl-secret DHHC-1:01:MjA0YzA3NWNjMGFlYzA1MzAyNjJjNzc3ODFiYzMyNTCUFf2M: 00:16:31.061 14:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:YzZmNTdkY2ViZWE4OGVmYWIwMzAzNWJjNDRiMzVmZDljNTQ2NDBkMDBkZmQ2ZmFhX3APDw==: --dhchap-ctrl-secret DHHC-1:01:MjA0YzA3NWNjMGFlYzA1MzAyNjJjNzc3ODFiYzMyNTCUFf2M: 00:16:32.001 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.001 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.001 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:32.001 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.001 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.001 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.001 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:32.001 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:32.001 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:32.001 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:16:32.001 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:32.001 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:32.001 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:32.001 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:32.001 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:32.002 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:16:32.002 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.002 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.002 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.002 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:32.002 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:32.002 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:32.262 00:16:32.262 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:32.262 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:32.262 14:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:32.522 14:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.522 14:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:32.522 14:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.522 14:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:32.522 14:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.522 14:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:32.522 { 00:16:32.522 "cntlid": 31, 00:16:32.522 "qid": 0, 00:16:32.522 "state": "enabled", 00:16:32.522 "thread": "nvmf_tgt_poll_group_000", 00:16:32.522 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:32.522 "listen_address": { 00:16:32.522 "trtype": "TCP", 00:16:32.522 "adrfam": "IPv4", 00:16:32.522 "traddr": "10.0.0.2", 00:16:32.522 "trsvcid": "4420" 00:16:32.522 }, 00:16:32.522 "peer_address": { 00:16:32.522 "trtype": "TCP", 00:16:32.522 "adrfam": "IPv4", 00:16:32.522 "traddr": "10.0.0.1", 00:16:32.522 "trsvcid": "39770" 00:16:32.522 }, 00:16:32.522 "auth": { 00:16:32.522 "state": "completed", 00:16:32.522 "digest": "sha256", 00:16:32.522 "dhgroup": "ffdhe4096" 00:16:32.522 } 00:16:32.522 } 00:16:32.522 ]' 00:16:32.522 14:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:32.522 14:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:32.522 14:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:32.522 14:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:32.522 14:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:32.873 14:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.873 14:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.873 14:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.873 14:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGM1OWNiNWE5YTRkMjE1MzBiZjdjZjdiZDUwN2UwZDI4ZDk0ODViNjc5OGM3M2Y4MGIwMzRkZDJiNDQwNTIyOSElVmY=: 00:16:32.873 14:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MGM1OWNiNWE5YTRkMjE1MzBiZjdjZjdiZDUwN2UwZDI4ZDk0ODViNjc5OGM3M2Y4MGIwMzRkZDJiNDQwNTIyOSElVmY=: 00:16:33.512 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:33.512 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:33.512 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:33.512 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.512 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.512 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.512 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:33.512 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:33.512 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:33.512 14:30:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:33.772 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:16:33.772 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:33.772 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:33.772 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:33.772 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:33.772 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:33.772 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.772 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.772 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.772 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.772 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.772 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.772 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.342 00:16:34.342 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:34.342 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:34.342 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.343 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.343 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.343 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.343 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.343 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.343 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:34.343 { 00:16:34.343 "cntlid": 33, 00:16:34.343 "qid": 0, 00:16:34.343 "state": "enabled", 00:16:34.343 "thread": "nvmf_tgt_poll_group_000", 00:16:34.343 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:34.343 "listen_address": { 00:16:34.343 "trtype": "TCP", 00:16:34.343 "adrfam": "IPv4", 00:16:34.343 "traddr": "10.0.0.2", 00:16:34.343 
"trsvcid": "4420" 00:16:34.343 }, 00:16:34.343 "peer_address": { 00:16:34.343 "trtype": "TCP", 00:16:34.343 "adrfam": "IPv4", 00:16:34.343 "traddr": "10.0.0.1", 00:16:34.343 "trsvcid": "39806" 00:16:34.343 }, 00:16:34.343 "auth": { 00:16:34.343 "state": "completed", 00:16:34.343 "digest": "sha256", 00:16:34.343 "dhgroup": "ffdhe6144" 00:16:34.343 } 00:16:34.343 } 00:16:34.343 ]' 00:16:34.343 14:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:34.343 14:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:34.343 14:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:34.343 14:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:34.343 14:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:34.602 14:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.602 14:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.602 14:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.602 14:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2Y5MjAxOWM2N2EwN2ZlZTA0OTg2MjgyM2VhZjQ2MWU2OWY3ZGE0OTJhMzRlMzRh/iBJUQ==: --dhchap-ctrl-secret DHHC-1:03:MzkyYWIyZDM0ODNlMWRhNjQxYzViNmEzOTNmMTVjNjQ3MTdhYzkxYWFmYWVmOGE5MGFiNzNmYWFiN2NkOWZjZr3RYkI=: 00:16:34.602 14:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:Y2Y5MjAxOWM2N2EwN2ZlZTA0OTg2MjgyM2VhZjQ2MWU2OWY3ZGE0OTJhMzRlMzRh/iBJUQ==: --dhchap-ctrl-secret DHHC-1:03:MzkyYWIyZDM0ODNlMWRhNjQxYzViNmEzOTNmMTVjNjQ3MTdhYzkxYWFmYWVmOGE5MGFiNzNmYWFiN2NkOWZjZr3RYkI=: 00:16:35.541 14:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.541 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.541 14:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:35.541 14:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.541 14:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.541 14:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.541 14:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:35.541 14:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:35.541 14:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:35.541 14:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:16:35.541 14:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:35.541 14:30:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:35.541 14:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:35.541 14:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:35.541 14:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.541 14:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:35.541 14:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.541 14:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.541 14:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.541 14:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:35.541 14:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:35.541 14:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.111 00:16:36.111 14:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:36.111 14:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.111 14:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:36.111 14:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.111 14:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.111 14:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.111 14:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.111 14:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.111 14:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:36.111 { 00:16:36.111 "cntlid": 35, 00:16:36.111 "qid": 0, 00:16:36.111 "state": "enabled", 00:16:36.111 "thread": "nvmf_tgt_poll_group_000", 00:16:36.111 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:36.111 "listen_address": { 00:16:36.111 "trtype": "TCP", 00:16:36.111 "adrfam": "IPv4", 00:16:36.111 "traddr": "10.0.0.2", 00:16:36.111 "trsvcid": "4420" 00:16:36.111 }, 00:16:36.111 "peer_address": { 00:16:36.111 "trtype": "TCP", 00:16:36.111 "adrfam": "IPv4", 00:16:36.111 "traddr": "10.0.0.1", 00:16:36.111 "trsvcid": "39840" 00:16:36.111 }, 00:16:36.111 "auth": { 00:16:36.111 "state": "completed", 00:16:36.111 "digest": "sha256", 00:16:36.111 "dhgroup": "ffdhe6144" 00:16:36.111 } 00:16:36.111 } 00:16:36.111 ]' 00:16:36.111 14:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:36.111 14:30:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:36.111 14:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:36.371 14:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:36.371 14:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:36.371 14:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.371 14:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.371 14:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.371 14:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGNmMjFjMzBiZWE1N2FkYTg4NDhiMmVmNTU1OWFmNjV3h9ZG: --dhchap-ctrl-secret DHHC-1:02:OWVkNzBiZjE5NDE4NjhhNTYyMGVhZDU5ZWRmZDkyZDNlNDM0NDUyMWJmOTA2YjYzsHc8qg==: 00:16:36.371 14:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MGNmMjFjMzBiZWE1N2FkYTg4NDhiMmVmNTU1OWFmNjV3h9ZG: --dhchap-ctrl-secret DHHC-1:02:OWVkNzBiZjE5NDE4NjhhNTYyMGVhZDU5ZWRmZDkyZDNlNDM0NDUyMWJmOTA2YjYzsHc8qg==: 00:16:37.311 14:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.311 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.311 14:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:37.311 14:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.311 14:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.311 14:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.311 14:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:37.311 14:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:37.311 14:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:37.311 14:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:16:37.571 14:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:37.571 14:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:37.571 14:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:37.571 14:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:37.571 14:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.571 14:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:16:37.571 14:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.571 14:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.571 14:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.571 14:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:37.571 14:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:37.571 14:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:37.832 00:16:37.832 14:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:37.832 14:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:37.832 14:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.092 14:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.092 14:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.092 14:30:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.092 14:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.092 14:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.092 14:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:38.092 { 00:16:38.092 "cntlid": 37, 00:16:38.092 "qid": 0, 00:16:38.092 "state": "enabled", 00:16:38.092 "thread": "nvmf_tgt_poll_group_000", 00:16:38.092 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:38.092 "listen_address": { 00:16:38.092 "trtype": "TCP", 00:16:38.092 "adrfam": "IPv4", 00:16:38.092 "traddr": "10.0.0.2", 00:16:38.092 "trsvcid": "4420" 00:16:38.092 }, 00:16:38.092 "peer_address": { 00:16:38.092 "trtype": "TCP", 00:16:38.092 "adrfam": "IPv4", 00:16:38.092 "traddr": "10.0.0.1", 00:16:38.092 "trsvcid": "39866" 00:16:38.092 }, 00:16:38.092 "auth": { 00:16:38.092 "state": "completed", 00:16:38.092 "digest": "sha256", 00:16:38.092 "dhgroup": "ffdhe6144" 00:16:38.092 } 00:16:38.092 } 00:16:38.092 ]' 00:16:38.092 14:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:38.092 14:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:38.092 14:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:38.092 14:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:38.092 14:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:38.092 14:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.092 14:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.092 14:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.352 14:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzZmNTdkY2ViZWE4OGVmYWIwMzAzNWJjNDRiMzVmZDljNTQ2NDBkMDBkZmQ2ZmFhX3APDw==: --dhchap-ctrl-secret DHHC-1:01:MjA0YzA3NWNjMGFlYzA1MzAyNjJjNzc3ODFiYzMyNTCUFf2M: 00:16:38.352 14:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:YzZmNTdkY2ViZWE4OGVmYWIwMzAzNWJjNDRiMzVmZDljNTQ2NDBkMDBkZmQ2ZmFhX3APDw==: --dhchap-ctrl-secret DHHC-1:01:MjA0YzA3NWNjMGFlYzA1MzAyNjJjNzc3ODFiYzMyNTCUFf2M: 00:16:39.292 14:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.292 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.292 14:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:39.292 14:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.292 14:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.292 14:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.292 14:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:39.292 14:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:39.292 14:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:39.292 14:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:16:39.292 14:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:39.292 14:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:39.292 14:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:39.292 14:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:39.292 14:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.292 14:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:16:39.292 14:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.292 14:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.292 14:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.292 14:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:39.292 14:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:39.292 14:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:39.552 00:16:39.552 14:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:39.552 14:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:39.552 14:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.812 14:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.812 14:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.812 14:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.812 14:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.812 14:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.812 14:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:39.812 { 00:16:39.812 "cntlid": 39, 00:16:39.812 "qid": 0, 00:16:39.812 "state": "enabled", 00:16:39.812 "thread": "nvmf_tgt_poll_group_000", 00:16:39.812 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:39.812 "listen_address": { 00:16:39.812 "trtype": "TCP", 00:16:39.812 "adrfam": 
"IPv4", 00:16:39.812 "traddr": "10.0.0.2", 00:16:39.812 "trsvcid": "4420" 00:16:39.812 }, 00:16:39.812 "peer_address": { 00:16:39.812 "trtype": "TCP", 00:16:39.812 "adrfam": "IPv4", 00:16:39.812 "traddr": "10.0.0.1", 00:16:39.812 "trsvcid": "39886" 00:16:39.812 }, 00:16:39.812 "auth": { 00:16:39.812 "state": "completed", 00:16:39.812 "digest": "sha256", 00:16:39.812 "dhgroup": "ffdhe6144" 00:16:39.812 } 00:16:39.812 } 00:16:39.812 ]' 00:16:39.812 14:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:39.812 14:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:39.812 14:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:40.073 14:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:40.073 14:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:40.073 14:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.073 14:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.073 14:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.073 14:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGM1OWNiNWE5YTRkMjE1MzBiZjdjZjdiZDUwN2UwZDI4ZDk0ODViNjc5OGM3M2Y4MGIwMzRkZDJiNDQwNTIyOSElVmY=: 00:16:40.073 14:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 
00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MGM1OWNiNWE5YTRkMjE1MzBiZjdjZjdiZDUwN2UwZDI4ZDk0ODViNjc5OGM3M2Y4MGIwMzRkZDJiNDQwNTIyOSElVmY=: 00:16:41.013 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.013 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.013 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:41.013 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.013 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.013 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.013 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:41.013 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:41.013 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:41.013 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:41.013 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:16:41.013 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:41.013 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:41.013 
14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:41.013 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:41.013 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.013 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.013 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.013 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.013 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.013 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.013 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.013 14:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.583 00:16:41.583 14:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:41.583 14:30:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:41.583 14:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.842 14:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.842 14:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.842 14:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.842 14:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.842 14:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.842 14:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:41.842 { 00:16:41.842 "cntlid": 41, 00:16:41.842 "qid": 0, 00:16:41.842 "state": "enabled", 00:16:41.842 "thread": "nvmf_tgt_poll_group_000", 00:16:41.842 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:41.842 "listen_address": { 00:16:41.842 "trtype": "TCP", 00:16:41.842 "adrfam": "IPv4", 00:16:41.842 "traddr": "10.0.0.2", 00:16:41.842 "trsvcid": "4420" 00:16:41.842 }, 00:16:41.842 "peer_address": { 00:16:41.842 "trtype": "TCP", 00:16:41.842 "adrfam": "IPv4", 00:16:41.842 "traddr": "10.0.0.1", 00:16:41.842 "trsvcid": "60404" 00:16:41.842 }, 00:16:41.842 "auth": { 00:16:41.842 "state": "completed", 00:16:41.842 "digest": "sha256", 00:16:41.842 "dhgroup": "ffdhe8192" 00:16:41.842 } 00:16:41.842 } 00:16:41.842 ]' 00:16:41.842 14:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:41.842 14:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:16:41.842 14:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:41.842 14:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:41.842 14:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:42.103 14:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.103 14:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.103 14:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.103 14:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2Y5MjAxOWM2N2EwN2ZlZTA0OTg2MjgyM2VhZjQ2MWU2OWY3ZGE0OTJhMzRlMzRh/iBJUQ==: --dhchap-ctrl-secret DHHC-1:03:MzkyYWIyZDM0ODNlMWRhNjQxYzViNmEzOTNmMTVjNjQ3MTdhYzkxYWFmYWVmOGE5MGFiNzNmYWFiN2NkOWZjZr3RYkI=: 00:16:42.103 14:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:Y2Y5MjAxOWM2N2EwN2ZlZTA0OTg2MjgyM2VhZjQ2MWU2OWY3ZGE0OTJhMzRlMzRh/iBJUQ==: --dhchap-ctrl-secret DHHC-1:03:MzkyYWIyZDM0ODNlMWRhNjQxYzViNmEzOTNmMTVjNjQ3MTdhYzkxYWFmYWVmOGE5MGFiNzNmYWFiN2NkOWZjZr3RYkI=: 00:16:43.043 14:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.043 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.043 14:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:43.043 14:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.043 14:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.043 14:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.043 14:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:43.043 14:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:43.043 14:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:43.043 14:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:16:43.043 14:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:43.043 14:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:43.043 14:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:43.043 14:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:43.043 14:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.043 14:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:16:43.043 14:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.043 14:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.043 14:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.043 14:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.043 14:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.043 14:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.614 00:16:43.614 14:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:43.614 14:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:43.614 14:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.875 14:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.875 14:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.875 14:30:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.875 14:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.875 14:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.875 14:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:43.875 { 00:16:43.875 "cntlid": 43, 00:16:43.875 "qid": 0, 00:16:43.875 "state": "enabled", 00:16:43.875 "thread": "nvmf_tgt_poll_group_000", 00:16:43.875 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:43.875 "listen_address": { 00:16:43.875 "trtype": "TCP", 00:16:43.875 "adrfam": "IPv4", 00:16:43.875 "traddr": "10.0.0.2", 00:16:43.875 "trsvcid": "4420" 00:16:43.875 }, 00:16:43.875 "peer_address": { 00:16:43.875 "trtype": "TCP", 00:16:43.875 "adrfam": "IPv4", 00:16:43.875 "traddr": "10.0.0.1", 00:16:43.875 "trsvcid": "60436" 00:16:43.875 }, 00:16:43.875 "auth": { 00:16:43.875 "state": "completed", 00:16:43.875 "digest": "sha256", 00:16:43.875 "dhgroup": "ffdhe8192" 00:16:43.875 } 00:16:43.875 } 00:16:43.875 ]' 00:16:43.875 14:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:43.875 14:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:43.875 14:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:43.875 14:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:43.875 14:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:43.875 14:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.875 14:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.875 14:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.135 14:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGNmMjFjMzBiZWE1N2FkYTg4NDhiMmVmNTU1OWFmNjV3h9ZG: --dhchap-ctrl-secret DHHC-1:02:OWVkNzBiZjE5NDE4NjhhNTYyMGVhZDU5ZWRmZDkyZDNlNDM0NDUyMWJmOTA2YjYzsHc8qg==: 00:16:44.135 14:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MGNmMjFjMzBiZWE1N2FkYTg4NDhiMmVmNTU1OWFmNjV3h9ZG: --dhchap-ctrl-secret DHHC-1:02:OWVkNzBiZjE5NDE4NjhhNTYyMGVhZDU5ZWRmZDkyZDNlNDM0NDUyMWJmOTA2YjYzsHc8qg==: 00:16:45.077 14:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.077 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.077 14:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:45.077 14:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.077 14:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.077 14:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.077 14:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:45.077 14:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:45.077 14:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:45.077 14:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:16:45.077 14:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:45.077 14:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:45.077 14:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:45.077 14:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:45.077 14:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.077 14:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.077 14:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.077 14:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.077 14:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.077 14:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.077 14:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.077 14:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.648 00:16:45.648 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:45.648 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:45.648 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.909 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.909 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.909 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.909 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.909 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.909 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:45.909 { 00:16:45.909 "cntlid": 45, 00:16:45.909 "qid": 0, 00:16:45.909 "state": "enabled", 00:16:45.909 "thread": "nvmf_tgt_poll_group_000", 00:16:45.909 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:45.909 
"listen_address": { 00:16:45.909 "trtype": "TCP", 00:16:45.909 "adrfam": "IPv4", 00:16:45.909 "traddr": "10.0.0.2", 00:16:45.909 "trsvcid": "4420" 00:16:45.909 }, 00:16:45.909 "peer_address": { 00:16:45.909 "trtype": "TCP", 00:16:45.909 "adrfam": "IPv4", 00:16:45.909 "traddr": "10.0.0.1", 00:16:45.909 "trsvcid": "60478" 00:16:45.909 }, 00:16:45.909 "auth": { 00:16:45.909 "state": "completed", 00:16:45.909 "digest": "sha256", 00:16:45.909 "dhgroup": "ffdhe8192" 00:16:45.909 } 00:16:45.909 } 00:16:45.909 ]' 00:16:45.909 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:45.909 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:45.909 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:45.909 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:45.909 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:45.909 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.909 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.909 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.169 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzZmNTdkY2ViZWE4OGVmYWIwMzAzNWJjNDRiMzVmZDljNTQ2NDBkMDBkZmQ2ZmFhX3APDw==: --dhchap-ctrl-secret DHHC-1:01:MjA0YzA3NWNjMGFlYzA1MzAyNjJjNzc3ODFiYzMyNTCUFf2M: 00:16:46.170 14:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:YzZmNTdkY2ViZWE4OGVmYWIwMzAzNWJjNDRiMzVmZDljNTQ2NDBkMDBkZmQ2ZmFhX3APDw==: --dhchap-ctrl-secret DHHC-1:01:MjA0YzA3NWNjMGFlYzA1MzAyNjJjNzc3ODFiYzMyNTCUFf2M: 00:16:47.109 14:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.109 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.109 14:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:47.109 14:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.109 14:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.109 14:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.110 14:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:47.110 14:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:47.110 14:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:47.110 14:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:16:47.110 14:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:47.110 14:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:16:47.110 14:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:47.110 14:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:47.110 14:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.110 14:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:16:47.110 14:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.110 14:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.110 14:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.110 14:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:47.110 14:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:47.110 14:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:47.680 00:16:47.680 14:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:47.680 14:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:16:47.680 14:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.940 14:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.940 14:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.940 14:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.940 14:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.940 14:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.940 14:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:47.940 { 00:16:47.940 "cntlid": 47, 00:16:47.940 "qid": 0, 00:16:47.940 "state": "enabled", 00:16:47.940 "thread": "nvmf_tgt_poll_group_000", 00:16:47.940 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:47.940 "listen_address": { 00:16:47.940 "trtype": "TCP", 00:16:47.940 "adrfam": "IPv4", 00:16:47.940 "traddr": "10.0.0.2", 00:16:47.940 "trsvcid": "4420" 00:16:47.940 }, 00:16:47.940 "peer_address": { 00:16:47.940 "trtype": "TCP", 00:16:47.940 "adrfam": "IPv4", 00:16:47.940 "traddr": "10.0.0.1", 00:16:47.940 "trsvcid": "60508" 00:16:47.940 }, 00:16:47.940 "auth": { 00:16:47.940 "state": "completed", 00:16:47.940 "digest": "sha256", 00:16:47.940 "dhgroup": "ffdhe8192" 00:16:47.940 } 00:16:47.940 } 00:16:47.940 ]' 00:16:47.940 14:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:47.940 14:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:47.940 14:30:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:47.940 14:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:47.940 14:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:47.940 14:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.940 14:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.940 14:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.200 14:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGM1OWNiNWE5YTRkMjE1MzBiZjdjZjdiZDUwN2UwZDI4ZDk0ODViNjc5OGM3M2Y4MGIwMzRkZDJiNDQwNTIyOSElVmY=: 00:16:48.200 14:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MGM1OWNiNWE5YTRkMjE1MzBiZjdjZjdiZDUwN2UwZDI4ZDk0ODViNjc5OGM3M2Y4MGIwMzRkZDJiNDQwNTIyOSElVmY=: 00:16:49.141 14:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.141 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.141 14:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:49.141 14:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:49.141 14:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.141 14:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.141 14:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:49.141 14:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:49.141 14:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:49.141 14:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:49.141 14:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:49.141 14:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:16:49.141 14:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:49.141 14:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:49.141 14:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:49.141 14:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:49.141 14:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.141 14:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.141 
14:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.141 14:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.141 14:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.141 14:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.141 14:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.141 14:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.401 00:16:49.401 14:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:49.401 14:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:49.401 14:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.662 14:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.662 14:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.662 14:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.662 14:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.662 14:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.662 14:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:49.662 { 00:16:49.662 "cntlid": 49, 00:16:49.662 "qid": 0, 00:16:49.662 "state": "enabled", 00:16:49.662 "thread": "nvmf_tgt_poll_group_000", 00:16:49.662 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:49.662 "listen_address": { 00:16:49.662 "trtype": "TCP", 00:16:49.662 "adrfam": "IPv4", 00:16:49.662 "traddr": "10.0.0.2", 00:16:49.662 "trsvcid": "4420" 00:16:49.662 }, 00:16:49.662 "peer_address": { 00:16:49.662 "trtype": "TCP", 00:16:49.662 "adrfam": "IPv4", 00:16:49.662 "traddr": "10.0.0.1", 00:16:49.662 "trsvcid": "60516" 00:16:49.662 }, 00:16:49.662 "auth": { 00:16:49.662 "state": "completed", 00:16:49.662 "digest": "sha384", 00:16:49.662 "dhgroup": "null" 00:16:49.662 } 00:16:49.662 } 00:16:49.662 ]' 00:16:49.662 14:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:49.662 14:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:49.662 14:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:49.662 14:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:49.662 14:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:49.662 14:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.662 14:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:16:49.662 14:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.922 14:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2Y5MjAxOWM2N2EwN2ZlZTA0OTg2MjgyM2VhZjQ2MWU2OWY3ZGE0OTJhMzRlMzRh/iBJUQ==: --dhchap-ctrl-secret DHHC-1:03:MzkyYWIyZDM0ODNlMWRhNjQxYzViNmEzOTNmMTVjNjQ3MTdhYzkxYWFmYWVmOGE5MGFiNzNmYWFiN2NkOWZjZr3RYkI=: 00:16:49.922 14:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:Y2Y5MjAxOWM2N2EwN2ZlZTA0OTg2MjgyM2VhZjQ2MWU2OWY3ZGE0OTJhMzRlMzRh/iBJUQ==: --dhchap-ctrl-secret DHHC-1:03:MzkyYWIyZDM0ODNlMWRhNjQxYzViNmEzOTNmMTVjNjQ3MTdhYzkxYWFmYWVmOGE5MGFiNzNmYWFiN2NkOWZjZr3RYkI=: 00:16:50.863 14:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.863 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.863 14:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:50.863 14:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.863 14:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.863 14:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.863 14:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:50.863 14:30:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:50.863 14:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:50.863 14:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:16:50.863 14:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:50.863 14:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:50.863 14:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:50.863 14:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:50.863 14:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.863 14:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.863 14:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.863 14:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.863 14:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.863 14:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.863 14:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.863 14:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.123 00:16:51.123 14:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:51.123 14:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:51.123 14:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.383 14:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.383 14:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.383 14:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.383 14:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.383 14:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.383 14:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:51.383 { 00:16:51.383 "cntlid": 51, 00:16:51.383 "qid": 0, 00:16:51.383 "state": "enabled", 00:16:51.383 "thread": "nvmf_tgt_poll_group_000", 00:16:51.383 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:51.383 "listen_address": { 00:16:51.383 "trtype": "TCP", 00:16:51.383 "adrfam": "IPv4", 00:16:51.383 "traddr": "10.0.0.2", 00:16:51.383 "trsvcid": "4420" 00:16:51.383 }, 00:16:51.383 "peer_address": { 00:16:51.383 "trtype": "TCP", 00:16:51.383 "adrfam": "IPv4", 00:16:51.383 "traddr": "10.0.0.1", 00:16:51.383 "trsvcid": "35896" 00:16:51.383 }, 00:16:51.383 "auth": { 00:16:51.383 "state": "completed", 00:16:51.383 "digest": "sha384", 00:16:51.383 "dhgroup": "null" 00:16:51.383 } 00:16:51.383 } 00:16:51.383 ]' 00:16:51.383 14:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:51.383 14:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:51.383 14:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:51.384 14:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:51.384 14:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:51.384 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.384 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.384 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.644 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGNmMjFjMzBiZWE1N2FkYTg4NDhiMmVmNTU1OWFmNjV3h9ZG: --dhchap-ctrl-secret DHHC-1:02:OWVkNzBiZjE5NDE4NjhhNTYyMGVhZDU5ZWRmZDkyZDNlNDM0NDUyMWJmOTA2YjYzsHc8qg==: 00:16:51.644 14:30:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MGNmMjFjMzBiZWE1N2FkYTg4NDhiMmVmNTU1OWFmNjV3h9ZG: --dhchap-ctrl-secret DHHC-1:02:OWVkNzBiZjE5NDE4NjhhNTYyMGVhZDU5ZWRmZDkyZDNlNDM0NDUyMWJmOTA2YjYzsHc8qg==: 00:16:52.214 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.475 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.475 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:52.475 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.475 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.475 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.475 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:52.475 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:52.475 14:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:52.475 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:16:52.475 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:16:52.475 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:52.475 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:52.475 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:52.475 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.475 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.475 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.475 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.475 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.476 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.476 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.476 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.736 00:16:52.736 14:30:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:52.736 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:52.736 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.997 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.997 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.997 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.997 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.997 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.997 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:52.997 { 00:16:52.997 "cntlid": 53, 00:16:52.997 "qid": 0, 00:16:52.997 "state": "enabled", 00:16:52.997 "thread": "nvmf_tgt_poll_group_000", 00:16:52.997 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:52.997 "listen_address": { 00:16:52.997 "trtype": "TCP", 00:16:52.997 "adrfam": "IPv4", 00:16:52.997 "traddr": "10.0.0.2", 00:16:52.997 "trsvcid": "4420" 00:16:52.997 }, 00:16:52.997 "peer_address": { 00:16:52.997 "trtype": "TCP", 00:16:52.997 "adrfam": "IPv4", 00:16:52.997 "traddr": "10.0.0.1", 00:16:52.997 "trsvcid": "35916" 00:16:52.997 }, 00:16:52.997 "auth": { 00:16:52.997 "state": "completed", 00:16:52.997 "digest": "sha384", 00:16:52.997 "dhgroup": "null" 00:16:52.997 } 00:16:52.997 } 00:16:52.997 ]' 00:16:52.997 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:16:52.997 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:52.997 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:52.997 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:52.997 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:52.997 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.997 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.997 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.257 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzZmNTdkY2ViZWE4OGVmYWIwMzAzNWJjNDRiMzVmZDljNTQ2NDBkMDBkZmQ2ZmFhX3APDw==: --dhchap-ctrl-secret DHHC-1:01:MjA0YzA3NWNjMGFlYzA1MzAyNjJjNzc3ODFiYzMyNTCUFf2M: 00:16:53.257 14:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:YzZmNTdkY2ViZWE4OGVmYWIwMzAzNWJjNDRiMzVmZDljNTQ2NDBkMDBkZmQ2ZmFhX3APDw==: --dhchap-ctrl-secret DHHC-1:01:MjA0YzA3NWNjMGFlYzA1MzAyNjJjNzc3ODFiYzMyNTCUFf2M: 00:16:54.198 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.198 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.198 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:54.198 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.198 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.198 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.198 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:54.198 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:54.198 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:54.198 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:16:54.198 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:54.198 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:54.198 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:54.198 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:54.198 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.198 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:16:54.198 
14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.198 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.198 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.198 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:54.198 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:54.198 14:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:54.458 00:16:54.458 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:54.458 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:54.458 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.719 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.719 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.719 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.719 14:30:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.719 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.719 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:54.719 { 00:16:54.719 "cntlid": 55, 00:16:54.719 "qid": 0, 00:16:54.719 "state": "enabled", 00:16:54.719 "thread": "nvmf_tgt_poll_group_000", 00:16:54.719 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:54.719 "listen_address": { 00:16:54.719 "trtype": "TCP", 00:16:54.719 "adrfam": "IPv4", 00:16:54.719 "traddr": "10.0.0.2", 00:16:54.719 "trsvcid": "4420" 00:16:54.719 }, 00:16:54.719 "peer_address": { 00:16:54.719 "trtype": "TCP", 00:16:54.719 "adrfam": "IPv4", 00:16:54.719 "traddr": "10.0.0.1", 00:16:54.719 "trsvcid": "35954" 00:16:54.719 }, 00:16:54.719 "auth": { 00:16:54.719 "state": "completed", 00:16:54.719 "digest": "sha384", 00:16:54.719 "dhgroup": "null" 00:16:54.719 } 00:16:54.719 } 00:16:54.719 ]' 00:16:54.719 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:54.719 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:54.719 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:54.719 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:54.719 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:54.719 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.719 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.719 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.979 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGM1OWNiNWE5YTRkMjE1MzBiZjdjZjdiZDUwN2UwZDI4ZDk0ODViNjc5OGM3M2Y4MGIwMzRkZDJiNDQwNTIyOSElVmY=: 00:16:54.979 14:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MGM1OWNiNWE5YTRkMjE1MzBiZjdjZjdiZDUwN2UwZDI4ZDk0ODViNjc5OGM3M2Y4MGIwMzRkZDJiNDQwNTIyOSElVmY=: 00:16:55.918 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.919 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.919 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:55.919 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.919 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.919 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.919 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:55.919 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:55.919 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:55.919 14:30:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:55.919 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:16:55.919 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:55.919 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:55.919 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:55.919 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:55.919 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.919 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.919 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.919 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.919 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.919 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.919 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.919 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.179 00:16:56.179 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:56.179 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:56.179 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.440 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.440 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.440 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.440 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.440 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.440 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:56.440 { 00:16:56.440 "cntlid": 57, 00:16:56.440 "qid": 0, 00:16:56.440 "state": "enabled", 00:16:56.440 "thread": "nvmf_tgt_poll_group_000", 00:16:56.440 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:56.440 "listen_address": { 00:16:56.440 "trtype": "TCP", 00:16:56.440 "adrfam": "IPv4", 00:16:56.440 "traddr": "10.0.0.2", 00:16:56.440 
"trsvcid": "4420" 00:16:56.440 }, 00:16:56.440 "peer_address": { 00:16:56.440 "trtype": "TCP", 00:16:56.440 "adrfam": "IPv4", 00:16:56.440 "traddr": "10.0.0.1", 00:16:56.440 "trsvcid": "35996" 00:16:56.440 }, 00:16:56.440 "auth": { 00:16:56.440 "state": "completed", 00:16:56.440 "digest": "sha384", 00:16:56.440 "dhgroup": "ffdhe2048" 00:16:56.440 } 00:16:56.440 } 00:16:56.440 ]' 00:16:56.440 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:56.440 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:56.440 14:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:56.440 14:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:56.440 14:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:56.440 14:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.440 14:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.440 14:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.700 14:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2Y5MjAxOWM2N2EwN2ZlZTA0OTg2MjgyM2VhZjQ2MWU2OWY3ZGE0OTJhMzRlMzRh/iBJUQ==: --dhchap-ctrl-secret DHHC-1:03:MzkyYWIyZDM0ODNlMWRhNjQxYzViNmEzOTNmMTVjNjQ3MTdhYzkxYWFmYWVmOGE5MGFiNzNmYWFiN2NkOWZjZr3RYkI=: 00:16:56.700 14:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:Y2Y5MjAxOWM2N2EwN2ZlZTA0OTg2MjgyM2VhZjQ2MWU2OWY3ZGE0OTJhMzRlMzRh/iBJUQ==: --dhchap-ctrl-secret DHHC-1:03:MzkyYWIyZDM0ODNlMWRhNjQxYzViNmEzOTNmMTVjNjQ3MTdhYzkxYWFmYWVmOGE5MGFiNzNmYWFiN2NkOWZjZr3RYkI=: 00:16:57.270 14:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:57.530 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.530 14:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:57.530 14:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.530 14:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.530 14:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.530 14:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:57.530 14:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:57.530 14:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:57.530 14:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:16:57.530 14:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:57.530 14:30:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:57.530 14:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:57.530 14:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:57.530 14:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.530 14:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.530 14:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.530 14:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.531 14:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.531 14:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.531 14:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.531 14:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.791 00:16:57.791 14:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:57.791 14:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:57.791 14:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.052 14:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.052 14:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.052 14:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.052 14:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.052 14:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.052 14:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:58.052 { 00:16:58.052 "cntlid": 59, 00:16:58.052 "qid": 0, 00:16:58.052 "state": "enabled", 00:16:58.052 "thread": "nvmf_tgt_poll_group_000", 00:16:58.052 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:58.052 "listen_address": { 00:16:58.052 "trtype": "TCP", 00:16:58.052 "adrfam": "IPv4", 00:16:58.052 "traddr": "10.0.0.2", 00:16:58.052 "trsvcid": "4420" 00:16:58.052 }, 00:16:58.052 "peer_address": { 00:16:58.052 "trtype": "TCP", 00:16:58.052 "adrfam": "IPv4", 00:16:58.052 "traddr": "10.0.0.1", 00:16:58.052 "trsvcid": "36010" 00:16:58.052 }, 00:16:58.052 "auth": { 00:16:58.052 "state": "completed", 00:16:58.052 "digest": "sha384", 00:16:58.052 "dhgroup": "ffdhe2048" 00:16:58.052 } 00:16:58.052 } 00:16:58.052 ]' 00:16:58.052 14:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:58.052 14:30:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:58.052 14:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:58.052 14:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:58.052 14:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:58.313 14:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.313 14:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.313 14:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.313 14:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGNmMjFjMzBiZWE1N2FkYTg4NDhiMmVmNTU1OWFmNjV3h9ZG: --dhchap-ctrl-secret DHHC-1:02:OWVkNzBiZjE5NDE4NjhhNTYyMGVhZDU5ZWRmZDkyZDNlNDM0NDUyMWJmOTA2YjYzsHc8qg==: 00:16:58.313 14:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MGNmMjFjMzBiZWE1N2FkYTg4NDhiMmVmNTU1OWFmNjV3h9ZG: --dhchap-ctrl-secret DHHC-1:02:OWVkNzBiZjE5NDE4NjhhNTYyMGVhZDU5ZWRmZDkyZDNlNDM0NDUyMWJmOTA2YjYzsHc8qg==: 00:16:59.255 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.255 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.255 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:59.255 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.255 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.255 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.255 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:59.255 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:59.255 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:59.255 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:16:59.255 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:59.255 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:59.255 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:59.255 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:59.255 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.255 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:16:59.255 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.255 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.255 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.255 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.255 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.255 14:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.516 00:16:59.516 14:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:59.516 14:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:59.516 14:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.776 14:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.776 14:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.776 14:30:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.776 14:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.776 14:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.776 14:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:59.776 { 00:16:59.776 "cntlid": 61, 00:16:59.776 "qid": 0, 00:16:59.776 "state": "enabled", 00:16:59.776 "thread": "nvmf_tgt_poll_group_000", 00:16:59.776 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:16:59.776 "listen_address": { 00:16:59.776 "trtype": "TCP", 00:16:59.776 "adrfam": "IPv4", 00:16:59.776 "traddr": "10.0.0.2", 00:16:59.776 "trsvcid": "4420" 00:16:59.776 }, 00:16:59.776 "peer_address": { 00:16:59.776 "trtype": "TCP", 00:16:59.776 "adrfam": "IPv4", 00:16:59.776 "traddr": "10.0.0.1", 00:16:59.776 "trsvcid": "36048" 00:16:59.776 }, 00:16:59.776 "auth": { 00:16:59.776 "state": "completed", 00:16:59.776 "digest": "sha384", 00:16:59.776 "dhgroup": "ffdhe2048" 00:16:59.776 } 00:16:59.776 } 00:16:59.776 ]' 00:16:59.776 14:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:59.776 14:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:59.776 14:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:59.776 14:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:59.776 14:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:59.776 14:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.776 14:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.776 14:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.038 14:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzZmNTdkY2ViZWE4OGVmYWIwMzAzNWJjNDRiMzVmZDljNTQ2NDBkMDBkZmQ2ZmFhX3APDw==: --dhchap-ctrl-secret DHHC-1:01:MjA0YzA3NWNjMGFlYzA1MzAyNjJjNzc3ODFiYzMyNTCUFf2M: 00:17:00.038 14:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:YzZmNTdkY2ViZWE4OGVmYWIwMzAzNWJjNDRiMzVmZDljNTQ2NDBkMDBkZmQ2ZmFhX3APDw==: --dhchap-ctrl-secret DHHC-1:01:MjA0YzA3NWNjMGFlYzA1MzAyNjJjNzc3ODFiYzMyNTCUFf2M: 00:17:00.981 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.981 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.981 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:00.981 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.981 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.981 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.981 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:00.982 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:00.982 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:00.982 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:17:00.982 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:00.982 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:00.982 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:00.982 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:00.982 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.982 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:00.982 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.982 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.982 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.982 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:00.982 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:00.982 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:01.243 00:17:01.243 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:01.243 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:01.243 14:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.504 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.504 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:01.504 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.504 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.504 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.504 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:01.504 { 00:17:01.504 "cntlid": 63, 00:17:01.504 "qid": 0, 00:17:01.504 "state": "enabled", 00:17:01.504 "thread": "nvmf_tgt_poll_group_000", 00:17:01.504 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:01.504 "listen_address": { 00:17:01.504 "trtype": "TCP", 00:17:01.504 "adrfam": 
"IPv4", 00:17:01.504 "traddr": "10.0.0.2", 00:17:01.504 "trsvcid": "4420" 00:17:01.504 }, 00:17:01.504 "peer_address": { 00:17:01.504 "trtype": "TCP", 00:17:01.504 "adrfam": "IPv4", 00:17:01.504 "traddr": "10.0.0.1", 00:17:01.504 "trsvcid": "37244" 00:17:01.504 }, 00:17:01.504 "auth": { 00:17:01.504 "state": "completed", 00:17:01.504 "digest": "sha384", 00:17:01.504 "dhgroup": "ffdhe2048" 00:17:01.504 } 00:17:01.504 } 00:17:01.504 ]' 00:17:01.504 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:01.504 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:01.504 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:01.504 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:01.504 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:01.504 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.504 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.504 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.765 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGM1OWNiNWE5YTRkMjE1MzBiZjdjZjdiZDUwN2UwZDI4ZDk0ODViNjc5OGM3M2Y4MGIwMzRkZDJiNDQwNTIyOSElVmY=: 00:17:01.765 14:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 
00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MGM1OWNiNWE5YTRkMjE1MzBiZjdjZjdiZDUwN2UwZDI4ZDk0ODViNjc5OGM3M2Y4MGIwMzRkZDJiNDQwNTIyOSElVmY=: 00:17:02.705 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.705 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.705 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:02.705 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.705 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.705 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.705 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:02.705 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:02.705 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:02.705 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:02.705 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:17:02.705 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:02.705 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:02.705 
14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:02.705 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:02.705 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.705 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.705 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.705 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.705 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.705 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.705 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.705 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.965 00:17:02.965 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:02.965 14:30:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:02.965 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.225 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.225 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:03.225 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.225 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.225 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.225 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:03.225 { 00:17:03.225 "cntlid": 65, 00:17:03.225 "qid": 0, 00:17:03.225 "state": "enabled", 00:17:03.225 "thread": "nvmf_tgt_poll_group_000", 00:17:03.225 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:03.225 "listen_address": { 00:17:03.225 "trtype": "TCP", 00:17:03.225 "adrfam": "IPv4", 00:17:03.225 "traddr": "10.0.0.2", 00:17:03.225 "trsvcid": "4420" 00:17:03.225 }, 00:17:03.225 "peer_address": { 00:17:03.225 "trtype": "TCP", 00:17:03.225 "adrfam": "IPv4", 00:17:03.225 "traddr": "10.0.0.1", 00:17:03.225 "trsvcid": "37252" 00:17:03.225 }, 00:17:03.225 "auth": { 00:17:03.225 "state": "completed", 00:17:03.225 "digest": "sha384", 00:17:03.225 "dhgroup": "ffdhe3072" 00:17:03.225 } 00:17:03.225 } 00:17:03.225 ]' 00:17:03.225 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:03.225 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:17:03.225 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:03.225 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:03.225 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:03.225 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:03.225 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.225 14:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.485 14:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2Y5MjAxOWM2N2EwN2ZlZTA0OTg2MjgyM2VhZjQ2MWU2OWY3ZGE0OTJhMzRlMzRh/iBJUQ==: --dhchap-ctrl-secret DHHC-1:03:MzkyYWIyZDM0ODNlMWRhNjQxYzViNmEzOTNmMTVjNjQ3MTdhYzkxYWFmYWVmOGE5MGFiNzNmYWFiN2NkOWZjZr3RYkI=: 00:17:03.485 14:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:Y2Y5MjAxOWM2N2EwN2ZlZTA0OTg2MjgyM2VhZjQ2MWU2OWY3ZGE0OTJhMzRlMzRh/iBJUQ==: --dhchap-ctrl-secret DHHC-1:03:MzkyYWIyZDM0ODNlMWRhNjQxYzViNmEzOTNmMTVjNjQ3MTdhYzkxYWFmYWVmOGE5MGFiNzNmYWFiN2NkOWZjZr3RYkI=: 00:17:04.426 14:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:04.426 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:04.426 14:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:04.426 14:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.426 14:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.426 14:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.426 14:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:04.426 14:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:04.426 14:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:04.426 14:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:17:04.426 14:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:04.426 14:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:04.426 14:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:04.426 14:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:04.426 14:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:04.426 14:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:17:04.426 14:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.426 14:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.426 14:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.426 14:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.426 14:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.426 14:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.686 00:17:04.686 14:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:04.686 14:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:04.686 14:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.947 14:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.947 14:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.947 14:30:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.947 14:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.947 14:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.947 14:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:04.947 { 00:17:04.947 "cntlid": 67, 00:17:04.947 "qid": 0, 00:17:04.947 "state": "enabled", 00:17:04.947 "thread": "nvmf_tgt_poll_group_000", 00:17:04.947 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:04.947 "listen_address": { 00:17:04.947 "trtype": "TCP", 00:17:04.947 "adrfam": "IPv4", 00:17:04.947 "traddr": "10.0.0.2", 00:17:04.947 "trsvcid": "4420" 00:17:04.947 }, 00:17:04.947 "peer_address": { 00:17:04.947 "trtype": "TCP", 00:17:04.947 "adrfam": "IPv4", 00:17:04.947 "traddr": "10.0.0.1", 00:17:04.947 "trsvcid": "37282" 00:17:04.947 }, 00:17:04.947 "auth": { 00:17:04.947 "state": "completed", 00:17:04.947 "digest": "sha384", 00:17:04.947 "dhgroup": "ffdhe3072" 00:17:04.947 } 00:17:04.947 } 00:17:04.947 ]' 00:17:04.947 14:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:04.947 14:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:04.947 14:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:04.947 14:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:04.947 14:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:05.207 14:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.207 14:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.207 14:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.207 14:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGNmMjFjMzBiZWE1N2FkYTg4NDhiMmVmNTU1OWFmNjV3h9ZG: --dhchap-ctrl-secret DHHC-1:02:OWVkNzBiZjE5NDE4NjhhNTYyMGVhZDU5ZWRmZDkyZDNlNDM0NDUyMWJmOTA2YjYzsHc8qg==: 00:17:05.207 14:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MGNmMjFjMzBiZWE1N2FkYTg4NDhiMmVmNTU1OWFmNjV3h9ZG: --dhchap-ctrl-secret DHHC-1:02:OWVkNzBiZjE5NDE4NjhhNTYyMGVhZDU5ZWRmZDkyZDNlNDM0NDUyMWJmOTA2YjYzsHc8qg==: 00:17:06.147 14:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:06.147 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:06.147 14:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:06.147 14:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.147 14:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.147 14:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.147 14:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:06.147 14:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:06.147 14:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:06.147 14:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:17:06.147 14:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:06.147 14:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:06.147 14:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:06.147 14:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:06.147 14:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.147 14:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:06.147 14:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.147 14:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.147 14:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.147 14:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:06.147 14:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:06.147 14:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:06.407 00:17:06.407 14:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:06.407 14:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:06.407 14:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.667 14:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.667 14:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.667 14:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.667 14:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.667 14:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.667 14:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:06.667 { 00:17:06.667 "cntlid": 69, 00:17:06.667 "qid": 0, 00:17:06.667 "state": "enabled", 00:17:06.667 "thread": "nvmf_tgt_poll_group_000", 00:17:06.667 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:06.667 
"listen_address": { 00:17:06.667 "trtype": "TCP", 00:17:06.667 "adrfam": "IPv4", 00:17:06.667 "traddr": "10.0.0.2", 00:17:06.667 "trsvcid": "4420" 00:17:06.667 }, 00:17:06.667 "peer_address": { 00:17:06.667 "trtype": "TCP", 00:17:06.667 "adrfam": "IPv4", 00:17:06.667 "traddr": "10.0.0.1", 00:17:06.667 "trsvcid": "37320" 00:17:06.667 }, 00:17:06.667 "auth": { 00:17:06.667 "state": "completed", 00:17:06.667 "digest": "sha384", 00:17:06.667 "dhgroup": "ffdhe3072" 00:17:06.667 } 00:17:06.667 } 00:17:06.667 ]' 00:17:06.667 14:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:06.667 14:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:06.667 14:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:06.667 14:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:06.667 14:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:06.927 14:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.927 14:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.927 14:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.927 14:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzZmNTdkY2ViZWE4OGVmYWIwMzAzNWJjNDRiMzVmZDljNTQ2NDBkMDBkZmQ2ZmFhX3APDw==: --dhchap-ctrl-secret DHHC-1:01:MjA0YzA3NWNjMGFlYzA1MzAyNjJjNzc3ODFiYzMyNTCUFf2M: 00:17:06.928 14:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:YzZmNTdkY2ViZWE4OGVmYWIwMzAzNWJjNDRiMzVmZDljNTQ2NDBkMDBkZmQ2ZmFhX3APDw==: --dhchap-ctrl-secret DHHC-1:01:MjA0YzA3NWNjMGFlYzA1MzAyNjJjNzc3ODFiYzMyNTCUFf2M: 00:17:07.868 14:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:07.868 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:07.868 14:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:07.868 14:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.868 14:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.868 14:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.868 14:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:07.868 14:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:07.868 14:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:07.868 14:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:17:07.868 14:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:07.868 14:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:17:07.868 14:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:07.868 14:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:07.868 14:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.868 14:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:07.868 14:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.868 14:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.868 14:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.868 14:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:07.868 14:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:07.868 14:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:08.127 00:17:08.127 14:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:08.127 14:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:17:08.127 14:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.387 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.388 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:08.388 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.388 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.388 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.388 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:08.388 { 00:17:08.388 "cntlid": 71, 00:17:08.388 "qid": 0, 00:17:08.388 "state": "enabled", 00:17:08.388 "thread": "nvmf_tgt_poll_group_000", 00:17:08.388 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:08.388 "listen_address": { 00:17:08.388 "trtype": "TCP", 00:17:08.388 "adrfam": "IPv4", 00:17:08.388 "traddr": "10.0.0.2", 00:17:08.388 "trsvcid": "4420" 00:17:08.388 }, 00:17:08.388 "peer_address": { 00:17:08.388 "trtype": "TCP", 00:17:08.388 "adrfam": "IPv4", 00:17:08.388 "traddr": "10.0.0.1", 00:17:08.388 "trsvcid": "37340" 00:17:08.388 }, 00:17:08.388 "auth": { 00:17:08.388 "state": "completed", 00:17:08.388 "digest": "sha384", 00:17:08.388 "dhgroup": "ffdhe3072" 00:17:08.388 } 00:17:08.388 } 00:17:08.388 ]' 00:17:08.388 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:08.388 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:08.388 14:30:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:08.388 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:08.388 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:08.649 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:08.649 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:08.649 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.649 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGM1OWNiNWE5YTRkMjE1MzBiZjdjZjdiZDUwN2UwZDI4ZDk0ODViNjc5OGM3M2Y4MGIwMzRkZDJiNDQwNTIyOSElVmY=: 00:17:08.649 14:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MGM1OWNiNWE5YTRkMjE1MzBiZjdjZjdiZDUwN2UwZDI4ZDk0ODViNjc5OGM3M2Y4MGIwMzRkZDJiNDQwNTIyOSElVmY=: 00:17:09.592 14:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.592 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.592 14:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:09.592 14:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:17:09.592 14:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.592 14:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.592 14:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:09.592 14:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:09.592 14:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:09.592 14:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:09.592 14:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:17:09.592 14:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:09.592 14:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:09.592 14:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:09.592 14:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:09.592 14:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.592 14:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:09.592 14:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:09.592 14:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.592 14:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.592 14:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:09.592 14:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:09.592 14:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:09.853 00:17:09.853 14:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:09.853 14:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.853 14:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:10.114 14:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.114 14:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.114 14:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.114 14:30:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.114 14:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.114 14:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:10.114 { 00:17:10.114 "cntlid": 73, 00:17:10.114 "qid": 0, 00:17:10.114 "state": "enabled", 00:17:10.114 "thread": "nvmf_tgt_poll_group_000", 00:17:10.114 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:10.114 "listen_address": { 00:17:10.114 "trtype": "TCP", 00:17:10.114 "adrfam": "IPv4", 00:17:10.114 "traddr": "10.0.0.2", 00:17:10.114 "trsvcid": "4420" 00:17:10.114 }, 00:17:10.114 "peer_address": { 00:17:10.114 "trtype": "TCP", 00:17:10.114 "adrfam": "IPv4", 00:17:10.114 "traddr": "10.0.0.1", 00:17:10.114 "trsvcid": "37376" 00:17:10.114 }, 00:17:10.114 "auth": { 00:17:10.114 "state": "completed", 00:17:10.114 "digest": "sha384", 00:17:10.114 "dhgroup": "ffdhe4096" 00:17:10.114 } 00:17:10.114 } 00:17:10.114 ]' 00:17:10.114 14:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:10.114 14:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:10.114 14:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:10.114 14:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:10.114 14:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:10.375 14:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:10.375 14:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.375 14:30:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.375 14:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2Y5MjAxOWM2N2EwN2ZlZTA0OTg2MjgyM2VhZjQ2MWU2OWY3ZGE0OTJhMzRlMzRh/iBJUQ==: --dhchap-ctrl-secret DHHC-1:03:MzkyYWIyZDM0ODNlMWRhNjQxYzViNmEzOTNmMTVjNjQ3MTdhYzkxYWFmYWVmOGE5MGFiNzNmYWFiN2NkOWZjZr3RYkI=: 00:17:10.375 14:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:Y2Y5MjAxOWM2N2EwN2ZlZTA0OTg2MjgyM2VhZjQ2MWU2OWY3ZGE0OTJhMzRlMzRh/iBJUQ==: --dhchap-ctrl-secret DHHC-1:03:MzkyYWIyZDM0ODNlMWRhNjQxYzViNmEzOTNmMTVjNjQ3MTdhYzkxYWFmYWVmOGE5MGFiNzNmYWFiN2NkOWZjZr3RYkI=: 00:17:11.317 14:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.317 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.317 14:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:11.317 14:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.317 14:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.317 14:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.317 14:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:11.317 14:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:11.317 14:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:11.578 14:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:17:11.578 14:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:11.578 14:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:11.578 14:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:11.578 14:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:11.578 14:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:11.578 14:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.578 14:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.578 14:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.578 14:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.578 14:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.578 14:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.578 14:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.839 00:17:11.839 14:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:11.839 14:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:11.839 14:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.840 14:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.840 14:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:11.840 14:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.840 14:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.840 14:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.840 14:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:11.840 { 00:17:11.840 "cntlid": 75, 00:17:11.840 "qid": 0, 00:17:11.840 "state": "enabled", 00:17:11.840 "thread": "nvmf_tgt_poll_group_000", 00:17:11.840 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:11.840 
"listen_address": { 00:17:11.840 "trtype": "TCP", 00:17:11.840 "adrfam": "IPv4", 00:17:11.840 "traddr": "10.0.0.2", 00:17:11.840 "trsvcid": "4420" 00:17:11.840 }, 00:17:11.840 "peer_address": { 00:17:11.840 "trtype": "TCP", 00:17:11.840 "adrfam": "IPv4", 00:17:11.840 "traddr": "10.0.0.1", 00:17:11.840 "trsvcid": "53352" 00:17:11.840 }, 00:17:11.840 "auth": { 00:17:11.840 "state": "completed", 00:17:11.840 "digest": "sha384", 00:17:11.840 "dhgroup": "ffdhe4096" 00:17:11.840 } 00:17:11.840 } 00:17:11.840 ]' 00:17:11.840 14:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:12.101 14:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:12.101 14:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:12.101 14:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:12.101 14:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:12.101 14:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:12.101 14:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.101 14:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.362 14:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGNmMjFjMzBiZWE1N2FkYTg4NDhiMmVmNTU1OWFmNjV3h9ZG: --dhchap-ctrl-secret DHHC-1:02:OWVkNzBiZjE5NDE4NjhhNTYyMGVhZDU5ZWRmZDkyZDNlNDM0NDUyMWJmOTA2YjYzsHc8qg==: 00:17:12.362 14:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MGNmMjFjMzBiZWE1N2FkYTg4NDhiMmVmNTU1OWFmNjV3h9ZG: --dhchap-ctrl-secret DHHC-1:02:OWVkNzBiZjE5NDE4NjhhNTYyMGVhZDU5ZWRmZDkyZDNlNDM0NDUyMWJmOTA2YjYzsHc8qg==: 00:17:12.933 14:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.933 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.933 14:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:12.933 14:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.933 14:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.933 14:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.933 14:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:12.933 14:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:12.933 14:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:13.193 14:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:17:13.193 14:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:13.193 14:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:17:13.193 14:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:13.193 14:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:13.193 14:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.193 14:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.193 14:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.193 14:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.193 14:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.193 14:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.193 14:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.193 14:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.454 00:17:13.455 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:17:13.455 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.455 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:13.715 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.715 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.715 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.715 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.715 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.715 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:13.715 { 00:17:13.715 "cntlid": 77, 00:17:13.715 "qid": 0, 00:17:13.715 "state": "enabled", 00:17:13.715 "thread": "nvmf_tgt_poll_group_000", 00:17:13.715 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:13.715 "listen_address": { 00:17:13.715 "trtype": "TCP", 00:17:13.715 "adrfam": "IPv4", 00:17:13.715 "traddr": "10.0.0.2", 00:17:13.715 "trsvcid": "4420" 00:17:13.715 }, 00:17:13.715 "peer_address": { 00:17:13.715 "trtype": "TCP", 00:17:13.715 "adrfam": "IPv4", 00:17:13.715 "traddr": "10.0.0.1", 00:17:13.715 "trsvcid": "53384" 00:17:13.715 }, 00:17:13.715 "auth": { 00:17:13.715 "state": "completed", 00:17:13.715 "digest": "sha384", 00:17:13.715 "dhgroup": "ffdhe4096" 00:17:13.715 } 00:17:13.715 } 00:17:13.715 ]' 00:17:13.715 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:13.715 14:30:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:13.715 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:13.715 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:13.715 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:13.715 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.715 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.715 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.976 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzZmNTdkY2ViZWE4OGVmYWIwMzAzNWJjNDRiMzVmZDljNTQ2NDBkMDBkZmQ2ZmFhX3APDw==: --dhchap-ctrl-secret DHHC-1:01:MjA0YzA3NWNjMGFlYzA1MzAyNjJjNzc3ODFiYzMyNTCUFf2M: 00:17:13.976 14:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:YzZmNTdkY2ViZWE4OGVmYWIwMzAzNWJjNDRiMzVmZDljNTQ2NDBkMDBkZmQ2ZmFhX3APDw==: --dhchap-ctrl-secret DHHC-1:01:MjA0YzA3NWNjMGFlYzA1MzAyNjJjNzc3ODFiYzMyNTCUFf2M: 00:17:14.917 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:14.917 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:14.917 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:14.917 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.917 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.917 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.917 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:14.917 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:14.917 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:14.917 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:17:14.917 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:14.917 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:14.917 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:14.917 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:14.917 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:14.917 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:14.917 14:30:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.917 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.917 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.917 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:14.917 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:14.917 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:15.177 00:17:15.177 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:15.177 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.177 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:15.438 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.438 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:15.438 14:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.438 14:30:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.438 14:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.438 14:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:15.438 { 00:17:15.438 "cntlid": 79, 00:17:15.438 "qid": 0, 00:17:15.438 "state": "enabled", 00:17:15.438 "thread": "nvmf_tgt_poll_group_000", 00:17:15.438 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:15.438 "listen_address": { 00:17:15.438 "trtype": "TCP", 00:17:15.438 "adrfam": "IPv4", 00:17:15.438 "traddr": "10.0.0.2", 00:17:15.438 "trsvcid": "4420" 00:17:15.438 }, 00:17:15.438 "peer_address": { 00:17:15.438 "trtype": "TCP", 00:17:15.438 "adrfam": "IPv4", 00:17:15.438 "traddr": "10.0.0.1", 00:17:15.438 "trsvcid": "53410" 00:17:15.438 }, 00:17:15.438 "auth": { 00:17:15.438 "state": "completed", 00:17:15.438 "digest": "sha384", 00:17:15.438 "dhgroup": "ffdhe4096" 00:17:15.438 } 00:17:15.438 } 00:17:15.438 ]' 00:17:15.438 14:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:15.438 14:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:15.438 14:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:15.438 14:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:15.438 14:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:15.438 14:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:15.438 14:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.438 14:30:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.698 14:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGM1OWNiNWE5YTRkMjE1MzBiZjdjZjdiZDUwN2UwZDI4ZDk0ODViNjc5OGM3M2Y4MGIwMzRkZDJiNDQwNTIyOSElVmY=: 00:17:15.698 14:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MGM1OWNiNWE5YTRkMjE1MzBiZjdjZjdiZDUwN2UwZDI4ZDk0ODViNjc5OGM3M2Y4MGIwMzRkZDJiNDQwNTIyOSElVmY=: 00:17:16.638 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:16.638 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:16.638 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:16.638 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.638 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.638 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.638 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:16.638 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:16.638 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:17:16.638 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:16.639 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:17:16.639 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:16.639 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:16.639 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:16.639 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:16.639 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:16.639 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.639 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.639 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.639 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.639 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.639 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.639 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.898 00:17:16.898 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:16.898 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:16.898 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.158 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.158 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:17.158 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.158 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.158 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.158 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:17.158 { 00:17:17.158 "cntlid": 81, 00:17:17.158 "qid": 0, 00:17:17.158 "state": "enabled", 00:17:17.158 "thread": "nvmf_tgt_poll_group_000", 00:17:17.158 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:17.158 "listen_address": { 
00:17:17.158 "trtype": "TCP", 00:17:17.158 "adrfam": "IPv4", 00:17:17.158 "traddr": "10.0.0.2", 00:17:17.158 "trsvcid": "4420" 00:17:17.158 }, 00:17:17.158 "peer_address": { 00:17:17.158 "trtype": "TCP", 00:17:17.158 "adrfam": "IPv4", 00:17:17.158 "traddr": "10.0.0.1", 00:17:17.158 "trsvcid": "53428" 00:17:17.158 }, 00:17:17.158 "auth": { 00:17:17.159 "state": "completed", 00:17:17.159 "digest": "sha384", 00:17:17.159 "dhgroup": "ffdhe6144" 00:17:17.159 } 00:17:17.159 } 00:17:17.159 ]' 00:17:17.159 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:17.159 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:17.159 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:17.417 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:17.417 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:17.417 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:17.417 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:17.418 14:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.418 14:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2Y5MjAxOWM2N2EwN2ZlZTA0OTg2MjgyM2VhZjQ2MWU2OWY3ZGE0OTJhMzRlMzRh/iBJUQ==: --dhchap-ctrl-secret DHHC-1:03:MzkyYWIyZDM0ODNlMWRhNjQxYzViNmEzOTNmMTVjNjQ3MTdhYzkxYWFmYWVmOGE5MGFiNzNmYWFiN2NkOWZjZr3RYkI=: 00:17:17.418 14:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:Y2Y5MjAxOWM2N2EwN2ZlZTA0OTg2MjgyM2VhZjQ2MWU2OWY3ZGE0OTJhMzRlMzRh/iBJUQ==: --dhchap-ctrl-secret DHHC-1:03:MzkyYWIyZDM0ODNlMWRhNjQxYzViNmEzOTNmMTVjNjQ3MTdhYzkxYWFmYWVmOGE5MGFiNzNmYWFiN2NkOWZjZr3RYkI=: 00:17:18.357 14:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:18.357 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:18.357 14:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:18.357 14:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.357 14:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.357 14:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.357 14:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:18.357 14:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:18.357 14:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:18.357 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:17:18.357 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:17:18.357 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:18.357 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:18.357 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:18.357 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:18.357 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:18.357 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.357 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.357 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.357 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:18.357 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:18.357 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:18.928 00:17:18.928 14:30:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:18.928 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:18.928 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.928 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.928 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.928 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.928 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.928 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.928 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:18.928 { 00:17:18.928 "cntlid": 83, 00:17:18.928 "qid": 0, 00:17:18.928 "state": "enabled", 00:17:18.928 "thread": "nvmf_tgt_poll_group_000", 00:17:18.928 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:18.928 "listen_address": { 00:17:18.928 "trtype": "TCP", 00:17:18.928 "adrfam": "IPv4", 00:17:18.928 "traddr": "10.0.0.2", 00:17:18.928 "trsvcid": "4420" 00:17:18.928 }, 00:17:18.928 "peer_address": { 00:17:18.928 "trtype": "TCP", 00:17:18.928 "adrfam": "IPv4", 00:17:18.928 "traddr": "10.0.0.1", 00:17:18.928 "trsvcid": "53456" 00:17:18.928 }, 00:17:18.928 "auth": { 00:17:18.928 "state": "completed", 00:17:18.928 "digest": "sha384", 00:17:18.928 "dhgroup": "ffdhe6144" 00:17:18.928 } 00:17:18.928 } 00:17:18.928 ]' 00:17:18.928 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:17:18.928 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:18.928 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:19.188 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:19.188 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:19.188 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.188 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.188 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.189 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGNmMjFjMzBiZWE1N2FkYTg4NDhiMmVmNTU1OWFmNjV3h9ZG: --dhchap-ctrl-secret DHHC-1:02:OWVkNzBiZjE5NDE4NjhhNTYyMGVhZDU5ZWRmZDkyZDNlNDM0NDUyMWJmOTA2YjYzsHc8qg==: 00:17:19.189 14:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MGNmMjFjMzBiZWE1N2FkYTg4NDhiMmVmNTU1OWFmNjV3h9ZG: --dhchap-ctrl-secret DHHC-1:02:OWVkNzBiZjE5NDE4NjhhNTYyMGVhZDU5ZWRmZDkyZDNlNDM0NDUyMWJmOTA2YjYzsHc8qg==: 00:17:20.130 14:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:20.130 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:20.130 14:31:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:20.130 14:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.130 14:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.130 14:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.130 14:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:20.130 14:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:20.130 14:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:20.130 14:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:17:20.130 14:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:20.130 14:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:20.130 14:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:20.130 14:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:20.130 14:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.130 14:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:20.130 14:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.130 14:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.130 14:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.130 14:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:20.130 14:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:20.130 14:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:20.701 00:17:20.701 14:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:20.701 14:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.701 14:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:20.701 14:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.701 14:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.701 14:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.701 14:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.701 14:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.701 14:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:20.701 { 00:17:20.701 "cntlid": 85, 00:17:20.701 "qid": 0, 00:17:20.701 "state": "enabled", 00:17:20.701 "thread": "nvmf_tgt_poll_group_000", 00:17:20.701 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:20.701 "listen_address": { 00:17:20.701 "trtype": "TCP", 00:17:20.701 "adrfam": "IPv4", 00:17:20.701 "traddr": "10.0.0.2", 00:17:20.701 "trsvcid": "4420" 00:17:20.701 }, 00:17:20.701 "peer_address": { 00:17:20.701 "trtype": "TCP", 00:17:20.701 "adrfam": "IPv4", 00:17:20.701 "traddr": "10.0.0.1", 00:17:20.701 "trsvcid": "56076" 00:17:20.701 }, 00:17:20.701 "auth": { 00:17:20.701 "state": "completed", 00:17:20.701 "digest": "sha384", 00:17:20.701 "dhgroup": "ffdhe6144" 00:17:20.701 } 00:17:20.701 } 00:17:20.701 ]' 00:17:20.701 14:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:20.701 14:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:20.701 14:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:20.701 14:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:20.701 14:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:20.962 14:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:17:20.962 14:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.962 14:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.962 14:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzZmNTdkY2ViZWE4OGVmYWIwMzAzNWJjNDRiMzVmZDljNTQ2NDBkMDBkZmQ2ZmFhX3APDw==: --dhchap-ctrl-secret DHHC-1:01:MjA0YzA3NWNjMGFlYzA1MzAyNjJjNzc3ODFiYzMyNTCUFf2M: 00:17:20.962 14:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:YzZmNTdkY2ViZWE4OGVmYWIwMzAzNWJjNDRiMzVmZDljNTQ2NDBkMDBkZmQ2ZmFhX3APDw==: --dhchap-ctrl-secret DHHC-1:01:MjA0YzA3NWNjMGFlYzA1MzAyNjJjNzc3ODFiYzMyNTCUFf2M: 00:17:21.901 14:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.901 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.901 14:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:21.901 14:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.901 14:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.901 14:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.901 14:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:17:21.901 14:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:21.901 14:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:21.901 14:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:17:21.901 14:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:21.901 14:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:21.901 14:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:21.901 14:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:21.901 14:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:21.901 14:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:21.901 14:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.901 14:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.901 14:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.901 14:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:21.901 14:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:21.901 14:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:22.471 00:17:22.471 14:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:22.471 14:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.471 14:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:22.471 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.471 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.471 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.471 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.471 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.471 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:22.471 { 00:17:22.471 "cntlid": 87, 00:17:22.471 "qid": 0, 00:17:22.471 "state": "enabled", 00:17:22.471 "thread": "nvmf_tgt_poll_group_000", 00:17:22.471 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:22.471 "listen_address": { 00:17:22.471 "trtype": 
"TCP", 00:17:22.471 "adrfam": "IPv4", 00:17:22.471 "traddr": "10.0.0.2", 00:17:22.471 "trsvcid": "4420" 00:17:22.471 }, 00:17:22.471 "peer_address": { 00:17:22.471 "trtype": "TCP", 00:17:22.471 "adrfam": "IPv4", 00:17:22.471 "traddr": "10.0.0.1", 00:17:22.471 "trsvcid": "56110" 00:17:22.471 }, 00:17:22.471 "auth": { 00:17:22.471 "state": "completed", 00:17:22.471 "digest": "sha384", 00:17:22.471 "dhgroup": "ffdhe6144" 00:17:22.471 } 00:17:22.471 } 00:17:22.471 ]' 00:17:22.471 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:22.471 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:22.471 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:22.471 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:22.471 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:22.730 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.730 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.730 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.730 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGM1OWNiNWE5YTRkMjE1MzBiZjdjZjdiZDUwN2UwZDI4ZDk0ODViNjc5OGM3M2Y4MGIwMzRkZDJiNDQwNTIyOSElVmY=: 00:17:22.730 14:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MGM1OWNiNWE5YTRkMjE1MzBiZjdjZjdiZDUwN2UwZDI4ZDk0ODViNjc5OGM3M2Y4MGIwMzRkZDJiNDQwNTIyOSElVmY=: 00:17:23.669 14:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.669 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.669 14:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:23.669 14:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.669 14:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.669 14:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.669 14:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:23.669 14:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:23.669 14:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:23.669 14:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:23.669 14:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:17:23.669 14:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:23.669 14:31:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:23.669 14:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:23.669 14:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:23.669 14:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:23.669 14:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:23.669 14:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.669 14:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.669 14:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.669 14:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:23.669 14:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:23.669 14:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.240 00:17:24.240 14:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:24.240 14:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.240 14:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:24.501 14:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.501 14:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.501 14:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.501 14:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.501 14:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.501 14:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:24.501 { 00:17:24.501 "cntlid": 89, 00:17:24.501 "qid": 0, 00:17:24.501 "state": "enabled", 00:17:24.501 "thread": "nvmf_tgt_poll_group_000", 00:17:24.501 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:24.501 "listen_address": { 00:17:24.501 "trtype": "TCP", 00:17:24.501 "adrfam": "IPv4", 00:17:24.501 "traddr": "10.0.0.2", 00:17:24.501 "trsvcid": "4420" 00:17:24.501 }, 00:17:24.501 "peer_address": { 00:17:24.501 "trtype": "TCP", 00:17:24.501 "adrfam": "IPv4", 00:17:24.501 "traddr": "10.0.0.1", 00:17:24.501 "trsvcid": "56144" 00:17:24.501 }, 00:17:24.501 "auth": { 00:17:24.501 "state": "completed", 00:17:24.501 "digest": "sha384", 00:17:24.501 "dhgroup": "ffdhe8192" 00:17:24.501 } 00:17:24.501 } 00:17:24.501 ]' 00:17:24.501 14:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:24.501 14:31:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:24.501 14:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:24.501 14:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:24.501 14:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:24.501 14:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.501 14:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.501 14:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.761 14:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2Y5MjAxOWM2N2EwN2ZlZTA0OTg2MjgyM2VhZjQ2MWU2OWY3ZGE0OTJhMzRlMzRh/iBJUQ==: --dhchap-ctrl-secret DHHC-1:03:MzkyYWIyZDM0ODNlMWRhNjQxYzViNmEzOTNmMTVjNjQ3MTdhYzkxYWFmYWVmOGE5MGFiNzNmYWFiN2NkOWZjZr3RYkI=: 00:17:24.761 14:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:Y2Y5MjAxOWM2N2EwN2ZlZTA0OTg2MjgyM2VhZjQ2MWU2OWY3ZGE0OTJhMzRlMzRh/iBJUQ==: --dhchap-ctrl-secret DHHC-1:03:MzkyYWIyZDM0ODNlMWRhNjQxYzViNmEzOTNmMTVjNjQ3MTdhYzkxYWFmYWVmOGE5MGFiNzNmYWFiN2NkOWZjZr3RYkI=: 00:17:25.702 14:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.702 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:17:25.702 14:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:25.702 14:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.702 14:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.702 14:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.702 14:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:25.702 14:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:25.702 14:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:25.702 14:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:17:25.702 14:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:25.702 14:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:25.702 14:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:25.702 14:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:25.702 14:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.702 14:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:25.702 14:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.702 14:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.702 14:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.702 14:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:25.702 14:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:25.702 14:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.273 00:17:26.273 14:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:26.273 14:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:26.273 14:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.533 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.533 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.533 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.533 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.533 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.533 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:26.533 { 00:17:26.533 "cntlid": 91, 00:17:26.533 "qid": 0, 00:17:26.533 "state": "enabled", 00:17:26.533 "thread": "nvmf_tgt_poll_group_000", 00:17:26.533 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:26.533 "listen_address": { 00:17:26.533 "trtype": "TCP", 00:17:26.533 "adrfam": "IPv4", 00:17:26.533 "traddr": "10.0.0.2", 00:17:26.533 "trsvcid": "4420" 00:17:26.533 }, 00:17:26.533 "peer_address": { 00:17:26.533 "trtype": "TCP", 00:17:26.533 "adrfam": "IPv4", 00:17:26.533 "traddr": "10.0.0.1", 00:17:26.533 "trsvcid": "56168" 00:17:26.533 }, 00:17:26.533 "auth": { 00:17:26.533 "state": "completed", 00:17:26.533 "digest": "sha384", 00:17:26.533 "dhgroup": "ffdhe8192" 00:17:26.533 } 00:17:26.533 } 00:17:26.533 ]' 00:17:26.533 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:26.534 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:26.534 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:26.534 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:26.534 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:26.534 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:17:26.534 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.534 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.793 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGNmMjFjMzBiZWE1N2FkYTg4NDhiMmVmNTU1OWFmNjV3h9ZG: --dhchap-ctrl-secret DHHC-1:02:OWVkNzBiZjE5NDE4NjhhNTYyMGVhZDU5ZWRmZDkyZDNlNDM0NDUyMWJmOTA2YjYzsHc8qg==: 00:17:26.793 14:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MGNmMjFjMzBiZWE1N2FkYTg4NDhiMmVmNTU1OWFmNjV3h9ZG: --dhchap-ctrl-secret DHHC-1:02:OWVkNzBiZjE5NDE4NjhhNTYyMGVhZDU5ZWRmZDkyZDNlNDM0NDUyMWJmOTA2YjYzsHc8qg==: 00:17:27.734 14:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.734 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.734 14:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:27.734 14:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.734 14:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.734 14:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.734 14:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:17:27.734 14:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:27.734 14:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:27.734 14:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:17:27.734 14:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:27.734 14:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:27.734 14:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:27.734 14:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:27.734 14:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:27.734 14:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:27.734 14:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.734 14:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.734 14:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.734 14:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:27.734 14:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:27.734 14:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.305 00:17:28.305 14:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:28.305 14:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:28.305 14:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.565 14:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.565 14:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.565 14:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.565 14:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.565 14:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.565 14:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:28.565 { 00:17:28.565 "cntlid": 93, 00:17:28.565 "qid": 0, 00:17:28.565 "state": "enabled", 00:17:28.565 "thread": "nvmf_tgt_poll_group_000", 00:17:28.565 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:28.565 "listen_address": { 00:17:28.565 "trtype": "TCP", 00:17:28.565 "adrfam": "IPv4", 00:17:28.565 "traddr": "10.0.0.2", 00:17:28.565 "trsvcid": "4420" 00:17:28.565 }, 00:17:28.565 "peer_address": { 00:17:28.565 "trtype": "TCP", 00:17:28.565 "adrfam": "IPv4", 00:17:28.565 "traddr": "10.0.0.1", 00:17:28.565 "trsvcid": "56194" 00:17:28.565 }, 00:17:28.565 "auth": { 00:17:28.565 "state": "completed", 00:17:28.565 "digest": "sha384", 00:17:28.565 "dhgroup": "ffdhe8192" 00:17:28.565 } 00:17:28.565 } 00:17:28.565 ]' 00:17:28.565 14:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:28.565 14:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:28.565 14:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:28.565 14:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:28.565 14:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:28.565 14:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.565 14:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.565 14:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.825 14:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzZmNTdkY2ViZWE4OGVmYWIwMzAzNWJjNDRiMzVmZDljNTQ2NDBkMDBkZmQ2ZmFhX3APDw==: --dhchap-ctrl-secret DHHC-1:01:MjA0YzA3NWNjMGFlYzA1MzAyNjJjNzc3ODFiYzMyNTCUFf2M: 00:17:28.825 14:31:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:YzZmNTdkY2ViZWE4OGVmYWIwMzAzNWJjNDRiMzVmZDljNTQ2NDBkMDBkZmQ2ZmFhX3APDw==: --dhchap-ctrl-secret DHHC-1:01:MjA0YzA3NWNjMGFlYzA1MzAyNjJjNzc3ODFiYzMyNTCUFf2M: 00:17:29.394 14:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.655 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.655 14:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:29.655 14:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.655 14:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.655 14:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.655 14:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:29.655 14:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:29.655 14:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:29.655 14:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:17:29.655 14:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:17:29.655 14:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:29.655 14:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:29.655 14:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:29.655 14:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.655 14:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:29.655 14:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.655 14:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.655 14:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.655 14:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:29.655 14:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:29.655 14:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:30.225 00:17:30.225 14:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 
00:17:30.225 14:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:30.225 14:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.485 14:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.485 14:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.485 14:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.485 14:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.485 14:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.485 14:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:30.485 { 00:17:30.485 "cntlid": 95, 00:17:30.485 "qid": 0, 00:17:30.485 "state": "enabled", 00:17:30.485 "thread": "nvmf_tgt_poll_group_000", 00:17:30.485 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:30.485 "listen_address": { 00:17:30.485 "trtype": "TCP", 00:17:30.485 "adrfam": "IPv4", 00:17:30.485 "traddr": "10.0.0.2", 00:17:30.485 "trsvcid": "4420" 00:17:30.485 }, 00:17:30.485 "peer_address": { 00:17:30.485 "trtype": "TCP", 00:17:30.485 "adrfam": "IPv4", 00:17:30.485 "traddr": "10.0.0.1", 00:17:30.485 "trsvcid": "56232" 00:17:30.485 }, 00:17:30.485 "auth": { 00:17:30.485 "state": "completed", 00:17:30.485 "digest": "sha384", 00:17:30.485 "dhgroup": "ffdhe8192" 00:17:30.485 } 00:17:30.485 } 00:17:30.485 ]' 00:17:30.485 14:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:30.485 14:31:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:30.485 14:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:30.485 14:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:30.485 14:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:30.485 14:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.485 14:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.485 14:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.745 14:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGM1OWNiNWE5YTRkMjE1MzBiZjdjZjdiZDUwN2UwZDI4ZDk0ODViNjc5OGM3M2Y4MGIwMzRkZDJiNDQwNTIyOSElVmY=: 00:17:30.745 14:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MGM1OWNiNWE5YTRkMjE1MzBiZjdjZjdiZDUwN2UwZDI4ZDk0ODViNjc5OGM3M2Y4MGIwMzRkZDJiNDQwNTIyOSElVmY=: 00:17:31.685 14:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.685 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.685 14:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:31.685 14:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.685 14:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.685 14:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.685 14:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:31.685 14:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:31.685 14:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:31.685 14:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:31.685 14:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:31.685 14:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:17:31.685 14:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:31.685 14:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:31.685 14:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:31.685 14:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:31.685 14:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.685 14:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.685 14:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.685 14:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.685 14:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.685 14:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.685 14:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.685 14:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.945 00:17:31.945 14:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:31.945 14:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:31.945 14:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.206 14:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.206 14:31:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.206 14:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.206 14:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.206 14:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.206 14:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:32.206 { 00:17:32.206 "cntlid": 97, 00:17:32.206 "qid": 0, 00:17:32.206 "state": "enabled", 00:17:32.206 "thread": "nvmf_tgt_poll_group_000", 00:17:32.206 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:32.206 "listen_address": { 00:17:32.206 "trtype": "TCP", 00:17:32.206 "adrfam": "IPv4", 00:17:32.206 "traddr": "10.0.0.2", 00:17:32.206 "trsvcid": "4420" 00:17:32.206 }, 00:17:32.206 "peer_address": { 00:17:32.206 "trtype": "TCP", 00:17:32.206 "adrfam": "IPv4", 00:17:32.206 "traddr": "10.0.0.1", 00:17:32.206 "trsvcid": "50770" 00:17:32.206 }, 00:17:32.206 "auth": { 00:17:32.206 "state": "completed", 00:17:32.206 "digest": "sha512", 00:17:32.206 "dhgroup": "null" 00:17:32.206 } 00:17:32.206 } 00:17:32.206 ]' 00:17:32.206 14:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:32.206 14:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:32.206 14:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:32.206 14:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:32.206 14:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:32.206 14:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.206 14:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.206 14:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.466 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2Y5MjAxOWM2N2EwN2ZlZTA0OTg2MjgyM2VhZjQ2MWU2OWY3ZGE0OTJhMzRlMzRh/iBJUQ==: --dhchap-ctrl-secret DHHC-1:03:MzkyYWIyZDM0ODNlMWRhNjQxYzViNmEzOTNmMTVjNjQ3MTdhYzkxYWFmYWVmOGE5MGFiNzNmYWFiN2NkOWZjZr3RYkI=: 00:17:32.466 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:Y2Y5MjAxOWM2N2EwN2ZlZTA0OTg2MjgyM2VhZjQ2MWU2OWY3ZGE0OTJhMzRlMzRh/iBJUQ==: --dhchap-ctrl-secret DHHC-1:03:MzkyYWIyZDM0ODNlMWRhNjQxYzViNmEzOTNmMTVjNjQ3MTdhYzkxYWFmYWVmOGE5MGFiNzNmYWFiN2NkOWZjZr3RYkI=: 00:17:33.409 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.409 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.409 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:33.409 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.409 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.409 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.409 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:33.409 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:33.409 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:33.409 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:17:33.409 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:33.409 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:33.409 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:33.409 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:33.409 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:33.409 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.409 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.409 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.409 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.409 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.409 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.409 14:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.670 00:17:33.670 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:33.670 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:33.670 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.670 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.670 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:33.670 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.670 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.670 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.670 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:33.670 { 00:17:33.670 "cntlid": 99, 
00:17:33.670 "qid": 0, 00:17:33.670 "state": "enabled", 00:17:33.670 "thread": "nvmf_tgt_poll_group_000", 00:17:33.670 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:33.670 "listen_address": { 00:17:33.670 "trtype": "TCP", 00:17:33.670 "adrfam": "IPv4", 00:17:33.670 "traddr": "10.0.0.2", 00:17:33.670 "trsvcid": "4420" 00:17:33.670 }, 00:17:33.670 "peer_address": { 00:17:33.670 "trtype": "TCP", 00:17:33.670 "adrfam": "IPv4", 00:17:33.670 "traddr": "10.0.0.1", 00:17:33.670 "trsvcid": "50798" 00:17:33.670 }, 00:17:33.670 "auth": { 00:17:33.670 "state": "completed", 00:17:33.670 "digest": "sha512", 00:17:33.670 "dhgroup": "null" 00:17:33.670 } 00:17:33.670 } 00:17:33.670 ]' 00:17:33.670 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:33.930 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:33.930 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:33.930 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:33.930 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:33.930 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:33.930 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:33.930 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.191 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGNmMjFjMzBiZWE1N2FkYTg4NDhiMmVmNTU1OWFmNjV3h9ZG: --dhchap-ctrl-secret 
DHHC-1:02:OWVkNzBiZjE5NDE4NjhhNTYyMGVhZDU5ZWRmZDkyZDNlNDM0NDUyMWJmOTA2YjYzsHc8qg==: 00:17:34.191 14:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MGNmMjFjMzBiZWE1N2FkYTg4NDhiMmVmNTU1OWFmNjV3h9ZG: --dhchap-ctrl-secret DHHC-1:02:OWVkNzBiZjE5NDE4NjhhNTYyMGVhZDU5ZWRmZDkyZDNlNDM0NDUyMWJmOTA2YjYzsHc8qg==: 00:17:34.761 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:34.761 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:34.761 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:34.761 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.761 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.761 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.761 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:34.761 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:34.761 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:35.022 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 
00:17:35.022 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:35.022 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:35.022 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:35.022 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:35.022 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.022 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.022 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.022 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.022 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.022 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.022 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.022 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.282 00:17:35.282 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:35.282 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:35.282 14:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.542 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.542 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.542 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.542 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.542 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.542 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:35.542 { 00:17:35.542 "cntlid": 101, 00:17:35.542 "qid": 0, 00:17:35.542 "state": "enabled", 00:17:35.542 "thread": "nvmf_tgt_poll_group_000", 00:17:35.542 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:35.542 "listen_address": { 00:17:35.542 "trtype": "TCP", 00:17:35.542 "adrfam": "IPv4", 00:17:35.542 "traddr": "10.0.0.2", 00:17:35.542 "trsvcid": "4420" 00:17:35.542 }, 00:17:35.542 "peer_address": { 00:17:35.542 "trtype": "TCP", 00:17:35.542 "adrfam": "IPv4", 00:17:35.542 "traddr": "10.0.0.1", 00:17:35.542 "trsvcid": "50830" 00:17:35.542 }, 00:17:35.542 "auth": { 00:17:35.542 "state": "completed", 00:17:35.542 "digest": "sha512", 00:17:35.542 "dhgroup": "null" 00:17:35.542 } 00:17:35.542 } 
00:17:35.542 ]' 00:17:35.542 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:35.542 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:35.542 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:35.542 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:35.542 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:35.542 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:35.542 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:35.542 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.802 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzZmNTdkY2ViZWE4OGVmYWIwMzAzNWJjNDRiMzVmZDljNTQ2NDBkMDBkZmQ2ZmFhX3APDw==: --dhchap-ctrl-secret DHHC-1:01:MjA0YzA3NWNjMGFlYzA1MzAyNjJjNzc3ODFiYzMyNTCUFf2M: 00:17:35.802 14:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:YzZmNTdkY2ViZWE4OGVmYWIwMzAzNWJjNDRiMzVmZDljNTQ2NDBkMDBkZmQ2ZmFhX3APDw==: --dhchap-ctrl-secret DHHC-1:01:MjA0YzA3NWNjMGFlYzA1MzAyNjJjNzc3ODFiYzMyNTCUFf2M: 00:17:36.743 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:36.743 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:36.743 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:36.743 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.743 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.743 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.743 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:36.743 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:36.743 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:36.743 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:17:36.743 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:36.743 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:36.743 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:36.743 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:36.743 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:36.743 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:36.743 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.743 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.743 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.743 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:36.743 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:36.743 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:37.003 00:17:37.003 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:37.003 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:37.004 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.265 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.265 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:17:37.265 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.265 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.265 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.265 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:37.265 { 00:17:37.265 "cntlid": 103, 00:17:37.265 "qid": 0, 00:17:37.265 "state": "enabled", 00:17:37.265 "thread": "nvmf_tgt_poll_group_000", 00:17:37.265 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:37.265 "listen_address": { 00:17:37.265 "trtype": "TCP", 00:17:37.265 "adrfam": "IPv4", 00:17:37.265 "traddr": "10.0.0.2", 00:17:37.265 "trsvcid": "4420" 00:17:37.265 }, 00:17:37.265 "peer_address": { 00:17:37.265 "trtype": "TCP", 00:17:37.265 "adrfam": "IPv4", 00:17:37.265 "traddr": "10.0.0.1", 00:17:37.265 "trsvcid": "50856" 00:17:37.265 }, 00:17:37.265 "auth": { 00:17:37.265 "state": "completed", 00:17:37.265 "digest": "sha512", 00:17:37.265 "dhgroup": "null" 00:17:37.265 } 00:17:37.265 } 00:17:37.265 ]' 00:17:37.265 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:37.265 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:37.265 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:37.265 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:37.265 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:37.265 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:37.265 14:31:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:37.265 14:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:37.526 14:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGM1OWNiNWE5YTRkMjE1MzBiZjdjZjdiZDUwN2UwZDI4ZDk0ODViNjc5OGM3M2Y4MGIwMzRkZDJiNDQwNTIyOSElVmY=: 00:17:37.526 14:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MGM1OWNiNWE5YTRkMjE1MzBiZjdjZjdiZDUwN2UwZDI4ZDk0ODViNjc5OGM3M2Y4MGIwMzRkZDJiNDQwNTIyOSElVmY=: 00:17:38.466 14:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.466 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.466 14:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:38.466 14:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.466 14:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.466 14:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.466 14:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:38.466 14:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:38.466 14:31:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:38.466 14:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:38.466 14:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:17:38.466 14:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:38.466 14:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:38.466 14:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:38.466 14:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:38.466 14:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:38.466 14:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.466 14:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.466 14:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.466 14:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.466 14:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.466 14:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.466 14:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.726 00:17:38.726 14:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:38.726 14:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:38.726 14:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.726 14:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.726 14:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.726 14:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.726 14:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.726 14:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.726 14:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:38.726 { 00:17:38.726 "cntlid": 105, 00:17:38.726 "qid": 0, 00:17:38.726 "state": "enabled", 00:17:38.726 "thread": "nvmf_tgt_poll_group_000", 00:17:38.726 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:38.726 "listen_address": { 00:17:38.726 "trtype": "TCP", 00:17:38.726 "adrfam": "IPv4", 00:17:38.726 "traddr": "10.0.0.2", 00:17:38.726 "trsvcid": "4420" 00:17:38.726 }, 00:17:38.726 "peer_address": { 00:17:38.726 "trtype": "TCP", 00:17:38.726 "adrfam": "IPv4", 00:17:38.726 "traddr": "10.0.0.1", 00:17:38.726 "trsvcid": "50898" 00:17:38.726 }, 00:17:38.726 "auth": { 00:17:38.726 "state": "completed", 00:17:38.726 "digest": "sha512", 00:17:38.726 "dhgroup": "ffdhe2048" 00:17:38.726 } 00:17:38.726 } 00:17:38.726 ]' 00:17:38.726 14:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:38.987 14:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:38.987 14:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:38.987 14:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:38.987 14:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:38.987 14:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.987 14:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.987 14:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:39.247 14:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2Y5MjAxOWM2N2EwN2ZlZTA0OTg2MjgyM2VhZjQ2MWU2OWY3ZGE0OTJhMzRlMzRh/iBJUQ==: --dhchap-ctrl-secret 
DHHC-1:03:MzkyYWIyZDM0ODNlMWRhNjQxYzViNmEzOTNmMTVjNjQ3MTdhYzkxYWFmYWVmOGE5MGFiNzNmYWFiN2NkOWZjZr3RYkI=: 00:17:39.247 14:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:Y2Y5MjAxOWM2N2EwN2ZlZTA0OTg2MjgyM2VhZjQ2MWU2OWY3ZGE0OTJhMzRlMzRh/iBJUQ==: --dhchap-ctrl-secret DHHC-1:03:MzkyYWIyZDM0ODNlMWRhNjQxYzViNmEzOTNmMTVjNjQ3MTdhYzkxYWFmYWVmOGE5MGFiNzNmYWFiN2NkOWZjZr3RYkI=: 00:17:39.818 14:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:39.818 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:39.818 14:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:39.818 14:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.818 14:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.818 14:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.818 14:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:39.818 14:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:39.818 14:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:40.078 14:31:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:17:40.078 14:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:40.079 14:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:40.079 14:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:40.079 14:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:40.079 14:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.079 14:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.079 14:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.079 14:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.079 14:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.079 14:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.079 14:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.079 14:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.339 00:17:40.339 14:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:40.339 14:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:40.339 14:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:40.600 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.600 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:40.600 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.600 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.600 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.600 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:40.600 { 00:17:40.600 "cntlid": 107, 00:17:40.600 "qid": 0, 00:17:40.600 "state": "enabled", 00:17:40.600 "thread": "nvmf_tgt_poll_group_000", 00:17:40.600 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:40.600 "listen_address": { 00:17:40.600 "trtype": "TCP", 00:17:40.600 "adrfam": "IPv4", 00:17:40.600 "traddr": "10.0.0.2", 00:17:40.600 "trsvcid": "4420" 00:17:40.600 }, 00:17:40.600 "peer_address": { 00:17:40.600 "trtype": "TCP", 00:17:40.600 "adrfam": "IPv4", 00:17:40.600 "traddr": "10.0.0.1", 00:17:40.600 "trsvcid": "41472" 00:17:40.600 }, 00:17:40.600 "auth": { 00:17:40.600 "state": 
"completed", 00:17:40.600 "digest": "sha512", 00:17:40.600 "dhgroup": "ffdhe2048" 00:17:40.600 } 00:17:40.600 } 00:17:40.600 ]' 00:17:40.600 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:40.600 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:40.600 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:40.600 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:40.600 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:40.600 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:40.600 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:40.600 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.861 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGNmMjFjMzBiZWE1N2FkYTg4NDhiMmVmNTU1OWFmNjV3h9ZG: --dhchap-ctrl-secret DHHC-1:02:OWVkNzBiZjE5NDE4NjhhNTYyMGVhZDU5ZWRmZDkyZDNlNDM0NDUyMWJmOTA2YjYzsHc8qg==: 00:17:40.861 14:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MGNmMjFjMzBiZWE1N2FkYTg4NDhiMmVmNTU1OWFmNjV3h9ZG: --dhchap-ctrl-secret DHHC-1:02:OWVkNzBiZjE5NDE4NjhhNTYyMGVhZDU5ZWRmZDkyZDNlNDM0NDUyMWJmOTA2YjYzsHc8qg==: 00:17:41.801 14:31:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.801 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.802 14:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:41.802 14:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.802 14:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.802 14:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.802 14:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:41.802 14:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:41.802 14:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:41.802 14:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:17:41.802 14:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:41.802 14:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:41.802 14:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:41.802 14:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:41.802 14:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:41.802 14:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.802 14:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.802 14:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.802 14:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.802 14:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.802 14:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.802 14:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.062 00:17:42.062 14:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:42.062 14:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:42.062 14:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.323 
14:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.323 14:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.323 14:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.323 14:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.323 14:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.323 14:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:42.323 { 00:17:42.323 "cntlid": 109, 00:17:42.323 "qid": 0, 00:17:42.323 "state": "enabled", 00:17:42.323 "thread": "nvmf_tgt_poll_group_000", 00:17:42.323 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:42.323 "listen_address": { 00:17:42.323 "trtype": "TCP", 00:17:42.323 "adrfam": "IPv4", 00:17:42.323 "traddr": "10.0.0.2", 00:17:42.323 "trsvcid": "4420" 00:17:42.323 }, 00:17:42.323 "peer_address": { 00:17:42.323 "trtype": "TCP", 00:17:42.323 "adrfam": "IPv4", 00:17:42.323 "traddr": "10.0.0.1", 00:17:42.323 "trsvcid": "41506" 00:17:42.323 }, 00:17:42.323 "auth": { 00:17:42.323 "state": "completed", 00:17:42.323 "digest": "sha512", 00:17:42.323 "dhgroup": "ffdhe2048" 00:17:42.323 } 00:17:42.323 } 00:17:42.323 ]' 00:17:42.323 14:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:42.323 14:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:42.323 14:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:42.323 14:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:42.323 14:31:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:42.323 14:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.323 14:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.323 14:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.583 14:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzZmNTdkY2ViZWE4OGVmYWIwMzAzNWJjNDRiMzVmZDljNTQ2NDBkMDBkZmQ2ZmFhX3APDw==: --dhchap-ctrl-secret DHHC-1:01:MjA0YzA3NWNjMGFlYzA1MzAyNjJjNzc3ODFiYzMyNTCUFf2M: 00:17:42.583 14:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:YzZmNTdkY2ViZWE4OGVmYWIwMzAzNWJjNDRiMzVmZDljNTQ2NDBkMDBkZmQ2ZmFhX3APDw==: --dhchap-ctrl-secret DHHC-1:01:MjA0YzA3NWNjMGFlYzA1MzAyNjJjNzc3ODFiYzMyNTCUFf2M: 00:17:43.523 14:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.523 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.523 14:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:43.523 14:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.523 14:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.523 
14:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.523 14:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:43.523 14:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:43.523 14:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:43.523 14:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:17:43.523 14:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:43.523 14:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:43.523 14:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:43.523 14:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:43.523 14:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:43.523 14:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:43.523 14:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.523 14:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.523 14:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.523 14:31:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:43.523 14:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:43.523 14:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:43.784 00:17:43.784 14:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:43.784 14:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:43.784 14:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.784 14:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.784 14:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:43.784 14:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.784 14:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.045 14:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.045 14:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:44.045 { 00:17:44.045 "cntlid": 111, 
00:17:44.045 "qid": 0, 00:17:44.045 "state": "enabled", 00:17:44.045 "thread": "nvmf_tgt_poll_group_000", 00:17:44.045 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:44.045 "listen_address": { 00:17:44.045 "trtype": "TCP", 00:17:44.045 "adrfam": "IPv4", 00:17:44.045 "traddr": "10.0.0.2", 00:17:44.045 "trsvcid": "4420" 00:17:44.045 }, 00:17:44.045 "peer_address": { 00:17:44.045 "trtype": "TCP", 00:17:44.045 "adrfam": "IPv4", 00:17:44.045 "traddr": "10.0.0.1", 00:17:44.045 "trsvcid": "41520" 00:17:44.045 }, 00:17:44.045 "auth": { 00:17:44.045 "state": "completed", 00:17:44.045 "digest": "sha512", 00:17:44.045 "dhgroup": "ffdhe2048" 00:17:44.045 } 00:17:44.045 } 00:17:44.045 ]' 00:17:44.045 14:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:44.045 14:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:44.045 14:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:44.045 14:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:44.045 14:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:44.045 14:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.045 14:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.045 14:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.305 14:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MGM1OWNiNWE5YTRkMjE1MzBiZjdjZjdiZDUwN2UwZDI4ZDk0ODViNjc5OGM3M2Y4MGIwMzRkZDJiNDQwNTIyOSElVmY=: 00:17:44.305 14:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MGM1OWNiNWE5YTRkMjE1MzBiZjdjZjdiZDUwN2UwZDI4ZDk0ODViNjc5OGM3M2Y4MGIwMzRkZDJiNDQwNTIyOSElVmY=: 00:17:44.877 14:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.139 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.139 14:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:45.139 14:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.139 14:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.139 14:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.139 14:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:45.139 14:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:45.139 14:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:45.139 14:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:45.139 14:31:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:17:45.139 14:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:45.140 14:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:45.140 14:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:45.140 14:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:45.140 14:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.140 14:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.140 14:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.140 14:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.140 14:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.140 14:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.140 14:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.140 14:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.400 00:17:45.400 14:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:45.400 14:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:45.400 14:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:45.661 14:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.661 14:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:45.661 14:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.661 14:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.661 14:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.661 14:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:45.661 { 00:17:45.661 "cntlid": 113, 00:17:45.661 "qid": 0, 00:17:45.661 "state": "enabled", 00:17:45.661 "thread": "nvmf_tgt_poll_group_000", 00:17:45.661 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:45.661 "listen_address": { 00:17:45.661 "trtype": "TCP", 00:17:45.661 "adrfam": "IPv4", 00:17:45.661 "traddr": "10.0.0.2", 00:17:45.661 "trsvcid": "4420" 00:17:45.661 }, 00:17:45.661 "peer_address": { 00:17:45.661 "trtype": "TCP", 00:17:45.661 "adrfam": "IPv4", 00:17:45.661 "traddr": "10.0.0.1", 00:17:45.661 "trsvcid": "41540" 00:17:45.661 }, 00:17:45.661 "auth": { 00:17:45.661 "state": 
"completed", 00:17:45.661 "digest": "sha512", 00:17:45.661 "dhgroup": "ffdhe3072" 00:17:45.661 } 00:17:45.661 } 00:17:45.661 ]' 00:17:45.661 14:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:45.661 14:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:45.661 14:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:45.661 14:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:45.661 14:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:45.922 14:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:45.922 14:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:45.922 14:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:45.922 14:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2Y5MjAxOWM2N2EwN2ZlZTA0OTg2MjgyM2VhZjQ2MWU2OWY3ZGE0OTJhMzRlMzRh/iBJUQ==: --dhchap-ctrl-secret DHHC-1:03:MzkyYWIyZDM0ODNlMWRhNjQxYzViNmEzOTNmMTVjNjQ3MTdhYzkxYWFmYWVmOGE5MGFiNzNmYWFiN2NkOWZjZr3RYkI=: 00:17:45.922 14:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:Y2Y5MjAxOWM2N2EwN2ZlZTA0OTg2MjgyM2VhZjQ2MWU2OWY3ZGE0OTJhMzRlMzRh/iBJUQ==: --dhchap-ctrl-secret 
DHHC-1:03:MzkyYWIyZDM0ODNlMWRhNjQxYzViNmEzOTNmMTVjNjQ3MTdhYzkxYWFmYWVmOGE5MGFiNzNmYWFiN2NkOWZjZr3RYkI=: 00:17:46.862 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:46.862 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:46.862 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:46.862 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.862 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.862 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.862 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:46.862 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:46.862 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:46.862 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:17:46.862 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:46.862 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:46.862 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:46.862 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:17:46.862 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:46.862 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:46.862 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.862 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.862 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.862 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:46.862 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:46.862 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.122 00:17:47.122 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:47.122 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:47.122 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.383 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.383 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:47.383 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.383 14:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.383 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.383 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:47.383 { 00:17:47.383 "cntlid": 115, 00:17:47.383 "qid": 0, 00:17:47.383 "state": "enabled", 00:17:47.383 "thread": "nvmf_tgt_poll_group_000", 00:17:47.383 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:47.383 "listen_address": { 00:17:47.383 "trtype": "TCP", 00:17:47.383 "adrfam": "IPv4", 00:17:47.383 "traddr": "10.0.0.2", 00:17:47.383 "trsvcid": "4420" 00:17:47.383 }, 00:17:47.383 "peer_address": { 00:17:47.383 "trtype": "TCP", 00:17:47.383 "adrfam": "IPv4", 00:17:47.383 "traddr": "10.0.0.1", 00:17:47.383 "trsvcid": "41570" 00:17:47.383 }, 00:17:47.383 "auth": { 00:17:47.383 "state": "completed", 00:17:47.383 "digest": "sha512", 00:17:47.383 "dhgroup": "ffdhe3072" 00:17:47.383 } 00:17:47.383 } 00:17:47.383 ]' 00:17:47.383 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:47.383 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:47.383 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:47.383 14:31:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:47.383 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:47.642 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:47.642 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:47.642 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:47.642 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGNmMjFjMzBiZWE1N2FkYTg4NDhiMmVmNTU1OWFmNjV3h9ZG: --dhchap-ctrl-secret DHHC-1:02:OWVkNzBiZjE5NDE4NjhhNTYyMGVhZDU5ZWRmZDkyZDNlNDM0NDUyMWJmOTA2YjYzsHc8qg==: 00:17:47.642 14:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MGNmMjFjMzBiZWE1N2FkYTg4NDhiMmVmNTU1OWFmNjV3h9ZG: --dhchap-ctrl-secret DHHC-1:02:OWVkNzBiZjE5NDE4NjhhNTYyMGVhZDU5ZWRmZDkyZDNlNDM0NDUyMWJmOTA2YjYzsHc8qg==: 00:17:48.580 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:48.580 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:48.580 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:48.580 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:48.580 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.581 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.581 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:48.581 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:48.581 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:48.581 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:17:48.581 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:48.581 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:48.581 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:48.581 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:48.581 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:48.581 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:48.581 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.581 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:17:48.581 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.581 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:48.581 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:48.581 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:48.841 00:17:48.841 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:48.841 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:48.841 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.101 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.101 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:49.101 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.101 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.101 14:31:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.101 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:49.101 { 00:17:49.101 "cntlid": 117, 00:17:49.101 "qid": 0, 00:17:49.101 "state": "enabled", 00:17:49.101 "thread": "nvmf_tgt_poll_group_000", 00:17:49.101 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:49.101 "listen_address": { 00:17:49.101 "trtype": "TCP", 00:17:49.101 "adrfam": "IPv4", 00:17:49.101 "traddr": "10.0.0.2", 00:17:49.101 "trsvcid": "4420" 00:17:49.101 }, 00:17:49.101 "peer_address": { 00:17:49.101 "trtype": "TCP", 00:17:49.101 "adrfam": "IPv4", 00:17:49.101 "traddr": "10.0.0.1", 00:17:49.101 "trsvcid": "41592" 00:17:49.101 }, 00:17:49.101 "auth": { 00:17:49.101 "state": "completed", 00:17:49.101 "digest": "sha512", 00:17:49.101 "dhgroup": "ffdhe3072" 00:17:49.101 } 00:17:49.101 } 00:17:49.101 ]' 00:17:49.101 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:49.101 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:49.101 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:49.101 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:49.101 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:49.362 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:49.362 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.362 14:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:49.362 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzZmNTdkY2ViZWE4OGVmYWIwMzAzNWJjNDRiMzVmZDljNTQ2NDBkMDBkZmQ2ZmFhX3APDw==: --dhchap-ctrl-secret DHHC-1:01:MjA0YzA3NWNjMGFlYzA1MzAyNjJjNzc3ODFiYzMyNTCUFf2M: 00:17:49.362 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:YzZmNTdkY2ViZWE4OGVmYWIwMzAzNWJjNDRiMzVmZDljNTQ2NDBkMDBkZmQ2ZmFhX3APDw==: --dhchap-ctrl-secret DHHC-1:01:MjA0YzA3NWNjMGFlYzA1MzAyNjJjNzc3ODFiYzMyNTCUFf2M: 00:17:50.303 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.303 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:50.303 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:50.303 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.303 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.303 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.303 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:50.303 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:50.303 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:50.303 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:17:50.303 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:50.303 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:50.303 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:50.303 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:50.303 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.303 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:50.303 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.303 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.303 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.303 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:50.303 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:50.303 14:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:50.564 00:17:50.564 14:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:50.564 14:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:50.564 14:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.825 14:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.825 14:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.825 14:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.825 14:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.825 14:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.825 14:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:50.825 { 00:17:50.826 "cntlid": 119, 00:17:50.826 "qid": 0, 00:17:50.826 "state": "enabled", 00:17:50.826 "thread": "nvmf_tgt_poll_group_000", 00:17:50.826 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:50.826 "listen_address": { 00:17:50.826 "trtype": "TCP", 00:17:50.826 "adrfam": "IPv4", 00:17:50.826 "traddr": "10.0.0.2", 00:17:50.826 "trsvcid": "4420" 00:17:50.826 }, 00:17:50.826 "peer_address": { 00:17:50.826 "trtype": "TCP", 00:17:50.826 "adrfam": "IPv4", 00:17:50.826 "traddr": "10.0.0.1", 
00:17:50.826 "trsvcid": "55920" 00:17:50.826 }, 00:17:50.826 "auth": { 00:17:50.826 "state": "completed", 00:17:50.826 "digest": "sha512", 00:17:50.826 "dhgroup": "ffdhe3072" 00:17:50.826 } 00:17:50.826 } 00:17:50.826 ]' 00:17:50.826 14:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:50.826 14:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:50.826 14:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:50.826 14:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:50.826 14:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:51.087 14:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.087 14:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.087 14:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.087 14:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGM1OWNiNWE5YTRkMjE1MzBiZjdjZjdiZDUwN2UwZDI4ZDk0ODViNjc5OGM3M2Y4MGIwMzRkZDJiNDQwNTIyOSElVmY=: 00:17:51.087 14:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MGM1OWNiNWE5YTRkMjE1MzBiZjdjZjdiZDUwN2UwZDI4ZDk0ODViNjc5OGM3M2Y4MGIwMzRkZDJiNDQwNTIyOSElVmY=: 00:17:52.027 14:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.027 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.027 14:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:52.027 14:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.027 14:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.028 14:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.028 14:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:52.028 14:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:52.028 14:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:52.028 14:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:52.028 14:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:17:52.028 14:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:52.028 14:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:52.028 14:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:52.028 14:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:52.028 14:31:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:52.028 14:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:52.028 14:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.028 14:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.028 14:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.028 14:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:52.028 14:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:52.028 14:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:52.290 00:17:52.290 14:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:52.290 14:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.290 14:31:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:52.550 14:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.550 14:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.550 14:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.550 14:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.550 14:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.550 14:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:52.550 { 00:17:52.550 "cntlid": 121, 00:17:52.551 "qid": 0, 00:17:52.551 "state": "enabled", 00:17:52.551 "thread": "nvmf_tgt_poll_group_000", 00:17:52.551 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:52.551 "listen_address": { 00:17:52.551 "trtype": "TCP", 00:17:52.551 "adrfam": "IPv4", 00:17:52.551 "traddr": "10.0.0.2", 00:17:52.551 "trsvcid": "4420" 00:17:52.551 }, 00:17:52.551 "peer_address": { 00:17:52.551 "trtype": "TCP", 00:17:52.551 "adrfam": "IPv4", 00:17:52.551 "traddr": "10.0.0.1", 00:17:52.551 "trsvcid": "55956" 00:17:52.551 }, 00:17:52.551 "auth": { 00:17:52.551 "state": "completed", 00:17:52.551 "digest": "sha512", 00:17:52.551 "dhgroup": "ffdhe4096" 00:17:52.551 } 00:17:52.551 } 00:17:52.551 ]' 00:17:52.551 14:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:52.551 14:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:52.551 14:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:52.811 14:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:52.811 14:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:52.811 14:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.811 14:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.811 14:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:52.811 14:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2Y5MjAxOWM2N2EwN2ZlZTA0OTg2MjgyM2VhZjQ2MWU2OWY3ZGE0OTJhMzRlMzRh/iBJUQ==: --dhchap-ctrl-secret DHHC-1:03:MzkyYWIyZDM0ODNlMWRhNjQxYzViNmEzOTNmMTVjNjQ3MTdhYzkxYWFmYWVmOGE5MGFiNzNmYWFiN2NkOWZjZr3RYkI=: 00:17:52.811 14:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:Y2Y5MjAxOWM2N2EwN2ZlZTA0OTg2MjgyM2VhZjQ2MWU2OWY3ZGE0OTJhMzRlMzRh/iBJUQ==: --dhchap-ctrl-secret DHHC-1:03:MzkyYWIyZDM0ODNlMWRhNjQxYzViNmEzOTNmMTVjNjQ3MTdhYzkxYWFmYWVmOGE5MGFiNzNmYWFiN2NkOWZjZr3RYkI=: 00:17:53.754 14:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.754 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.754 14:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:53.754 14:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.754 14:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.754 14:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.754 14:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:53.754 14:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:53.754 14:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:53.754 14:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:17:53.754 14:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:53.754 14:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:53.754 14:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:53.754 14:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:53.754 14:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:53.754 14:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.754 14:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.754 14:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:53.754 14:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.754 14:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.754 14:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.754 14:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:54.015 00:17:54.015 14:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:54.015 14:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:54.015 14:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.276 14:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.276 14:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.276 14:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.276 14:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.276 
14:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.276 14:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:54.276 { 00:17:54.276 "cntlid": 123, 00:17:54.276 "qid": 0, 00:17:54.276 "state": "enabled", 00:17:54.277 "thread": "nvmf_tgt_poll_group_000", 00:17:54.277 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:54.277 "listen_address": { 00:17:54.277 "trtype": "TCP", 00:17:54.277 "adrfam": "IPv4", 00:17:54.277 "traddr": "10.0.0.2", 00:17:54.277 "trsvcid": "4420" 00:17:54.277 }, 00:17:54.277 "peer_address": { 00:17:54.277 "trtype": "TCP", 00:17:54.277 "adrfam": "IPv4", 00:17:54.277 "traddr": "10.0.0.1", 00:17:54.277 "trsvcid": "55986" 00:17:54.277 }, 00:17:54.277 "auth": { 00:17:54.277 "state": "completed", 00:17:54.277 "digest": "sha512", 00:17:54.277 "dhgroup": "ffdhe4096" 00:17:54.277 } 00:17:54.277 } 00:17:54.277 ]' 00:17:54.277 14:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:54.277 14:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:54.277 14:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:54.277 14:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:54.277 14:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:54.537 14:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.537 14:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.537 14:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.537 14:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGNmMjFjMzBiZWE1N2FkYTg4NDhiMmVmNTU1OWFmNjV3h9ZG: --dhchap-ctrl-secret DHHC-1:02:OWVkNzBiZjE5NDE4NjhhNTYyMGVhZDU5ZWRmZDkyZDNlNDM0NDUyMWJmOTA2YjYzsHc8qg==: 00:17:54.537 14:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MGNmMjFjMzBiZWE1N2FkYTg4NDhiMmVmNTU1OWFmNjV3h9ZG: --dhchap-ctrl-secret DHHC-1:02:OWVkNzBiZjE5NDE4NjhhNTYyMGVhZDU5ZWRmZDkyZDNlNDM0NDUyMWJmOTA2YjYzsHc8qg==: 00:17:55.481 14:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.481 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.481 14:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:55.481 14:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.481 14:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.481 14:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.481 14:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:55.481 14:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:55.481 14:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:55.481 14:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:17:55.481 14:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:55.481 14:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:55.481 14:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:55.481 14:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:55.481 14:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.481 14:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:55.481 14:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.481 14:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.481 14:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.481 14:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:55.481 14:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:55.481 14:31:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:55.743 00:17:55.743 14:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:55.743 14:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:55.743 14:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.003 14:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.004 14:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.004 14:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.004 14:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.004 14:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.004 14:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:56.004 { 00:17:56.004 "cntlid": 125, 00:17:56.004 "qid": 0, 00:17:56.004 "state": "enabled", 00:17:56.004 "thread": "nvmf_tgt_poll_group_000", 00:17:56.004 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:56.004 "listen_address": { 00:17:56.004 "trtype": "TCP", 00:17:56.004 "adrfam": "IPv4", 00:17:56.004 "traddr": "10.0.0.2", 00:17:56.004 "trsvcid": "4420" 00:17:56.004 }, 00:17:56.004 "peer_address": { 
00:17:56.004 "trtype": "TCP", 00:17:56.004 "adrfam": "IPv4", 00:17:56.004 "traddr": "10.0.0.1", 00:17:56.004 "trsvcid": "56002" 00:17:56.004 }, 00:17:56.004 "auth": { 00:17:56.004 "state": "completed", 00:17:56.004 "digest": "sha512", 00:17:56.004 "dhgroup": "ffdhe4096" 00:17:56.004 } 00:17:56.004 } 00:17:56.004 ]' 00:17:56.004 14:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:56.004 14:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:56.004 14:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:56.004 14:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:56.004 14:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:56.265 14:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.265 14:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.265 14:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.265 14:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzZmNTdkY2ViZWE4OGVmYWIwMzAzNWJjNDRiMzVmZDljNTQ2NDBkMDBkZmQ2ZmFhX3APDw==: --dhchap-ctrl-secret DHHC-1:01:MjA0YzA3NWNjMGFlYzA1MzAyNjJjNzc3ODFiYzMyNTCUFf2M: 00:17:56.265 14:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret 
DHHC-1:02:YzZmNTdkY2ViZWE4OGVmYWIwMzAzNWJjNDRiMzVmZDljNTQ2NDBkMDBkZmQ2ZmFhX3APDw==: --dhchap-ctrl-secret DHHC-1:01:MjA0YzA3NWNjMGFlYzA1MzAyNjJjNzc3ODFiYzMyNTCUFf2M: 00:17:57.210 14:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.210 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.210 14:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:57.210 14:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.210 14:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.210 14:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.210 14:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:57.210 14:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:57.210 14:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:57.210 14:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:17:57.210 14:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:57.210 14:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:57.210 14:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:57.210 14:31:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:57.210 14:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:57.210 14:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:57.210 14:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.210 14:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.210 14:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.210 14:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:57.210 14:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:57.210 14:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:57.470 00:17:57.730 14:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:57.730 14:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:57.730 14:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.730 14:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.731 14:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:57.731 14:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.731 14:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.731 14:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.731 14:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:57.731 { 00:17:57.731 "cntlid": 127, 00:17:57.731 "qid": 0, 00:17:57.731 "state": "enabled", 00:17:57.731 "thread": "nvmf_tgt_poll_group_000", 00:17:57.731 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:57.731 "listen_address": { 00:17:57.731 "trtype": "TCP", 00:17:57.731 "adrfam": "IPv4", 00:17:57.731 "traddr": "10.0.0.2", 00:17:57.731 "trsvcid": "4420" 00:17:57.731 }, 00:17:57.731 "peer_address": { 00:17:57.731 "trtype": "TCP", 00:17:57.731 "adrfam": "IPv4", 00:17:57.731 "traddr": "10.0.0.1", 00:17:57.731 "trsvcid": "56022" 00:17:57.731 }, 00:17:57.731 "auth": { 00:17:57.731 "state": "completed", 00:17:57.731 "digest": "sha512", 00:17:57.731 "dhgroup": "ffdhe4096" 00:17:57.731 } 00:17:57.731 } 00:17:57.731 ]' 00:17:57.731 14:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:57.731 14:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:57.731 14:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:57.991 14:31:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:57.991 14:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:57.991 14:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.991 14:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.991 14:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.991 14:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGM1OWNiNWE5YTRkMjE1MzBiZjdjZjdiZDUwN2UwZDI4ZDk0ODViNjc5OGM3M2Y4MGIwMzRkZDJiNDQwNTIyOSElVmY=: 00:17:57.991 14:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MGM1OWNiNWE5YTRkMjE1MzBiZjdjZjdiZDUwN2UwZDI4ZDk0ODViNjc5OGM3M2Y4MGIwMzRkZDJiNDQwNTIyOSElVmY=: 00:17:58.933 14:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.933 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.933 14:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:58.933 14:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.933 14:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:17:58.933 14:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.933 14:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:58.933 14:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:58.933 14:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:58.933 14:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:58.933 14:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:17:58.933 14:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:58.933 14:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:58.933 14:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:58.933 14:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:58.933 14:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:58.933 14:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:58.933 14:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.933 14:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:17:58.933 14:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.933 14:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:58.933 14:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:58.933 14:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.506 00:17:59.506 14:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:59.506 14:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.506 14:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:59.506 14:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.506 14:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.506 14:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.506 14:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.506 14:31:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.506 14:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:59.506 { 00:17:59.506 "cntlid": 129, 00:17:59.506 "qid": 0, 00:17:59.506 "state": "enabled", 00:17:59.506 "thread": "nvmf_tgt_poll_group_000", 00:17:59.506 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:59.506 "listen_address": { 00:17:59.506 "trtype": "TCP", 00:17:59.506 "adrfam": "IPv4", 00:17:59.506 "traddr": "10.0.0.2", 00:17:59.506 "trsvcid": "4420" 00:17:59.506 }, 00:17:59.506 "peer_address": { 00:17:59.506 "trtype": "TCP", 00:17:59.506 "adrfam": "IPv4", 00:17:59.506 "traddr": "10.0.0.1", 00:17:59.506 "trsvcid": "56058" 00:17:59.506 }, 00:17:59.506 "auth": { 00:17:59.506 "state": "completed", 00:17:59.506 "digest": "sha512", 00:17:59.506 "dhgroup": "ffdhe6144" 00:17:59.506 } 00:17:59.506 } 00:17:59.506 ]' 00:17:59.506 14:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:59.767 14:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:59.767 14:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:59.767 14:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:59.767 14:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:59.767 14:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.767 14:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.767 14:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.028 14:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2Y5MjAxOWM2N2EwN2ZlZTA0OTg2MjgyM2VhZjQ2MWU2OWY3ZGE0OTJhMzRlMzRh/iBJUQ==: --dhchap-ctrl-secret DHHC-1:03:MzkyYWIyZDM0ODNlMWRhNjQxYzViNmEzOTNmMTVjNjQ3MTdhYzkxYWFmYWVmOGE5MGFiNzNmYWFiN2NkOWZjZr3RYkI=: 00:18:00.029 14:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:Y2Y5MjAxOWM2N2EwN2ZlZTA0OTg2MjgyM2VhZjQ2MWU2OWY3ZGE0OTJhMzRlMzRh/iBJUQ==: --dhchap-ctrl-secret DHHC-1:03:MzkyYWIyZDM0ODNlMWRhNjQxYzViNmEzOTNmMTVjNjQ3MTdhYzkxYWFmYWVmOGE5MGFiNzNmYWFiN2NkOWZjZr3RYkI=: 00:18:00.600 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.600 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.600 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:00.600 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.600 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.600 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.600 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:00.600 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:00.600 14:31:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:00.861 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:18:00.861 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:00.861 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:00.861 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:00.862 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:00.862 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.862 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:00.862 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.862 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.862 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.862 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:00.862 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:00.862 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.434 00:18:01.434 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:01.434 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:01.434 14:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.434 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.434 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.434 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.434 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.434 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.434 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:01.434 { 00:18:01.434 "cntlid": 131, 00:18:01.434 "qid": 0, 00:18:01.434 "state": "enabled", 00:18:01.434 "thread": "nvmf_tgt_poll_group_000", 00:18:01.434 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:01.434 "listen_address": { 00:18:01.434 "trtype": "TCP", 00:18:01.434 "adrfam": "IPv4", 00:18:01.434 "traddr": "10.0.0.2", 00:18:01.434 
"trsvcid": "4420" 00:18:01.434 }, 00:18:01.434 "peer_address": { 00:18:01.434 "trtype": "TCP", 00:18:01.434 "adrfam": "IPv4", 00:18:01.434 "traddr": "10.0.0.1", 00:18:01.434 "trsvcid": "49486" 00:18:01.434 }, 00:18:01.434 "auth": { 00:18:01.434 "state": "completed", 00:18:01.434 "digest": "sha512", 00:18:01.434 "dhgroup": "ffdhe6144" 00:18:01.434 } 00:18:01.434 } 00:18:01.434 ]' 00:18:01.434 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:01.434 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:01.434 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:01.434 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:01.694 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:01.694 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:01.694 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.694 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.694 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGNmMjFjMzBiZWE1N2FkYTg4NDhiMmVmNTU1OWFmNjV3h9ZG: --dhchap-ctrl-secret DHHC-1:02:OWVkNzBiZjE5NDE4NjhhNTYyMGVhZDU5ZWRmZDkyZDNlNDM0NDUyMWJmOTA2YjYzsHc8qg==: 00:18:01.694 14:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 
00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MGNmMjFjMzBiZWE1N2FkYTg4NDhiMmVmNTU1OWFmNjV3h9ZG: --dhchap-ctrl-secret DHHC-1:02:OWVkNzBiZjE5NDE4NjhhNTYyMGVhZDU5ZWRmZDkyZDNlNDM0NDUyMWJmOTA2YjYzsHc8qg==: 00:18:02.636 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.636 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.636 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:02.636 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.636 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.636 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.636 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:02.636 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:02.636 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:02.636 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:18:02.636 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:02.636 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:02.636 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:02.636 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:02.636 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.636 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:02.636 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.636 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.636 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.636 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:02.636 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:02.636 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:03.212 00:18:03.212 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:03.212 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:18:03.212 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.212 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.212 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.212 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.212 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.212 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.213 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:03.213 { 00:18:03.213 "cntlid": 133, 00:18:03.213 "qid": 0, 00:18:03.213 "state": "enabled", 00:18:03.213 "thread": "nvmf_tgt_poll_group_000", 00:18:03.213 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:03.213 "listen_address": { 00:18:03.213 "trtype": "TCP", 00:18:03.213 "adrfam": "IPv4", 00:18:03.213 "traddr": "10.0.0.2", 00:18:03.213 "trsvcid": "4420" 00:18:03.213 }, 00:18:03.213 "peer_address": { 00:18:03.213 "trtype": "TCP", 00:18:03.213 "adrfam": "IPv4", 00:18:03.213 "traddr": "10.0.0.1", 00:18:03.213 "trsvcid": "49514" 00:18:03.213 }, 00:18:03.213 "auth": { 00:18:03.213 "state": "completed", 00:18:03.213 "digest": "sha512", 00:18:03.213 "dhgroup": "ffdhe6144" 00:18:03.213 } 00:18:03.213 } 00:18:03.213 ]' 00:18:03.213 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:03.213 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:03.213 14:31:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:03.537 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:03.537 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:03.537 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.537 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.537 14:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.537 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzZmNTdkY2ViZWE4OGVmYWIwMzAzNWJjNDRiMzVmZDljNTQ2NDBkMDBkZmQ2ZmFhX3APDw==: --dhchap-ctrl-secret DHHC-1:01:MjA0YzA3NWNjMGFlYzA1MzAyNjJjNzc3ODFiYzMyNTCUFf2M: 00:18:03.537 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:YzZmNTdkY2ViZWE4OGVmYWIwMzAzNWJjNDRiMzVmZDljNTQ2NDBkMDBkZmQ2ZmFhX3APDw==: --dhchap-ctrl-secret DHHC-1:01:MjA0YzA3NWNjMGFlYzA1MzAyNjJjNzc3ODFiYzMyNTCUFf2M: 00:18:04.223 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.538 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.538 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:04.538 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.538 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.538 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.538 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:04.538 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:04.538 14:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:04.538 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:18:04.538 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:04.538 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:04.538 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:04.538 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:04.538 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.538 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:04.538 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.538 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.538 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.538 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:04.538 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:04.538 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:04.858 00:18:04.858 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:04.858 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:04.858 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.122 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.122 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.122 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.122 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:05.122 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.122 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:05.122 { 00:18:05.122 "cntlid": 135, 00:18:05.122 "qid": 0, 00:18:05.122 "state": "enabled", 00:18:05.122 "thread": "nvmf_tgt_poll_group_000", 00:18:05.122 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:05.122 "listen_address": { 00:18:05.122 "trtype": "TCP", 00:18:05.122 "adrfam": "IPv4", 00:18:05.122 "traddr": "10.0.0.2", 00:18:05.122 "trsvcid": "4420" 00:18:05.122 }, 00:18:05.122 "peer_address": { 00:18:05.122 "trtype": "TCP", 00:18:05.122 "adrfam": "IPv4", 00:18:05.122 "traddr": "10.0.0.1", 00:18:05.122 "trsvcid": "49534" 00:18:05.122 }, 00:18:05.122 "auth": { 00:18:05.122 "state": "completed", 00:18:05.122 "digest": "sha512", 00:18:05.122 "dhgroup": "ffdhe6144" 00:18:05.122 } 00:18:05.122 } 00:18:05.122 ]' 00:18:05.122 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:05.122 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:05.122 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:05.122 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:05.122 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:05.122 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.122 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.122 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.383 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGM1OWNiNWE5YTRkMjE1MzBiZjdjZjdiZDUwN2UwZDI4ZDk0ODViNjc5OGM3M2Y4MGIwMzRkZDJiNDQwNTIyOSElVmY=: 00:18:05.383 14:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MGM1OWNiNWE5YTRkMjE1MzBiZjdjZjdiZDUwN2UwZDI4ZDk0ODViNjc5OGM3M2Y4MGIwMzRkZDJiNDQwNTIyOSElVmY=: 00:18:06.327 14:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.327 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.327 14:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:06.327 14:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.327 14:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.327 14:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.327 14:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:06.327 14:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:06.327 14:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:06.327 14:31:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:06.327 14:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:18:06.327 14:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:06.327 14:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:06.327 14:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:06.327 14:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:06.327 14:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:06.327 14:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:06.327 14:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.327 14:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.327 14:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.327 14:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:06.327 14:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:06.327 14:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:06.899 00:18:06.899 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:06.899 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:06.899 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.160 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.160 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:07.160 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.160 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.160 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.160 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:07.160 { 00:18:07.160 "cntlid": 137, 00:18:07.160 "qid": 0, 00:18:07.160 "state": "enabled", 00:18:07.160 "thread": "nvmf_tgt_poll_group_000", 00:18:07.160 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:07.160 "listen_address": { 00:18:07.160 "trtype": "TCP", 00:18:07.160 "adrfam": "IPv4", 00:18:07.160 "traddr": "10.0.0.2", 00:18:07.160 
"trsvcid": "4420" 00:18:07.160 }, 00:18:07.160 "peer_address": { 00:18:07.160 "trtype": "TCP", 00:18:07.160 "adrfam": "IPv4", 00:18:07.160 "traddr": "10.0.0.1", 00:18:07.160 "trsvcid": "49578" 00:18:07.160 }, 00:18:07.160 "auth": { 00:18:07.160 "state": "completed", 00:18:07.160 "digest": "sha512", 00:18:07.160 "dhgroup": "ffdhe8192" 00:18:07.160 } 00:18:07.160 } 00:18:07.160 ]' 00:18:07.160 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:07.160 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:07.160 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:07.160 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:07.160 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:07.160 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:07.160 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:07.160 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.426 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2Y5MjAxOWM2N2EwN2ZlZTA0OTg2MjgyM2VhZjQ2MWU2OWY3ZGE0OTJhMzRlMzRh/iBJUQ==: --dhchap-ctrl-secret DHHC-1:03:MzkyYWIyZDM0ODNlMWRhNjQxYzViNmEzOTNmMTVjNjQ3MTdhYzkxYWFmYWVmOGE5MGFiNzNmYWFiN2NkOWZjZr3RYkI=: 00:18:07.426 14:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:Y2Y5MjAxOWM2N2EwN2ZlZTA0OTg2MjgyM2VhZjQ2MWU2OWY3ZGE0OTJhMzRlMzRh/iBJUQ==: --dhchap-ctrl-secret DHHC-1:03:MzkyYWIyZDM0ODNlMWRhNjQxYzViNmEzOTNmMTVjNjQ3MTdhYzkxYWFmYWVmOGE5MGFiNzNmYWFiN2NkOWZjZr3RYkI=: 00:18:08.368 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.368 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.368 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:08.368 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.368 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.368 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.368 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:08.368 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:08.368 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:08.368 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:18:08.368 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:08.368 14:31:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:08.368 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:08.368 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:08.368 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.368 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:08.368 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.368 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.368 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.368 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:08.368 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:08.368 14:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:08.939 00:18:08.939 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:08.939 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:08.939 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.939 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.939 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.939 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.939 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.940 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.940 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:08.940 { 00:18:08.940 "cntlid": 139, 00:18:08.940 "qid": 0, 00:18:08.940 "state": "enabled", 00:18:08.940 "thread": "nvmf_tgt_poll_group_000", 00:18:08.940 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:08.940 "listen_address": { 00:18:08.940 "trtype": "TCP", 00:18:08.940 "adrfam": "IPv4", 00:18:08.940 "traddr": "10.0.0.2", 00:18:08.940 "trsvcid": "4420" 00:18:08.940 }, 00:18:08.940 "peer_address": { 00:18:08.940 "trtype": "TCP", 00:18:08.940 "adrfam": "IPv4", 00:18:08.940 "traddr": "10.0.0.1", 00:18:08.940 "trsvcid": "49610" 00:18:08.940 }, 00:18:08.940 "auth": { 00:18:08.940 "state": "completed", 00:18:08.940 "digest": "sha512", 00:18:08.940 "dhgroup": "ffdhe8192" 00:18:08.940 } 00:18:08.940 } 00:18:08.940 ]' 00:18:09.200 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:09.200 14:31:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:09.200 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:09.200 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:09.200 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:09.200 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.200 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.200 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:09.461 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGNmMjFjMzBiZWE1N2FkYTg4NDhiMmVmNTU1OWFmNjV3h9ZG: --dhchap-ctrl-secret DHHC-1:02:OWVkNzBiZjE5NDE4NjhhNTYyMGVhZDU5ZWRmZDkyZDNlNDM0NDUyMWJmOTA2YjYzsHc8qg==: 00:18:09.461 14:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:MGNmMjFjMzBiZWE1N2FkYTg4NDhiMmVmNTU1OWFmNjV3h9ZG: --dhchap-ctrl-secret DHHC-1:02:OWVkNzBiZjE5NDE4NjhhNTYyMGVhZDU5ZWRmZDkyZDNlNDM0NDUyMWJmOTA2YjYzsHc8qg==: 00:18:10.033 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:10.033 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.033 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:10.033 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.033 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.033 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.033 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:10.033 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:10.033 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:10.294 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:18:10.294 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:10.294 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:10.294 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:10.294 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:10.294 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:10.294 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
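The `--dhchap-secret` strings passed to `nvme connect` above use the DHHC-1 representation from NVMe in-band authentication: `DHHC-1:<hash-id>:<base64 payload>:`. A small sketch of how such a string decomposes, assuming the nvme-cli convention that the middle field names the transformation hash (`00` none, `01` SHA-256, `02` SHA-384, `03` SHA-512) and that the base64 payload is the key followed by a 4-byte CRC; the parser and its names are illustrative, not part of the test:

```python
import base64

# Assumed hash-transformation ids, per the nvme-cli gen-dhchap-key convention.
HMAC_NAMES = {"00": "none", "01": "sha256", "02": "sha384", "03": "sha512"}

def parse_dhchap_secret(secret: str) -> dict:
    """Split a DHHC-1 secret into its fields and decode the key material.

    Assuming the base64 payload carries the key followed by a 4-byte CRC,
    a 32-byte key decodes to 36 bytes.
    """
    prefix, hmac_id, b64, _trailing = secret.split(":")
    assert prefix == "DHHC-1"
    raw = base64.b64decode(b64)
    return {"hmac": HMAC_NAMES[hmac_id], "key_len": len(raw) - 4}

# One of the host secrets from the trace above (key1's --dhchap-secret).
info = parse_dhchap_secret(
    "DHHC-1:01:MGNmMjFjMzBiZWE1N2FkYTg4NDhiMmVmNTU1OWFmNjV3h9ZG:")
print(info)  # {'hmac': 'sha256', 'key_len': 32}
```

The same secret appears on both the `nvme connect` command line and the earlier `bdev_nvme_attach_controller` RPC, since host and bdev paths authenticate against the same key slot on the subsystem.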
00:18:10.294 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.294 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.294 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.294 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:10.294 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:10.294 14:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:10.867 00:18:10.867 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:10.867 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:10.867 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.128 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.128 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.128 14:31:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.128 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.128 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.128 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:11.128 { 00:18:11.128 "cntlid": 141, 00:18:11.128 "qid": 0, 00:18:11.128 "state": "enabled", 00:18:11.128 "thread": "nvmf_tgt_poll_group_000", 00:18:11.128 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:11.128 "listen_address": { 00:18:11.128 "trtype": "TCP", 00:18:11.128 "adrfam": "IPv4", 00:18:11.128 "traddr": "10.0.0.2", 00:18:11.128 "trsvcid": "4420" 00:18:11.128 }, 00:18:11.128 "peer_address": { 00:18:11.128 "trtype": "TCP", 00:18:11.128 "adrfam": "IPv4", 00:18:11.128 "traddr": "10.0.0.1", 00:18:11.128 "trsvcid": "36332" 00:18:11.128 }, 00:18:11.128 "auth": { 00:18:11.128 "state": "completed", 00:18:11.128 "digest": "sha512", 00:18:11.128 "dhgroup": "ffdhe8192" 00:18:11.128 } 00:18:11.128 } 00:18:11.128 ]' 00:18:11.128 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:11.128 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:11.128 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:11.128 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:11.128 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:11.128 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.128 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.128 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.389 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzZmNTdkY2ViZWE4OGVmYWIwMzAzNWJjNDRiMzVmZDljNTQ2NDBkMDBkZmQ2ZmFhX3APDw==: --dhchap-ctrl-secret DHHC-1:01:MjA0YzA3NWNjMGFlYzA1MzAyNjJjNzc3ODFiYzMyNTCUFf2M: 00:18:11.389 14:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:YzZmNTdkY2ViZWE4OGVmYWIwMzAzNWJjNDRiMzVmZDljNTQ2NDBkMDBkZmQ2ZmFhX3APDw==: --dhchap-ctrl-secret DHHC-1:01:MjA0YzA3NWNjMGFlYzA1MzAyNjJjNzc3ODFiYzMyNTCUFf2M: 00:18:12.334 14:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.334 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.334 14:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:12.334 14:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.334 14:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.334 14:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.334 14:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:12.334 14:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:12.334 14:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:12.334 14:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:18:12.334 14:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:12.334 14:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:12.334 14:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:12.334 14:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:12.334 14:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.334 14:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:12.334 14:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.334 14:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.334 14:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.334 14:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:12.334 14:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:12.334 14:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:12.905 00:18:12.905 14:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:12.905 14:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:12.905 14:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:13.166 14:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.166 14:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:13.166 14:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.166 14:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.166 14:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.166 14:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:13.166 { 00:18:13.166 "cntlid": 143, 00:18:13.166 "qid": 0, 00:18:13.166 "state": "enabled", 00:18:13.166 "thread": "nvmf_tgt_poll_group_000", 00:18:13.166 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:13.166 "listen_address": { 00:18:13.166 "trtype": "TCP", 00:18:13.166 "adrfam": 
"IPv4", 00:18:13.166 "traddr": "10.0.0.2", 00:18:13.166 "trsvcid": "4420" 00:18:13.166 }, 00:18:13.166 "peer_address": { 00:18:13.166 "trtype": "TCP", 00:18:13.166 "adrfam": "IPv4", 00:18:13.166 "traddr": "10.0.0.1", 00:18:13.166 "trsvcid": "36372" 00:18:13.166 }, 00:18:13.166 "auth": { 00:18:13.166 "state": "completed", 00:18:13.166 "digest": "sha512", 00:18:13.166 "dhgroup": "ffdhe8192" 00:18:13.166 } 00:18:13.166 } 00:18:13.166 ]' 00:18:13.166 14:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:13.166 14:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:13.166 14:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:13.166 14:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:13.167 14:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:13.167 14:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.167 14:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.167 14:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.428 14:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGM1OWNiNWE5YTRkMjE1MzBiZjdjZjdiZDUwN2UwZDI4ZDk0ODViNjc5OGM3M2Y4MGIwMzRkZDJiNDQwNTIyOSElVmY=: 00:18:13.428 14:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 
00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MGM1OWNiNWE5YTRkMjE1MzBiZjdjZjdiZDUwN2UwZDI4ZDk0ODViNjc5OGM3M2Y4MGIwMzRkZDJiNDQwNTIyOSElVmY=: 00:18:13.999 14:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.999 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.999 14:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:13.999 14:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.999 14:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.999 14:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.999 14:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:13.999 14:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:18:13.999 14:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:13.999 14:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:13.999 14:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:13.999 14:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:14.261 14:31:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:18:14.261 14:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:14.261 14:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:14.261 14:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:14.261 14:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:14.261 14:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:14.261 14:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.261 14:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.261 14:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.261 14:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.261 14:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.261 14:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.261 14:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.832 00:18:14.833 14:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:14.833 14:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:14.833 14:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.833 14:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.833 14:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.833 14:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.833 14:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.833 14:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.833 14:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:14.833 { 00:18:14.833 "cntlid": 145, 00:18:14.833 "qid": 0, 00:18:14.833 "state": "enabled", 00:18:14.833 "thread": "nvmf_tgt_poll_group_000", 00:18:14.833 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:14.833 "listen_address": { 00:18:14.833 "trtype": "TCP", 00:18:14.833 "adrfam": "IPv4", 00:18:14.833 "traddr": "10.0.0.2", 00:18:14.833 "trsvcid": "4420" 00:18:14.833 }, 00:18:14.833 "peer_address": { 00:18:14.833 "trtype": "TCP", 00:18:14.833 "adrfam": "IPv4", 00:18:14.833 "traddr": "10.0.0.1", 00:18:14.833 "trsvcid": "36394" 00:18:14.833 }, 00:18:14.833 "auth": { 00:18:14.833 "state": 
"completed", 00:18:14.833 "digest": "sha512", 00:18:14.833 "dhgroup": "ffdhe8192" 00:18:14.833 } 00:18:14.833 } 00:18:14.833 ]' 00:18:14.833 14:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:15.093 14:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:15.093 14:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:15.093 14:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:15.093 14:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:15.093 14:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.093 14:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.093 14:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.354 14:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2Y5MjAxOWM2N2EwN2ZlZTA0OTg2MjgyM2VhZjQ2MWU2OWY3ZGE0OTJhMzRlMzRh/iBJUQ==: --dhchap-ctrl-secret DHHC-1:03:MzkyYWIyZDM0ODNlMWRhNjQxYzViNmEzOTNmMTVjNjQ3MTdhYzkxYWFmYWVmOGE5MGFiNzNmYWFiN2NkOWZjZr3RYkI=: 00:18:15.354 14:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:Y2Y5MjAxOWM2N2EwN2ZlZTA0OTg2MjgyM2VhZjQ2MWU2OWY3ZGE0OTJhMzRlMzRh/iBJUQ==: --dhchap-ctrl-secret 
DHHC-1:03:MzkyYWIyZDM0ODNlMWRhNjQxYzViNmEzOTNmMTVjNjQ3MTdhYzkxYWFmYWVmOGE5MGFiNzNmYWFiN2NkOWZjZr3RYkI=: 00:18:15.925 14:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.925 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.925 14:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:15.925 14:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.925 14:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.925 14:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.925 14:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:18:15.925 14:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.925 14:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.925 14:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.925 14:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:18:15.925 14:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:15.925 14:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:18:15.925 14:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local 
arg=bdev_connect 00:18:15.925 14:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:15.925 14:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:15.925 14:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:15.925 14:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:18:15.925 14:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:15.925 14:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:16.497 request: 00:18:16.497 { 00:18:16.497 "name": "nvme0", 00:18:16.498 "trtype": "tcp", 00:18:16.498 "traddr": "10.0.0.2", 00:18:16.498 "adrfam": "ipv4", 00:18:16.498 "trsvcid": "4420", 00:18:16.498 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:16.498 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:16.498 "prchk_reftag": false, 00:18:16.498 "prchk_guard": false, 00:18:16.498 "hdgst": false, 00:18:16.498 "ddgst": false, 00:18:16.498 "dhchap_key": "key2", 00:18:16.498 "allow_unrecognized_csi": false, 00:18:16.498 "method": "bdev_nvme_attach_controller", 00:18:16.498 "req_id": 1 00:18:16.498 } 00:18:16.498 Got JSON-RPC error response 00:18:16.498 response: 00:18:16.498 { 00:18:16.498 "code": -5, 00:18:16.498 "message": 
"Input/output error" 00:18:16.498 } 00:18:16.498 14:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:16.498 14:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:16.498 14:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:16.498 14:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:16.498 14:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:16.498 14:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.498 14:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.498 14:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.498 14:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:16.498 14:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.498 14:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.498 14:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.498 14:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:16.498 14:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:16.498 14:31:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:16.498 14:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:16.498 14:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:16.498 14:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:16.498 14:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:16.498 14:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:16.498 14:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:16.498 14:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:17.070 request: 00:18:17.070 { 00:18:17.070 "name": "nvme0", 00:18:17.070 "trtype": "tcp", 00:18:17.070 "traddr": "10.0.0.2", 00:18:17.070 "adrfam": "ipv4", 00:18:17.070 "trsvcid": "4420", 00:18:17.070 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:17.070 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:17.070 "prchk_reftag": false, 00:18:17.070 "prchk_guard": false, 00:18:17.070 "hdgst": 
false, 00:18:17.070 "ddgst": false, 00:18:17.070 "dhchap_key": "key1", 00:18:17.070 "dhchap_ctrlr_key": "ckey2", 00:18:17.070 "allow_unrecognized_csi": false, 00:18:17.070 "method": "bdev_nvme_attach_controller", 00:18:17.070 "req_id": 1 00:18:17.070 } 00:18:17.070 Got JSON-RPC error response 00:18:17.070 response: 00:18:17.070 { 00:18:17.070 "code": -5, 00:18:17.070 "message": "Input/output error" 00:18:17.070 } 00:18:17.070 14:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:17.070 14:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:17.070 14:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:17.070 14:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:17.070 14:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:17.070 14:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.070 14:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.070 14:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.070 14:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:18:17.070 14:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.070 14:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.070 14:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.070 14:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.070 14:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:17.070 14:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.070 14:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:17.070 14:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:17.070 14:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:17.070 14:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:17.070 14:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.070 14:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.070 14:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.655 request: 00:18:17.655 { 00:18:17.655 "name": "nvme0", 00:18:17.655 "trtype": 
"tcp", 00:18:17.655 "traddr": "10.0.0.2", 00:18:17.655 "adrfam": "ipv4", 00:18:17.655 "trsvcid": "4420", 00:18:17.655 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:17.655 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:17.655 "prchk_reftag": false, 00:18:17.655 "prchk_guard": false, 00:18:17.655 "hdgst": false, 00:18:17.655 "ddgst": false, 00:18:17.655 "dhchap_key": "key1", 00:18:17.655 "dhchap_ctrlr_key": "ckey1", 00:18:17.655 "allow_unrecognized_csi": false, 00:18:17.655 "method": "bdev_nvme_attach_controller", 00:18:17.655 "req_id": 1 00:18:17.655 } 00:18:17.655 Got JSON-RPC error response 00:18:17.655 response: 00:18:17.655 { 00:18:17.655 "code": -5, 00:18:17.655 "message": "Input/output error" 00:18:17.655 } 00:18:17.655 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:17.655 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:17.655 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:17.655 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:17.655 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:17.655 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.655 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.655 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.655 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 3361489 00:18:17.655 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@950 -- # '[' -z 3361489 ']' 00:18:17.655 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 3361489 00:18:17.655 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:18:17.655 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:17.655 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3361489 00:18:17.655 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:17.655 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:17.655 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3361489' 00:18:17.655 killing process with pid 3361489 00:18:17.655 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 3361489 00:18:17.655 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 3361489 00:18:17.655 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:18:17.655 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:18:17.655 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:17.655 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.655 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=3389297 00:18:17.655 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 3389297 00:18:17.655 14:31:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:18:17.655 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3389297 ']' 00:18:17.655 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:17.655 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:17.655 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:17.655 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:17.655 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.914 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:17.914 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:17.914 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:18:17.914 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:17.914 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.914 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:17.914 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:17.914 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@163 -- # waitforlisten 3389297 00:18:17.915 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3389297 ']' 00:18:17.915 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:17.915 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:17.915 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:17.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:17.915 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:17.915 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.175 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:18.175 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:18.175 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:18:18.175 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.175 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.175 null0 00:18:18.175 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.175 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:18.175 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.gO9 00:18:18.175 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.175 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.175 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.175 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.RaW ]] 00:18:18.175 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.RaW 00:18:18.175 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.176 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.176 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.176 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:18.176 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.iyo 00:18:18.176 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.176 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.176 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.176 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.E7i ]] 00:18:18.176 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.E7i 00:18:18.176 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.176 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:18.176 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.176 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:18.176 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.uWk 00:18:18.176 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.176 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.176 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.176 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.7i4 ]] 00:18:18.176 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.7i4 00:18:18.176 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.176 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.176 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.176 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:18.176 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.6ZA 00:18:18.176 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.176 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.437 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:18:18.437 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:18:18.437 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:18:18.437 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:18.437 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:18.437 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:18.437 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:18.437 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:18.437 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:18.437 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.437 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.437 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.437 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:18.437 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:18.437 14:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:19.378 nvme0n1 00:18:19.378 14:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:19.378 14:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:19.378 14:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.378 14:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.378 14:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.378 14:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.378 14:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.378 14:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.378 14:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:19.378 { 00:18:19.378 "cntlid": 1, 00:18:19.378 "qid": 0, 00:18:19.378 "state": "enabled", 00:18:19.378 "thread": "nvmf_tgt_poll_group_000", 00:18:19.378 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:19.378 "listen_address": { 00:18:19.378 "trtype": "TCP", 00:18:19.378 "adrfam": "IPv4", 00:18:19.378 "traddr": "10.0.0.2", 00:18:19.378 "trsvcid": "4420" 00:18:19.378 }, 00:18:19.378 "peer_address": { 00:18:19.378 "trtype": "TCP", 00:18:19.378 "adrfam": "IPv4", 00:18:19.378 "traddr": 
"10.0.0.1", 00:18:19.378 "trsvcid": "36436" 00:18:19.378 }, 00:18:19.378 "auth": { 00:18:19.378 "state": "completed", 00:18:19.378 "digest": "sha512", 00:18:19.378 "dhgroup": "ffdhe8192" 00:18:19.378 } 00:18:19.378 } 00:18:19.378 ]' 00:18:19.378 14:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:19.378 14:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:19.378 14:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:19.378 14:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:19.378 14:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:19.378 14:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:19.378 14:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:19.637 14:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.637 14:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MGM1OWNiNWE5YTRkMjE1MzBiZjdjZjdiZDUwN2UwZDI4ZDk0ODViNjc5OGM3M2Y4MGIwMzRkZDJiNDQwNTIyOSElVmY=: 00:18:19.637 14:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:MGM1OWNiNWE5YTRkMjE1MzBiZjdjZjdiZDUwN2UwZDI4ZDk0ODViNjc5OGM3M2Y4MGIwMzRkZDJiNDQwNTIyOSElVmY=: 00:18:20.575 14:32:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:20.575 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:20.575 14:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:20.575 14:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.575 14:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.575 14:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.575 14:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:20.575 14:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.575 14:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.575 14:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.575 14:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:18:20.575 14:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:18:20.575 14:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:20.575 14:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:20.575 14:32:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:20.575 14:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:20.575 14:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:20.575 14:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:20.575 14:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:20.575 14:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:20.575 14:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:20.575 14:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:20.835 request: 00:18:20.835 { 00:18:20.835 "name": "nvme0", 00:18:20.835 "trtype": "tcp", 00:18:20.835 "traddr": "10.0.0.2", 00:18:20.835 "adrfam": "ipv4", 00:18:20.835 "trsvcid": "4420", 00:18:20.835 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:20.835 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:20.835 "prchk_reftag": false, 00:18:20.835 "prchk_guard": false, 00:18:20.835 "hdgst": false, 00:18:20.835 "ddgst": false, 00:18:20.835 "dhchap_key": "key3", 00:18:20.835 
"allow_unrecognized_csi": false, 00:18:20.835 "method": "bdev_nvme_attach_controller", 00:18:20.835 "req_id": 1 00:18:20.835 } 00:18:20.835 Got JSON-RPC error response 00:18:20.835 response: 00:18:20.835 { 00:18:20.835 "code": -5, 00:18:20.835 "message": "Input/output error" 00:18:20.835 } 00:18:20.835 14:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:20.835 14:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:20.835 14:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:20.835 14:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:20.835 14:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:18:20.835 14:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:18:20.835 14:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:20.835 14:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:21.095 14:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:21.095 14:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:21.095 14:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:21.095 14:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:21.095 14:32:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:21.095 14:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:21.095 14:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:21.096 14:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:21.096 14:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:21.096 14:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:21.096 request: 00:18:21.096 { 00:18:21.096 "name": "nvme0", 00:18:21.096 "trtype": "tcp", 00:18:21.096 "traddr": "10.0.0.2", 00:18:21.096 "adrfam": "ipv4", 00:18:21.096 "trsvcid": "4420", 00:18:21.096 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:21.096 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:21.096 "prchk_reftag": false, 00:18:21.096 "prchk_guard": false, 00:18:21.096 "hdgst": false, 00:18:21.096 "ddgst": false, 00:18:21.096 "dhchap_key": "key3", 00:18:21.096 "allow_unrecognized_csi": false, 00:18:21.096 "method": "bdev_nvme_attach_controller", 00:18:21.096 "req_id": 1 00:18:21.096 } 00:18:21.096 Got JSON-RPC error response 00:18:21.096 response: 00:18:21.096 { 00:18:21.096 "code": -5, 00:18:21.096 "message": "Input/output error" 00:18:21.096 } 00:18:21.096 
14:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:21.096 14:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:21.096 14:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:21.096 14:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:21.096 14:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:21.096 14:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:18:21.096 14:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:21.096 14:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:21.096 14:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:21.096 14:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:21.356 14:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:21.356 14:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.356 14:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.356 14:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.356 14:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:21.356 14:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.356 14:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.356 14:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.356 14:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:21.356 14:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:21.356 14:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:21.356 14:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:21.356 14:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:21.356 14:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:21.356 14:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:21.356 14:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:21.356 14:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:21.356 14:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:21.616 request: 00:18:21.616 { 00:18:21.616 "name": "nvme0", 00:18:21.616 "trtype": "tcp", 00:18:21.616 "traddr": "10.0.0.2", 00:18:21.616 "adrfam": "ipv4", 00:18:21.616 "trsvcid": "4420", 00:18:21.616 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:21.616 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:21.616 "prchk_reftag": false, 00:18:21.616 "prchk_guard": false, 00:18:21.616 "hdgst": false, 00:18:21.616 "ddgst": false, 00:18:21.616 "dhchap_key": "key0", 00:18:21.616 "dhchap_ctrlr_key": "key1", 00:18:21.616 "allow_unrecognized_csi": false, 00:18:21.616 "method": "bdev_nvme_attach_controller", 00:18:21.616 "req_id": 1 00:18:21.616 } 00:18:21.616 Got JSON-RPC error response 00:18:21.616 response: 00:18:21.616 { 00:18:21.616 "code": -5, 00:18:21.616 "message": "Input/output error" 00:18:21.616 } 00:18:21.616 14:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:21.616 14:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:21.616 14:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:21.616 14:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:21.616 14:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:18:21.616 14:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:21.616 14:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:21.877 nvme0n1 00:18:21.877 14:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:18:21.877 14:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:18:21.877 14:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.139 14:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.139 14:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.139 14:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.399 14:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:18:22.399 14:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.399 14:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:18:22.399 14:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.399 14:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:22.399 14:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:22.399 14:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:23.341 nvme0n1 00:18:23.341 14:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:18:23.341 14:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:18:23.341 14:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.341 14:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.341 14:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:23.341 14:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.341 14:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.341 
14:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.341 14:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:18:23.341 14:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:18:23.341 14:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.602 14:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.602 14:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:YzZmNTdkY2ViZWE4OGVmYWIwMzAzNWJjNDRiMzVmZDljNTQ2NDBkMDBkZmQ2ZmFhX3APDw==: --dhchap-ctrl-secret DHHC-1:03:MGM1OWNiNWE5YTRkMjE1MzBiZjdjZjdiZDUwN2UwZDI4ZDk0ODViNjc5OGM3M2Y4MGIwMzRkZDJiNDQwNTIyOSElVmY=: 00:18:23.602 14:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:YzZmNTdkY2ViZWE4OGVmYWIwMzAzNWJjNDRiMzVmZDljNTQ2NDBkMDBkZmQ2ZmFhX3APDw==: --dhchap-ctrl-secret DHHC-1:03:MGM1OWNiNWE5YTRkMjE1MzBiZjdjZjdiZDUwN2UwZDI4ZDk0ODViNjc5OGM3M2Y4MGIwMzRkZDJiNDQwNTIyOSElVmY=: 00:18:24.173 14:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:18:24.173 14:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:18:24.173 14:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:18:24.173 14:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == 
\n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:18:24.173 14:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:18:24.173 14:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:18:24.173 14:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:18:24.173 14:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.173 14:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.433 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:18:24.433 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:24.433 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:18:24.433 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:24.433 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:24.433 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:24.433 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:24.433 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:24.433 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:24.433 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:25.004 request: 00:18:25.004 { 00:18:25.004 "name": "nvme0", 00:18:25.004 "trtype": "tcp", 00:18:25.004 "traddr": "10.0.0.2", 00:18:25.004 "adrfam": "ipv4", 00:18:25.004 "trsvcid": "4420", 00:18:25.004 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:25.004 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:25.004 "prchk_reftag": false, 00:18:25.004 "prchk_guard": false, 00:18:25.004 "hdgst": false, 00:18:25.004 "ddgst": false, 00:18:25.004 "dhchap_key": "key1", 00:18:25.004 "allow_unrecognized_csi": false, 00:18:25.004 "method": "bdev_nvme_attach_controller", 00:18:25.004 "req_id": 1 00:18:25.004 } 00:18:25.004 Got JSON-RPC error response 00:18:25.004 response: 00:18:25.004 { 00:18:25.004 "code": -5, 00:18:25.004 "message": "Input/output error" 00:18:25.004 } 00:18:25.004 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:25.004 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:25.004 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:25.004 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:25.004 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:25.004 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:25.004 14:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:25.946 nvme0n1 00:18:25.946 14:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:18:25.946 14:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:18:25.946 14:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.946 14:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.946 14:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:25.946 14:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:26.206 14:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:26.206 14:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.206 14:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:26.206 14:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.206 14:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:18:26.206 14:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:26.206 14:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:26.467 nvme0n1 00:18:26.467 14:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:18:26.467 14:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:18:26.467 14:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:26.728 14:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.728 14:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:26.728 14:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:26.728 14:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:26.728 14:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.728 14:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.728 14:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.728 14:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:MGNmMjFjMzBiZWE1N2FkYTg4NDhiMmVmNTU1OWFmNjV3h9ZG: '' 2s 00:18:26.728 14:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:26.728 14:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:26.728 14:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:MGNmMjFjMzBiZWE1N2FkYTg4NDhiMmVmNTU1OWFmNjV3h9ZG: 00:18:26.728 14:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:18:26.728 14:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:26.728 14:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:26.728 14:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:MGNmMjFjMzBiZWE1N2FkYTg4NDhiMmVmNTU1OWFmNjV3h9ZG: ]] 00:18:26.728 14:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:MGNmMjFjMzBiZWE1N2FkYTg4NDhiMmVmNTU1OWFmNjV3h9ZG: 00:18:26.728 14:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:18:26.728 14:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:26.728 14:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:29.271 
14:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:18:29.271 14:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:18:29.271 14:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:18:29.271 14:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:18:29.271 14:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:18:29.271 14:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:18:29.271 14:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:18:29.271 14:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key key2 00:18:29.271 14:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.271 14:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.271 14:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.271 14:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:YzZmNTdkY2ViZWE4OGVmYWIwMzAzNWJjNDRiMzVmZDljNTQ2NDBkMDBkZmQ2ZmFhX3APDw==: 2s 00:18:29.271 14:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:29.271 14:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:29.271 14:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:18:29.271 14:32:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:YzZmNTdkY2ViZWE4OGVmYWIwMzAzNWJjNDRiMzVmZDljNTQ2NDBkMDBkZmQ2ZmFhX3APDw==: 00:18:29.271 14:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:29.271 14:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:29.271 14:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:18:29.271 14:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:YzZmNTdkY2ViZWE4OGVmYWIwMzAzNWJjNDRiMzVmZDljNTQ2NDBkMDBkZmQ2ZmFhX3APDw==: ]] 00:18:29.271 14:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:YzZmNTdkY2ViZWE4OGVmYWIwMzAzNWJjNDRiMzVmZDljNTQ2NDBkMDBkZmQ2ZmFhX3APDw==: 00:18:29.271 14:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:29.271 14:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:31.187 14:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:18:31.187 14:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:18:31.187 14:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:18:31.187 14:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:18:31.187 14:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:18:31.188 14:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:18:31.188 14:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:18:31.188 14:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 
-- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.188 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:31.188 14:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:31.188 14:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.188 14:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.188 14:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.188 14:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:31.188 14:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:31.188 14:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:31.759 nvme0n1 00:18:31.759 14:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 
--dhchap-key key2 --dhchap-ctrlr-key key3 00:18:31.759 14:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.759 14:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.759 14:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.759 14:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:31.759 14:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:32.331 14:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:18:32.331 14:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:18:32.331 14:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.592 14:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.592 14:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:32.592 14:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.592 14:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.592 14:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.592 14:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:18:32.592 14:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:18:32.592 14:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:18:32.592 14:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:18:32.592 14:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.852 14:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.852 14:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:32.852 14:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.852 14:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.852 14:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.853 14:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:32.853 14:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:32.853 14:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:32.853 14:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:32.853 14:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:32.853 14:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:32.853 14:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:32.853 14:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:32.853 14:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:33.425 request: 00:18:33.425 { 00:18:33.425 "name": "nvme0", 00:18:33.425 "dhchap_key": "key1", 00:18:33.425 "dhchap_ctrlr_key": "key3", 00:18:33.425 "method": "bdev_nvme_set_keys", 00:18:33.425 "req_id": 1 00:18:33.425 } 00:18:33.425 Got JSON-RPC error response 00:18:33.425 response: 00:18:33.425 { 00:18:33.425 "code": -13, 00:18:33.425 "message": "Permission denied" 00:18:33.425 } 00:18:33.425 14:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:33.425 14:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:33.425 14:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:33.425 14:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:33.425 14:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:33.425 14:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:33.425 14:32:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.686 14:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:18:33.686 14:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:18:34.628 14:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:34.628 14:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:34.628 14:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.889 14:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:18:34.889 14:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:34.889 14:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.889 14:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.889 14:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.889 14:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:34.889 14:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:34.889 14:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:35.830 nvme0n1 00:18:35.830 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:35.830 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.830 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.830 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.830 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:35.830 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:35.830 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:35.830 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:35.830 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:35.830 14:32:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:35.830 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:35.830 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:35.830 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:36.091 request: 00:18:36.091 { 00:18:36.091 "name": "nvme0", 00:18:36.091 "dhchap_key": "key2", 00:18:36.091 "dhchap_ctrlr_key": "key0", 00:18:36.091 "method": "bdev_nvme_set_keys", 00:18:36.091 "req_id": 1 00:18:36.091 } 00:18:36.091 Got JSON-RPC error response 00:18:36.091 response: 00:18:36.091 { 00:18:36.091 "code": -13, 00:18:36.091 "message": "Permission denied" 00:18:36.091 } 00:18:36.091 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:36.091 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:36.091 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:36.091 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:36.091 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:36.091 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:36.091 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.351 14:32:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:18:36.351 14:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:18:37.291 14:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:37.291 14:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:37.291 14:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:37.551 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:18:37.551 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:18:37.551 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:18:37.551 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3361661 00:18:37.551 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 3361661 ']' 00:18:37.551 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 3361661 00:18:37.551 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:18:37.551 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:37.551 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3361661 00:18:37.551 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:37.551 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:37.551 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@968 -- # echo 'killing process with pid 3361661' 00:18:37.551 killing process with pid 3361661 00:18:37.551 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 3361661 00:18:37.551 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 3361661 00:18:37.811 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:37.811 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:18:37.811 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:18:37.811 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:37.811 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:18:37.811 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:37.811 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:37.811 rmmod nvme_tcp 00:18:37.811 rmmod nvme_fabrics 00:18:37.811 rmmod nvme_keyring 00:18:37.811 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:37.811 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:18:37.811 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:18:37.811 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@515 -- # '[' -n 3389297 ']' 00:18:37.811 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # killprocess 3389297 00:18:37.811 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 3389297 ']' 00:18:37.811 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 3389297 
00:18:37.811 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:18:37.811 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:37.811 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3389297 00:18:37.811 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:37.811 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:37.811 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3389297' 00:18:37.811 killing process with pid 3389297 00:18:37.811 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 3389297 00:18:37.811 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 3389297 00:18:38.071 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:18:38.071 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:18:38.071 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:18:38.071 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:18:38.071 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # iptables-save 00:18:38.071 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:18:38.071 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # iptables-restore 00:18:38.071 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:38.071 14:32:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:38.071 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:38.071 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:38.071 14:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:39.982 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:39.982 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.gO9 /tmp/spdk.key-sha256.iyo /tmp/spdk.key-sha384.uWk /tmp/spdk.key-sha512.6ZA /tmp/spdk.key-sha512.RaW /tmp/spdk.key-sha384.E7i /tmp/spdk.key-sha256.7i4 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:18:39.982 00:18:39.982 real 2m44.709s 00:18:39.982 user 6m6.892s 00:18:39.982 sys 0m24.429s 00:18:39.982 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:39.982 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.982 ************************************ 00:18:39.982 END TEST nvmf_auth_target 00:18:39.982 ************************************ 00:18:40.243 14:32:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:18:40.244 14:32:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:40.244 14:32:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:18:40.244 14:32:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:18:40.244 14:32:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:40.244 ************************************ 00:18:40.244 START TEST nvmf_bdevio_no_huge 00:18:40.244 ************************************ 00:18:40.244 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:40.244 * Looking for test storage... 00:18:40.244 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:40.244 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:40.244 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lcov --version 00:18:40.244 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:40.244 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:40.244 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:40.244 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:40.244 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:40.244 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:18:40.244 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:18:40.244 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:18:40.244 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:18:40.244 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # 
local 'op=<' 00:18:40.244 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:18:40.244 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:18:40.244 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:40.244 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:18:40.244 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:18:40.244 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:40.244 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:40.244 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:18:40.244 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:18:40.244 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:40.244 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:18:40.506 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:18:40.506 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:18:40.506 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:18:40.506 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:40.506 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:18:40.506 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:18:40.506 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:40.506 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:40.506 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:18:40.506 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:40.506 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:40.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.506 --rc genhtml_branch_coverage=1 00:18:40.506 --rc genhtml_function_coverage=1 00:18:40.506 --rc genhtml_legend=1 00:18:40.506 --rc geninfo_all_blocks=1 00:18:40.506 --rc geninfo_unexecuted_blocks=1 00:18:40.506 00:18:40.506 ' 00:18:40.506 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:40.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.506 --rc genhtml_branch_coverage=1 00:18:40.506 --rc genhtml_function_coverage=1 00:18:40.506 --rc genhtml_legend=1 00:18:40.506 --rc geninfo_all_blocks=1 00:18:40.506 --rc geninfo_unexecuted_blocks=1 00:18:40.506 00:18:40.506 ' 00:18:40.506 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:40.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.506 --rc genhtml_branch_coverage=1 00:18:40.506 --rc genhtml_function_coverage=1 00:18:40.506 --rc genhtml_legend=1 00:18:40.506 --rc geninfo_all_blocks=1 00:18:40.506 --rc geninfo_unexecuted_blocks=1 00:18:40.506 00:18:40.506 ' 00:18:40.506 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:40.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.506 --rc genhtml_branch_coverage=1 
00:18:40.506 --rc genhtml_function_coverage=1 00:18:40.506 --rc genhtml_legend=1 00:18:40.506 --rc geninfo_all_blocks=1 00:18:40.506 --rc geninfo_unexecuted_blocks=1 00:18:40.506 00:18:40.506 ' 00:18:40.506 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:40.506 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:40.506 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:40.506 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:40.506 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:40.506 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:40.506 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:40.506 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:40.506 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:40.506 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:40.506 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:40.506 14:32:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:40.506 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:40.506 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:40.506 14:32:21 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:40.506 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:40.506 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:40.506 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:40.506 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:40.506 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:18:40.506 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:40.506 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:40.506 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:40.506 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.506 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.507 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.507 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:40.507 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.507 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:18:40.507 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:40.507 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:40.507 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:40.507 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:40.507 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:40.507 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:40.507 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:40.507 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:40.507 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:40.507 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:40.507 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
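The `[: : integer expression expected` message in the trace above is bash rejecting a numeric test against an empty string: `'[' '' -eq 1 ']'` is not a valid integer comparison when the variable is unset or empty. An illustrative guard (not the SPDK fix) defaults empty values to 0 before comparing:

```shell
# An empty variable makes [ "$maybe_flag" -eq 1 ] fail with
# "integer expression expected"; defaulting with ${var:-0} avoids it.
maybe_flag=""
if [ "${maybe_flag:-0}" -eq 1 ]; then
  echo "flag set"
else
  echo "flag not set"
fi
```

In the log this error is non-fatal: the `[` builtin returns nonzero and the script continues to the next branch.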
00:18:40.507 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:40.507 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:40.507 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:18:40.507 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:40.507 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # prepare_net_devs 00:18:40.507 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@436 -- # local -g is_hw=no 00:18:40.507 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # remove_spdk_ns 00:18:40.507 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:40.507 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:40.507 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:40.507 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:18:40.507 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:18:40.507 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:18:40.507 14:32:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 
0x159b)' 00:18:48.649 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:48.649 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@408 -- 
# for pci in "${pci_devs[@]}" 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ up == up ]] 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:48.649 Found net devices under 0000:31:00.0: cvl_0_0 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ up == up ]] 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:48.649 
14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:48.649 Found net devices under 0000:31:00.1: cvl_0_1 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # is_hw=yes 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:48.649 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:48.650 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:48.650 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:18:48.650 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:48.650 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.648 ms 00:18:48.650 00:18:48.650 --- 10.0.0.2 ping statistics --- 00:18:48.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:48.650 rtt min/avg/max/mdev = 0.648/0.648/0.648/0.000 ms 00:18:48.650 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:48.650 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:48.650 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.302 ms 00:18:48.650 00:18:48.650 --- 10.0.0.1 ping statistics --- 00:18:48.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:48.650 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:18:48.650 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:48.650 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # return 0 00:18:48.650 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:18:48.650 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:48.650 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:18:48.650 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:18:48.650 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:48.650 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:18:48.650 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:18:48.650 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
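The nvmf_tcp_init sequence traced above splits the two ports of one NIC across a network namespace boundary so target and initiator traverse a real link. A condensed dry-run sketch of that topology (interface names and addresses taken from the log; the `run` wrapper is a hypothetical helper that prints the plan instead of executing, since the real commands need root):

```shell
# Dry-run plan of the target/initiator namespace split. Set DRY_RUN=0
# (as root) to execute instead of print.
DRY_RUN=${DRY_RUN:-1}
run() { if (( DRY_RUN )); then echo "$*"; else "$@"; fi; }

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"                                        # target netns
run ip link set cvl_0_0 netns "$NS"                           # move target port
run ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator IP
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run ping -c 1 10.0.0.2                                        # reachability check
```

The two pings in the log (host to 10.0.0.2, then `ip netns exec` back to 10.0.0.1) verify both directions before the target app is launched inside the namespace.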
-m 0x78 00:18:48.650 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:18:48.650 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:48.650 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:48.650 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # nvmfpid=3397555 00:18:48.650 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # waitforlisten 3397555 00:18:48.650 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:18:48.650 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 3397555 ']' 00:18:48.650 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:48.650 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:48.650 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:48.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:48.650 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:48.650 14:32:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:48.650 [2024-10-14 14:32:28.635646] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 
00:18:48.650 [2024-10-14 14:32:28.635714] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:18:48.650 [2024-10-14 14:32:28.732518] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:48.650 [2024-10-14 14:32:28.792099] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:48.650 [2024-10-14 14:32:28.792142] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:48.650 [2024-10-14 14:32:28.792151] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:48.650 [2024-10-14 14:32:28.792158] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:48.650 [2024-10-14 14:32:28.792165] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:48.650 [2024-10-14 14:32:28.794058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:18:48.650 [2024-10-14 14:32:28.794217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:18:48.650 [2024-10-14 14:32:28.794376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:48.650 [2024-10-14 14:32:28.794376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:18:48.910 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:48.910 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:18:48.910 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:18:48.910 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:48.910 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:48.910 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:48.910 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:48.910 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.910 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:48.910 [2024-10-14 14:32:29.514835] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:48.910 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.910 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:48.910 14:32:29 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.910 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:48.910 Malloc0 00:18:48.910 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.910 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:48.910 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.910 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:48.910 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.910 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:48.910 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.910 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:48.910 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.910 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:48.910 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.910 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:48.910 [2024-10-14 14:32:29.568621] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:48.910 14:32:29 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.910 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:48.910 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:48.911 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # config=() 00:18:48.911 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # local subsystem config 00:18:48.911 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:18:48.911 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:18:48.911 { 00:18:48.911 "params": { 00:18:48.911 "name": "Nvme$subsystem", 00:18:48.911 "trtype": "$TEST_TRANSPORT", 00:18:48.911 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:48.911 "adrfam": "ipv4", 00:18:48.911 "trsvcid": "$NVMF_PORT", 00:18:48.911 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:48.911 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:48.911 "hdgst": ${hdgst:-false}, 00:18:48.911 "ddgst": ${ddgst:-false} 00:18:48.911 }, 00:18:48.911 "method": "bdev_nvme_attach_controller" 00:18:48.911 } 00:18:48.911 EOF 00:18:48.911 )") 00:18:48.911 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # cat 00:18:48.911 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # jq . 
00:18:48.911 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@583 -- # IFS=, 00:18:48.911 14:32:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:18:48.911 "params": { 00:18:48.911 "name": "Nvme1", 00:18:48.911 "trtype": "tcp", 00:18:48.911 "traddr": "10.0.0.2", 00:18:48.911 "adrfam": "ipv4", 00:18:48.911 "trsvcid": "4420", 00:18:48.911 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:48.911 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:48.911 "hdgst": false, 00:18:48.911 "ddgst": false 00:18:48.911 }, 00:18:48.911 "method": "bdev_nvme_attach_controller" 00:18:48.911 }' 00:18:48.911 [2024-10-14 14:32:29.626137] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:18:48.911 [2024-10-14 14:32:29.626211] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3397877 ] 00:18:49.171 [2024-10-14 14:32:29.698288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:49.171 [2024-10-14 14:32:29.755093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:49.171 [2024-10-14 14:32:29.755164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:49.171 [2024-10-14 14:32:29.755168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:49.431 I/O targets: 00:18:49.431 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:49.431 00:18:49.431 00:18:49.431 CUnit - A unit testing framework for C - Version 2.1-3 00:18:49.431 http://cunit.sourceforge.net/ 00:18:49.431 00:18:49.431 00:18:49.431 Suite: bdevio tests on: Nvme1n1 00:18:49.431 Test: blockdev write read block ...passed 00:18:49.431 Test: blockdev write zeroes read block ...passed 00:18:49.431 Test: blockdev write zeroes read no split ...passed 00:18:49.431 Test: blockdev write zeroes 
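The `gen_nvmf_target_json` trace above builds one JSON fragment per subsystem via a heredoc that expands shell variables, collects them in an array, and joins them for the bdevio `--json` config. A minimal sketch of that pattern (variable values here are examples, not the harness defaults):

```shell
# Build bdev_nvme_attach_controller JSON fragments the way the traced
# gen_nvmf_target_json does: heredoc expansion per subsystem into an array.
TEST_TRANSPORT=tcp NVMF_FIRST_TARGET_IP=10.0.0.2 NVMF_PORT=4420
config=()
for subsystem in 1; do
  config+=("$(cat <<EOF
{ "params": { "name": "Nvme$subsystem", "trtype": "$TEST_TRANSPORT",
  "traddr": "$NVMF_FIRST_TARGET_IP", "trsvcid": "$NVMF_PORT",
  "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem" },
  "method": "bdev_nvme_attach_controller" }
EOF
)")
done
# Join fragments with commas, as the IFS=, printf in the trace does.
IFS=, joined="${config[*]}"
```

In the real script the joined fragments are then piped through `jq .` to validate and pretty-print before being fed to bdevio on `/dev/fd/62`.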
read split ...passed 00:18:49.431 Test: blockdev write zeroes read split partial ...passed 00:18:49.431 Test: blockdev reset ...[2024-10-14 14:32:30.128239] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:49.431 [2024-10-14 14:32:30.128307] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b120b0 (9): Bad file descriptor 00:18:49.691 [2024-10-14 14:32:30.237938] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:18:49.691 passed 00:18:49.691 Test: blockdev write read 8 blocks ...passed 00:18:49.691 Test: blockdev write read size > 128k ...passed 00:18:49.691 Test: blockdev write read invalid size ...passed 00:18:49.691 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:49.691 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:49.691 Test: blockdev write read max offset ...passed 00:18:49.952 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:49.952 Test: blockdev writev readv 8 blocks ...passed 00:18:49.952 Test: blockdev writev readv 30 x 1block ...passed 00:18:49.952 Test: blockdev writev readv block ...passed 00:18:49.952 Test: blockdev writev readv size > 128k ...passed 00:18:49.952 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:49.952 Test: blockdev comparev and writev ...[2024-10-14 14:32:30.544844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:49.952 [2024-10-14 14:32:30.544871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:49.952 [2024-10-14 14:32:30.544883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:49.952 [2024-10-14 14:32:30.544889] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:49.952 [2024-10-14 14:32:30.545381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:49.952 [2024-10-14 14:32:30.545394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:49.952 [2024-10-14 14:32:30.545405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:49.952 [2024-10-14 14:32:30.545411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:49.952 [2024-10-14 14:32:30.545888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:49.953 [2024-10-14 14:32:30.545897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:49.953 [2024-10-14 14:32:30.545907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:49.953 [2024-10-14 14:32:30.545913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:49.953 [2024-10-14 14:32:30.546409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:49.953 [2024-10-14 14:32:30.546418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:49.953 [2024-10-14 14:32:30.546427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 
00:18:49.953 [2024-10-14 14:32:30.546432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:49.953 passed 00:18:49.953 Test: blockdev nvme passthru rw ...passed 00:18:49.953 Test: blockdev nvme passthru vendor specific ...[2024-10-14 14:32:30.631939] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:49.953 [2024-10-14 14:32:30.631950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:49.953 [2024-10-14 14:32:30.632205] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:49.953 [2024-10-14 14:32:30.632213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:49.953 [2024-10-14 14:32:30.632472] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:49.953 [2024-10-14 14:32:30.632479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:49.953 [2024-10-14 14:32:30.632716] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:49.953 [2024-10-14 14:32:30.632723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:49.953 passed 00:18:49.953 Test: blockdev nvme admin passthru ...passed 00:18:50.213 Test: blockdev copy ...passed 00:18:50.213 00:18:50.213 Run Summary: Type Total Ran Passed Failed Inactive 00:18:50.213 suites 1 1 n/a 0 0 00:18:50.213 tests 23 23 23 0 0 00:18:50.213 asserts 152 152 152 0 n/a 00:18:50.213 00:18:50.213 Elapsed time = 1.470 seconds 00:18:50.474 14:32:30 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:50.474 14:32:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.474 14:32:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:50.474 14:32:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.474 14:32:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:50.474 14:32:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:18:50.474 14:32:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@514 -- # nvmfcleanup 00:18:50.474 14:32:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:18:50.474 14:32:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:50.474 14:32:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:18:50.474 14:32:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:50.474 14:32:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:50.474 rmmod nvme_tcp 00:18:50.474 rmmod nvme_fabrics 00:18:50.474 rmmod nvme_keyring 00:18:50.474 14:32:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:50.474 14:32:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:18:50.474 14:32:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:18:50.474 14:32:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@515 -- # '[' -n 3397555 ']' 00:18:50.474 14:32:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@516 -- # killprocess 3397555 00:18:50.474 14:32:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 3397555 ']' 00:18:50.474 14:32:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 3397555 00:18:50.474 14:32:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:18:50.474 14:32:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:50.474 14:32:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3397555 00:18:50.474 14:32:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:18:50.474 14:32:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:18:50.474 14:32:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3397555' 00:18:50.474 killing process with pid 3397555 00:18:50.474 14:32:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 3397555 00:18:50.474 14:32:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 3397555 00:18:50.735 14:32:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:18:50.735 14:32:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:18:50.735 14:32:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:18:50.735 14:32:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:18:50.735 14:32:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # iptables-save 00:18:50.735 14:32:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:18:50.735 14:32:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # iptables-restore 00:18:50.735 14:32:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:50.735 14:32:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:50.735 14:32:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:50.735 14:32:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:50.735 14:32:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:53.282 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:53.282 00:18:53.282 real 0m12.747s 00:18:53.282 user 0m14.929s 00:18:53.282 sys 0m6.683s 00:18:53.282 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:53.282 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:53.282 ************************************ 00:18:53.282 END TEST nvmf_bdevio_no_huge 00:18:53.282 ************************************ 00:18:53.282 14:32:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:53.282 14:32:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:53.282 14:32:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:53.282 14:32:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:53.282 ************************************ 00:18:53.282 START TEST nvmf_tls 
00:18:53.282 ************************************ 00:18:53.282 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:53.282 * Looking for test storage... 00:18:53.282 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:53.282 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:53.282 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:53.282 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lcov --version 00:18:53.282 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:53.282 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:53.282 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:53.282 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:53.282 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:18:53.282 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:18:53.282 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:18:53.282 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:18:53.282 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:18:53.282 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:18:53.282 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:18:53.282 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:53.282 14:32:33 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:18:53.282 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:18:53.282 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:53.282 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:53.282 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:18:53.282 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:18:53.282 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:53.282 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:18:53.282 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:18:53.282 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:18:53.282 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:18:53.282 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:53.282 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:18:53.282 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:18:53.282 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:53.282 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:53.282 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:18:53.282 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:53.282 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # export 
'LCOV_OPTS= 00:18:53.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:53.282 --rc genhtml_branch_coverage=1 00:18:53.282 --rc genhtml_function_coverage=1 00:18:53.282 --rc genhtml_legend=1 00:18:53.282 --rc geninfo_all_blocks=1 00:18:53.282 --rc geninfo_unexecuted_blocks=1 00:18:53.282 00:18:53.282 ' 00:18:53.282 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:53.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:53.282 --rc genhtml_branch_coverage=1 00:18:53.282 --rc genhtml_function_coverage=1 00:18:53.282 --rc genhtml_legend=1 00:18:53.282 --rc geninfo_all_blocks=1 00:18:53.282 --rc geninfo_unexecuted_blocks=1 00:18:53.282 00:18:53.282 ' 00:18:53.282 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:53.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:53.282 --rc genhtml_branch_coverage=1 00:18:53.282 --rc genhtml_function_coverage=1 00:18:53.282 --rc genhtml_legend=1 00:18:53.282 --rc geninfo_all_blocks=1 00:18:53.282 --rc geninfo_unexecuted_blocks=1 00:18:53.282 00:18:53.282 ' 00:18:53.282 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:53.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:53.282 --rc genhtml_branch_coverage=1 00:18:53.282 --rc genhtml_function_coverage=1 00:18:53.282 --rc genhtml_legend=1 00:18:53.282 --rc geninfo_all_blocks=1 00:18:53.282 --rc geninfo_unexecuted_blocks=1 00:18:53.282 00:18:53.282 ' 00:18:53.282 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:53.282 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:18:53.282 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:53.282 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:53.282 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:53.282 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:53.282 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:53.282 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:53.282 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:53.282 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:53.282 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:53.282 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:53.282 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:53.282 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:53.282 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:53.282 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:53.282 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:53.282 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:53.282 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:53.282 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:18:53.282 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:53.282 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:53.282 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:53.282 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.283 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.283 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.283 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:18:53.283 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.283 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:18:53.283 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:53.283 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:53.283 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:53.283 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:53.283 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:53.283 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:53.283 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:53.283 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:53.283 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:53.283 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:53.283 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:53.283 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:18:53.283 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:18:53.283 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:53.283 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # prepare_net_devs 00:18:53.283 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@436 -- # local -g is_hw=no 00:18:53.283 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # remove_spdk_ns 00:18:53.283 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:53.283 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:53.283 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:53.283 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:18:53.283 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:18:53.283 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:18:53.283 14:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:01.430 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:01.430 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:19:01.430 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:01.430 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:01.430 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:01.430 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:01.430 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:01.430 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:19:01.430 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:01.430 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:19:01.430 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:19:01.430 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:19:01.430 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:19:01.430 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:19:01.430 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:19:01.430 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:01.430 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:01.430 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:01.430 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:01.430 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:01.430 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:01.430 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:01.430 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:01.430 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:01.430 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:01.430 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:01.430 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:01.430 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:01.430 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:01.430 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:01.430 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:01.430 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:01.430 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:01.430 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:01.430 14:32:41 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:01.430 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:01.430 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:01.430 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:01.430 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:01.430 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:01.430 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:01.430 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:01.430 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:01.430 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:01.431 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:01.431 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:01.431 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:01.431 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:01.431 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:01.431 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:01.431 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:01.431 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:01.431 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:01.431 14:32:41 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:01.431 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:19:01.431 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:01.431 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ up == up ]] 00:19:01.431 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:01.431 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:01.431 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:01.431 Found net devices under 0000:31:00.0: cvl_0_0 00:19:01.431 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:19:01.431 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:01.431 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:01.431 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:19:01.431 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:01.431 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ up == up ]] 00:19:01.431 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:01.431 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:01.431 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:01.431 Found net devices under 0000:31:00.1: cvl_0_1 00:19:01.431 14:32:41 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:19:01.431 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:19:01.431 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # is_hw=yes 00:19:01.431 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:19:01.431 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:19:01.431 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:19:01.431 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:01.431 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:01.431 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:01.431 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:01.431 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:01.431 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:01.431 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:01.431 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:01.431 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:01.431 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:01.431 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:01.431 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:01.431 
14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:01.431 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:01.431 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:01.431 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:01.431 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:01.431 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:01.431 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:01.431 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:01.431 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:01.431 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:01.431 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:01.431 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:01.431 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.656 ms 00:19:01.431 00:19:01.431 --- 10.0.0.2 ping statistics --- 00:19:01.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:01.431 rtt min/avg/max/mdev = 0.656/0.656/0.656/0.000 ms 00:19:01.431 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:01.431 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:01.431 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.312 ms 00:19:01.431 00:19:01.431 --- 10.0.0.1 ping statistics --- 00:19:01.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:01.431 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:19:01.431 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:01.431 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # return 0 00:19:01.431 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:19:01.431 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:01.431 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:19:01.431 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:19:01.431 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:01.431 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:19:01.431 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:19:01.431 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:01.431 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:01.431 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:01.431 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:01.431 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=3402528 00:19:01.431 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 3402528 00:19:01.431 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:19:01.431 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3402528 ']' 00:19:01.431 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:01.431 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:01.431 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:01.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:01.431 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:01.431 14:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:01.431 [2024-10-14 14:32:41.525145] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:19:01.431 [2024-10-14 14:32:41.525213] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:01.431 [2024-10-14 14:32:41.615744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:01.431 [2024-10-14 14:32:41.666747] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:01.431 [2024-10-14 14:32:41.666792] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:01.431 [2024-10-14 14:32:41.666800] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:01.431 [2024-10-14 14:32:41.666807] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:01.431 [2024-10-14 14:32:41.666813] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:01.431 [2024-10-14 14:32:41.667546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:01.693 14:32:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:01.693 14:32:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:01.693 14:32:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:01.693 14:32:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:01.693 14:32:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:01.693 14:32:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:01.693 14:32:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:19:01.693 14:32:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:19:01.955 true 00:19:01.955 14:32:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:01.955 14:32:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:19:02.217 14:32:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:19:02.217 14:32:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:19:02.217 
14:32:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:02.217 14:32:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:19:02.217 14:32:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:02.514 14:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:19:02.514 14:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:19:02.514 14:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:19:02.842 14:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:02.842 14:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:19:02.842 14:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:19:02.842 14:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:19:02.842 14:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:02.842 14:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:19:03.161 14:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:19:03.161 14:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:19:03.161 14:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
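The checks above all follow the same set-then-verify pattern: `sock_impl_set_options` is issued over JSON-RPC, the option is read back with `sock_impl_get_options`, one field is pulled out with `jq -r`, and the result is compared with a glob test (`[[ 13 != \1\3 ]]`). A minimal sketch of the extraction step is below, using python3 in place of jq so it runs without extra tools; the exact reply shape is an assumption (the log only shows that `tls_version` and `enable_ktls` fields exist, `impl_name` is illustrative):

```shell
# Extract one field from a sock_impl_get_options-style JSON reply.
# Stands in for: rpc.py sock_impl_get_options -i ssl | jq -r .tls_version
get_opt() {
	python3 -c 'import json, sys; print(json.dumps(json.load(sys.stdin)[sys.argv[1]]))' "$1"
}

# Sample reply (shape assumed; a live run would pipe from rpc.py)
reply='{"impl_name": "ssl", "tls_version": 13, "enable_ktls": false}'

version=$(get_opt tls_version <<< "$reply")
[[ $version != 13 ]] && echo "unexpected tls_version: $version"
echo "$version"  # → 13
```

`json.dumps` keeps the output in JSON notation (`13`, `false`), matching what `jq -r` prints for numbers and booleans in the log.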
00:19:03.161 14:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:03.161 14:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:19:03.493 14:32:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:19:03.493 14:32:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:19:03.493 14:32:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:19:03.493 14:32:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:03.493 14:32:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:19:03.754 14:32:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:19:03.754 14:32:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:19:03.754 14:32:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:19:03.754 14:32:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:19:03.754 14:32:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:19:03.754 14:32:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:19:03.754 14:32:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:19:03.754 14:32:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=1 00:19:03.754 14:32:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:19:03.754 14:32:44 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:03.754 14:32:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:19:03.754 14:32:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:19:03.754 14:32:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:19:03.754 14:32:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:19:03.754 14:32:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=ffeeddccbbaa99887766554433221100 00:19:03.754 14:32:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=1 00:19:03.754 14:32:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:19:03.754 14:32:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:03.754 14:32:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:19:03.754 14:32:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.RPRZuciqW3 00:19:04.014 14:32:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:19:04.014 14:32:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.MYhGH3L4Y7 00:19:04.014 14:32:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:04.014 14:32:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:04.014 14:32:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.RPRZuciqW3 00:19:04.014 14:32:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.MYhGH3L4Y7 00:19:04.014 14:32:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:04.014 14:32:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:19:04.274 14:32:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.RPRZuciqW3 00:19:04.274 14:32:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.RPRZuciqW3 00:19:04.274 14:32:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:04.535 [2024-10-14 14:32:45.053610] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:04.535 14:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:04.535 14:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:04.795 [2024-10-14 14:32:45.386423] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:04.795 [2024-10-14 14:32:45.386611] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:04.795 14:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:05.055 malloc0 00:19:05.055 14:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:05.055 14:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.RPRZuciqW3 00:19:05.315 14:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:05.575 14:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.RPRZuciqW3 00:19:15.568 Initializing NVMe Controllers 00:19:15.568 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:15.568 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:15.568 Initialization complete. Launching workers. 
00:19:15.568 ======================================================== 00:19:15.568 Latency(us) 00:19:15.568 Device Information : IOPS MiB/s Average min max 00:19:15.568 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18581.19 72.58 3444.37 1197.30 4241.69 00:19:15.568 ======================================================== 00:19:15.568 Total : 18581.19 72.58 3444.37 1197.30 4241.69 00:19:15.568 00:19:15.568 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.RPRZuciqW3 00:19:15.568 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:15.568 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:15.568 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:15.568 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.RPRZuciqW3 00:19:15.568 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:15.568 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3405351 00:19:15.568 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:15.568 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3405351 /var/tmp/bdevperf.sock 00:19:15.568 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:15.568 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3405351 ']' 00:19:15.568 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 
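The `NVMeTLSkey-1:01:...:` strings derived earlier by `format_interchange_psk` (target/tls.sh@119-120) follow the NVMe/TCP TLS PSK interchange format: `prefix:digest:base64(key bytes + CRC-32):`. The sketch below reproduces nvmf/common.sh's `format_key` helper under two assumptions worth flagging: the configured key string's ASCII bytes are used directly as the secret, and the checksum is zlib's CRC-32 appended little-endian. Both assumptions are consistent with the key strings visible in this log:

```shell
# Sketch of nvmf/common.sh format_key (assumptions: ASCII key bytes,
# zlib CRC-32 appended little-endian before base64 encoding).
format_key() {
	local prefix=$1 key=$2 digest=$3
	python3 -c '
import base64, sys, zlib
prefix, key, digest = sys.argv[1], sys.argv[2], int(sys.argv[3])
raw = key.encode()  # the configured key string itself is the secret material
crc = zlib.crc32(raw).to_bytes(4, "little")  # CRC-32 of key bytes, little-endian
print("{}:{:02x}:{}:".format(prefix, digest,
                             base64.b64encode(raw + crc).decode()))
' "$prefix" "$key" "$digest"
}

# Matches the key the run derives at target/tls.sh@119:
format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1
```

The 32 key bytes plus 4 CRC bytes make 36 bytes, which base64-encodes to exactly 48 characters with no padding, matching the strings written to /tmp/tmp.RPRZuciqW3 and /tmp/tmp.MYhGH3L4Y7 above.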
00:19:15.568 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:15.568 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:15.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:15.568 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:15.568 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:15.568 [2024-10-14 14:32:56.221706] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:19:15.568 [2024-10-14 14:32:56.221765] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3405351 ] 00:19:15.568 [2024-10-14 14:32:56.273950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:15.829 [2024-10-14 14:32:56.303075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:15.829 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:15.829 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:15.829 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.RPRZuciqW3 00:19:16.091 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:19:16.091 [2024-10-14 14:32:56.712344] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:16.091 TLSTESTn1 00:19:16.091 14:32:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:16.352 Running I/O for 10 seconds... 00:19:18.234 6024.00 IOPS, 23.53 MiB/s [2024-10-14T12:32:59.903Z] 6049.50 IOPS, 23.63 MiB/s [2024-10-14T12:33:01.290Z] 6003.67 IOPS, 23.45 MiB/s [2024-10-14T12:33:02.233Z] 5914.00 IOPS, 23.10 MiB/s [2024-10-14T12:33:03.175Z] 6044.60 IOPS, 23.61 MiB/s [2024-10-14T12:33:04.118Z] 6056.50 IOPS, 23.66 MiB/s [2024-10-14T12:33:05.060Z] 5950.71 IOPS, 23.24 MiB/s [2024-10-14T12:33:06.004Z] 5997.88 IOPS, 23.43 MiB/s [2024-10-14T12:33:06.946Z] 6032.33 IOPS, 23.56 MiB/s [2024-10-14T12:33:06.946Z] 6048.70 IOPS, 23.63 MiB/s 00:19:26.219 Latency(us) 00:19:26.219 [2024-10-14T12:33:06.946Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:26.219 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:26.219 Verification LBA range: start 0x0 length 0x2000 00:19:26.219 TLSTESTn1 : 10.01 6054.21 23.65 0.00 0.00 21113.03 4696.75 49152.00 00:19:26.219 [2024-10-14T12:33:06.946Z] =================================================================================================================== 00:19:26.219 [2024-10-14T12:33:06.946Z] Total : 6054.21 23.65 0.00 0.00 21113.03 4696.75 49152.00 00:19:26.219 { 00:19:26.219 "results": [ 00:19:26.219 { 00:19:26.219 "job": "TLSTESTn1", 00:19:26.219 "core_mask": "0x4", 00:19:26.219 "workload": "verify", 00:19:26.219 "status": "finished", 00:19:26.219 "verify_range": { 00:19:26.219 "start": 0, 00:19:26.219 "length": 8192 00:19:26.219 }, 00:19:26.219 "queue_depth": 128, 00:19:26.219 "io_size": 4096, 00:19:26.219 "runtime": 10.011869, 00:19:26.219 "iops": 
6054.214253102992, 00:19:26.219 "mibps": 23.649274426183563, 00:19:26.219 "io_failed": 0, 00:19:26.219 "io_timeout": 0, 00:19:26.219 "avg_latency_us": 21113.027089451283, 00:19:26.219 "min_latency_us": 4696.746666666667, 00:19:26.219 "max_latency_us": 49152.0 00:19:26.219 } 00:19:26.219 ], 00:19:26.219 "core_count": 1 00:19:26.219 } 00:19:26.219 14:33:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:26.219 14:33:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3405351 00:19:26.219 14:33:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3405351 ']' 00:19:26.219 14:33:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3405351 00:19:26.219 14:33:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:26.480 14:33:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:26.480 14:33:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3405351 00:19:26.480 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:26.480 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:26.480 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3405351' 00:19:26.480 killing process with pid 3405351 00:19:26.480 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3405351 00:19:26.480 Received shutdown signal, test time was about 10.000000 seconds 00:19:26.480 00:19:26.480 Latency(us) 00:19:26.480 [2024-10-14T12:33:07.207Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:26.480 [2024-10-14T12:33:07.207Z] 
=================================================================================================================== 00:19:26.480 [2024-10-14T12:33:07.207Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:26.480 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3405351 00:19:26.480 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.MYhGH3L4Y7 00:19:26.480 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:26.480 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.MYhGH3L4Y7 00:19:26.480 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:26.480 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:26.480 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:26.480 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:26.480 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.MYhGH3L4Y7 00:19:26.480 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:26.480 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:26.480 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:26.480 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.MYhGH3L4Y7 00:19:26.480 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:26.480 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3407568 00:19:26.480 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:26.480 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3407568 /var/tmp/bdevperf.sock 00:19:26.480 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:26.481 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3407568 ']' 00:19:26.481 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:26.481 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:26.481 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:26.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:26.481 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:26.481 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:26.481 [2024-10-14 14:33:07.173506] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 
00:19:26.481 [2024-10-14 14:33:07.173564] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3407568 ] 00:19:26.742 [2024-10-14 14:33:07.228640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:26.742 [2024-10-14 14:33:07.257991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:26.742 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:26.742 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:26.742 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.MYhGH3L4Y7 00:19:27.003 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:27.003 [2024-10-14 14:33:07.675408] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:27.003 [2024-10-14 14:33:07.683643] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:27.004 [2024-10-14 14:33:07.684552] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe7ca20 (107): Transport endpoint is not connected 00:19:27.004 [2024-10-14 14:33:07.685548] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe7ca20 (9): Bad file descriptor 00:19:27.004 [2024-10-14 
14:33:07.686550] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:27.004 [2024-10-14 14:33:07.686557] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:27.004 [2024-10-14 14:33:07.686564] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:27.004 [2024-10-14 14:33:07.686572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:27.004 request: 00:19:27.004 { 00:19:27.004 "name": "TLSTEST", 00:19:27.004 "trtype": "tcp", 00:19:27.004 "traddr": "10.0.0.2", 00:19:27.004 "adrfam": "ipv4", 00:19:27.004 "trsvcid": "4420", 00:19:27.004 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:27.004 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:27.004 "prchk_reftag": false, 00:19:27.004 "prchk_guard": false, 00:19:27.004 "hdgst": false, 00:19:27.004 "ddgst": false, 00:19:27.004 "psk": "key0", 00:19:27.004 "allow_unrecognized_csi": false, 00:19:27.004 "method": "bdev_nvme_attach_controller", 00:19:27.004 "req_id": 1 00:19:27.004 } 00:19:27.004 Got JSON-RPC error response 00:19:27.004 response: 00:19:27.004 { 00:19:27.004 "code": -5, 00:19:27.004 "message": "Input/output error" 00:19:27.004 } 00:19:27.004 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3407568 00:19:27.004 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3407568 ']' 00:19:27.004 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3407568 00:19:27.004 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:27.004 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:27.004 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3407568 00:19:27.265 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:27.265 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:27.265 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3407568' 00:19:27.265 killing process with pid 3407568 00:19:27.265 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3407568 00:19:27.265 Received shutdown signal, test time was about 10.000000 seconds 00:19:27.265 00:19:27.265 Latency(us) 00:19:27.265 [2024-10-14T12:33:07.992Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:27.265 [2024-10-14T12:33:07.992Z] =================================================================================================================== 00:19:27.265 [2024-10-14T12:33:07.992Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:27.265 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3407568 00:19:27.265 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:27.265 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:27.265 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:27.265 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:27.265 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:27.265 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.RPRZuciqW3 00:19:27.265 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 
00:19:27.265 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.RPRZuciqW3 00:19:27.265 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:27.265 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:27.265 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:27.265 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:27.265 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.RPRZuciqW3 00:19:27.265 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:27.265 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:27.265 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:19:27.265 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.RPRZuciqW3 00:19:27.265 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:27.265 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3407809 00:19:27.265 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:27.265 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3407809 /var/tmp/bdevperf.sock 00:19:27.265 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 
4096 -w verify -t 10 00:19:27.265 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3407809 ']' 00:19:27.265 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:27.265 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:27.265 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:27.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:27.265 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:27.265 14:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:27.265 [2024-10-14 14:33:07.929537] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 
00:19:27.265 [2024-10-14 14:33:07.929594] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3407809 ] 00:19:27.265 [2024-10-14 14:33:07.982106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:27.525 [2024-10-14 14:33:08.010599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:27.525 14:33:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:27.525 14:33:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:27.525 14:33:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.RPRZuciqW3 00:19:27.785 14:33:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:19:27.785 [2024-10-14 14:33:08.419950] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:27.785 [2024-10-14 14:33:08.425350] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:27.785 [2024-10-14 14:33:08.425369] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:27.785 [2024-10-14 14:33:08.425391] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:19:27.785 [2024-10-14 14:33:08.426122] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x192ba20 (107): Transport endpoint is not connected 00:19:27.785 [2024-10-14 14:33:08.427117] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x192ba20 (9): Bad file descriptor 00:19:27.785 [2024-10-14 14:33:08.428119] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:27.785 [2024-10-14 14:33:08.428127] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:27.785 [2024-10-14 14:33:08.428134] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:27.785 [2024-10-14 14:33:08.428141] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:27.785 request: 00:19:27.785 { 00:19:27.785 "name": "TLSTEST", 00:19:27.785 "trtype": "tcp", 00:19:27.785 "traddr": "10.0.0.2", 00:19:27.785 "adrfam": "ipv4", 00:19:27.785 "trsvcid": "4420", 00:19:27.785 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:27.785 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:27.785 "prchk_reftag": false, 00:19:27.785 "prchk_guard": false, 00:19:27.785 "hdgst": false, 00:19:27.785 "ddgst": false, 00:19:27.785 "psk": "key0", 00:19:27.785 "allow_unrecognized_csi": false, 00:19:27.785 "method": "bdev_nvme_attach_controller", 00:19:27.785 "req_id": 1 00:19:27.785 } 00:19:27.785 Got JSON-RPC error response 00:19:27.785 response: 00:19:27.785 { 00:19:27.785 "code": -5, 00:19:27.785 "message": "Input/output error" 00:19:27.785 } 00:19:27.785 14:33:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3407809 00:19:27.785 14:33:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3407809 ']' 00:19:27.785 14:33:08 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3407809 00:19:27.785 14:33:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:27.785 14:33:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:27.785 14:33:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3407809 00:19:28.046 14:33:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:28.046 14:33:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:28.046 14:33:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3407809' 00:19:28.046 killing process with pid 3407809 00:19:28.046 14:33:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3407809 00:19:28.046 Received shutdown signal, test time was about 10.000000 seconds 00:19:28.046 00:19:28.046 Latency(us) 00:19:28.046 [2024-10-14T12:33:08.773Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:28.046 [2024-10-14T12:33:08.773Z] =================================================================================================================== 00:19:28.046 [2024-10-14T12:33:08.773Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:28.046 14:33:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3407809 00:19:28.046 14:33:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:28.046 14:33:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:28.046 14:33:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:28.046 14:33:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:28.046 14:33:08 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:28.046 14:33:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.RPRZuciqW3 00:19:28.046 14:33:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:28.046 14:33:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.RPRZuciqW3 00:19:28.046 14:33:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:28.046 14:33:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:28.046 14:33:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:28.046 14:33:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:28.046 14:33:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.RPRZuciqW3 00:19:28.046 14:33:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:28.046 14:33:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:28.046 14:33:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:28.046 14:33:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.RPRZuciqW3 00:19:28.046 14:33:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:28.046 14:33:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3408025 00:19:28.046 14:33:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 
'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:28.046 14:33:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3408025 /var/tmp/bdevperf.sock 00:19:28.046 14:33:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:28.046 14:33:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3408025 ']' 00:19:28.046 14:33:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:28.046 14:33:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:28.046 14:33:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:28.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:28.046 14:33:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:28.046 14:33:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:28.046 [2024-10-14 14:33:08.675360] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 
00:19:28.046 [2024-10-14 14:33:08.675417] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3408025 ] 00:19:28.046 [2024-10-14 14:33:08.728103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:28.046 [2024-10-14 14:33:08.756932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:28.306 14:33:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:28.306 14:33:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:28.306 14:33:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.RPRZuciqW3 00:19:28.306 14:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:28.566 [2024-10-14 14:33:09.182976] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:28.566 [2024-10-14 14:33:09.187287] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:28.566 [2024-10-14 14:33:09.187306] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:28.566 [2024-10-14 14:33:09.187324] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:19:28.566 [2024-10-14 14:33:09.187984] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e0a20 (107): Transport endpoint is not connected 00:19:28.566 [2024-10-14 14:33:09.188979] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e0a20 (9): Bad file descriptor 00:19:28.566 [2024-10-14 14:33:09.189980] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:19:28.566 [2024-10-14 14:33:09.189988] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:28.566 [2024-10-14 14:33:09.189995] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:19:28.566 [2024-10-14 14:33:09.190003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:19:28.566 request: 00:19:28.566 { 00:19:28.566 "name": "TLSTEST", 00:19:28.566 "trtype": "tcp", 00:19:28.566 "traddr": "10.0.0.2", 00:19:28.566 "adrfam": "ipv4", 00:19:28.566 "trsvcid": "4420", 00:19:28.566 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:28.566 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:28.566 "prchk_reftag": false, 00:19:28.566 "prchk_guard": false, 00:19:28.566 "hdgst": false, 00:19:28.566 "ddgst": false, 00:19:28.566 "psk": "key0", 00:19:28.566 "allow_unrecognized_csi": false, 00:19:28.566 "method": "bdev_nvme_attach_controller", 00:19:28.566 "req_id": 1 00:19:28.566 } 00:19:28.566 Got JSON-RPC error response 00:19:28.566 response: 00:19:28.566 { 00:19:28.566 "code": -5, 00:19:28.566 "message": "Input/output error" 00:19:28.566 } 00:19:28.566 14:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3408025 00:19:28.566 14:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3408025 ']' 00:19:28.566 14:33:09 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3408025 00:19:28.566 14:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:28.566 14:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:28.566 14:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3408025 00:19:28.566 14:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:28.566 14:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:28.566 14:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3408025' 00:19:28.566 killing process with pid 3408025 00:19:28.566 14:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3408025 00:19:28.566 Received shutdown signal, test time was about 10.000000 seconds 00:19:28.566 00:19:28.566 Latency(us) 00:19:28.566 [2024-10-14T12:33:09.293Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:28.566 [2024-10-14T12:33:09.293Z] =================================================================================================================== 00:19:28.566 [2024-10-14T12:33:09.293Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:28.566 14:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3408025 00:19:28.827 14:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:28.827 14:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:28.827 14:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:28.827 14:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:28.827 14:33:09 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:28.827 14:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:28.827 14:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:28.827 14:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:28.827 14:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:28.827 14:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:28.827 14:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:28.827 14:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:28.827 14:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:28.827 14:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:28.827 14:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:28.827 14:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:28.827 14:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:19:28.827 14:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:28.827 14:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3408457 00:19:28.827 14:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:28.827 14:33:09 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3408457 /var/tmp/bdevperf.sock 00:19:28.827 14:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:28.827 14:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3408457 ']' 00:19:28.827 14:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:28.827 14:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:28.827 14:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:28.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:28.827 14:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:28.827 14:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:28.827 [2024-10-14 14:33:09.441611] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 
00:19:28.827 [2024-10-14 14:33:09.441669] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3408457 ] 00:19:28.827 [2024-10-14 14:33:09.494606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:28.827 [2024-10-14 14:33:09.522978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:29.088 14:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:29.088 14:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:29.088 14:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:19:29.088 [2024-10-14 14:33:09.756113] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:19:29.088 [2024-10-14 14:33:09.756140] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:29.088 request: 00:19:29.088 { 00:19:29.088 "name": "key0", 00:19:29.088 "path": "", 00:19:29.088 "method": "keyring_file_add_key", 00:19:29.088 "req_id": 1 00:19:29.088 } 00:19:29.088 Got JSON-RPC error response 00:19:29.088 response: 00:19:29.088 { 00:19:29.088 "code": -1, 00:19:29.088 "message": "Operation not permitted" 00:19:29.088 } 00:19:29.088 14:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:29.349 [2024-10-14 14:33:09.936643] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:19:29.349 [2024-10-14 14:33:09.936668] bdev_nvme.c:6391:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:29.349 request: 00:19:29.349 { 00:19:29.349 "name": "TLSTEST", 00:19:29.349 "trtype": "tcp", 00:19:29.349 "traddr": "10.0.0.2", 00:19:29.349 "adrfam": "ipv4", 00:19:29.349 "trsvcid": "4420", 00:19:29.349 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:29.349 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:29.349 "prchk_reftag": false, 00:19:29.349 "prchk_guard": false, 00:19:29.349 "hdgst": false, 00:19:29.349 "ddgst": false, 00:19:29.349 "psk": "key0", 00:19:29.349 "allow_unrecognized_csi": false, 00:19:29.349 "method": "bdev_nvme_attach_controller", 00:19:29.349 "req_id": 1 00:19:29.349 } 00:19:29.349 Got JSON-RPC error response 00:19:29.349 response: 00:19:29.349 { 00:19:29.349 "code": -126, 00:19:29.349 "message": "Required key not available" 00:19:29.349 } 00:19:29.349 14:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3408457 00:19:29.349 14:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3408457 ']' 00:19:29.349 14:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3408457 00:19:29.349 14:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:29.349 14:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:29.349 14:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3408457 00:19:29.349 14:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:29.349 14:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:29.349 14:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3408457' 00:19:29.349 killing process with pid 3408457 
00:19:29.349 14:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3408457 00:19:29.349 Received shutdown signal, test time was about 10.000000 seconds 00:19:29.349 00:19:29.349 Latency(us) 00:19:29.349 [2024-10-14T12:33:10.076Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:29.349 [2024-10-14T12:33:10.076Z] =================================================================================================================== 00:19:29.349 [2024-10-14T12:33:10.076Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:29.349 14:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3408457 00:19:29.610 14:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:29.610 14:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:29.610 14:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:29.610 14:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:29.610 14:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:29.610 14:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 3402528 00:19:29.610 14:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3402528 ']' 00:19:29.610 14:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3402528 00:19:29.610 14:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:29.610 14:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:29.610 14:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3402528 00:19:29.610 14:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- 
# process_name=reactor_1 00:19:29.610 14:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:29.610 14:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3402528' 00:19:29.610 killing process with pid 3402528 00:19:29.610 14:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3402528 00:19:29.610 14:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3402528 00:19:29.610 14:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:29.610 14:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:19:29.610 14:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:19:29.610 14:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:19:29.610 14:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:29.610 14:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=2 00:19:29.610 14:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:19:29.870 14:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:29.870 14:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:19:29.870 14:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.po0y6q2UhN 00:19:29.870 14:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:29.870 14:33:10 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.po0y6q2UhN 00:19:29.870 14:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:19:29.871 14:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:29.871 14:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:29.871 14:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:29.871 14:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=3408632 00:19:29.871 14:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 3408632 00:19:29.871 14:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:29.871 14:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3408632 ']' 00:19:29.871 14:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:29.871 14:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:29.871 14:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:29.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:29.871 14:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:29.871 14:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:29.871 [2024-10-14 14:33:10.426865] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 
00:19:29.871 [2024-10-14 14:33:10.426926] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:29.871 [2024-10-14 14:33:10.514758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.871 [2024-10-14 14:33:10.546021] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:29.871 [2024-10-14 14:33:10.546053] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:29.871 [2024-10-14 14:33:10.546069] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:29.871 [2024-10-14 14:33:10.546076] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:29.871 [2024-10-14 14:33:10.546080] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
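The `format_interchange_psk` step near the top of this trace wraps the configured hex key `00112233445566778899aabbccddeeff0011223344556677` with digest `2` into the `NVMeTLSkey-1:02:...` string stored in `/tmp/tmp.po0y6q2UhN`. A minimal sketch of that encoding, assuming the trailer is the CRC-32 of the key bytes appended little-endian (the byte order and the SHA mapping of the hmac field are inferred from the NVMe/TCP TLS PSK interchange format, not read out of SPDK's sources):

```python
import base64
import struct
import zlib

def format_interchange_psk(key: str, hmac_id: int) -> str:
    """Build an NVMe TLS PSK interchange string:
    'NVMeTLSkey-1:<hmac>:<base64(key bytes || CRC-32)>:'.
    hmac_id 1 ~ HMAC-SHA-256, 2 ~ HMAC-SHA-384 (the 'digest=2' in the trace)."""
    payload = key.encode()
    # CRC-32 of the configured key, appended little-endian (assumed byte order)
    trailer = struct.pack("<I", zlib.crc32(payload))
    b64 = base64.b64encode(payload + trailer).decode()
    return "NVMeTLSkey-1:{:02x}:{}:".format(hmac_id, b64)

key_long = format_interchange_psk(
    "00112233445566778899aabbccddeeff0011223344556677", 2)
print(key_long)
```

The base64 body in the trace decodes back to the 48 ASCII hex characters of the configured key plus a 4-byte trailer, which is what the structure above produces.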
00:19:29.871 [2024-10-14 14:33:10.546611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:30.811 14:33:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:30.811 14:33:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:30.812 14:33:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:30.812 14:33:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:30.812 14:33:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:30.812 14:33:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:30.812 14:33:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.po0y6q2UhN 00:19:30.812 14:33:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.po0y6q2UhN 00:19:30.812 14:33:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:30.812 [2024-10-14 14:33:11.407272] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:30.812 14:33:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:31.073 14:33:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:31.073 [2024-10-14 14:33:11.728053] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:31.073 [2024-10-14 14:33:11.728231] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:19:31.073 14:33:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:31.334 malloc0 00:19:31.334 14:33:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:31.334 14:33:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.po0y6q2UhN 00:19:31.595 14:33:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:31.857 14:33:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.po0y6q2UhN 00:19:31.857 14:33:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:31.857 14:33:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:31.857 14:33:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:31.857 14:33:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.po0y6q2UhN 00:19:31.857 14:33:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:31.857 14:33:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3409141 00:19:31.857 14:33:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:31.857 14:33:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3409141 /var/tmp/bdevperf.sock 
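Every `rpc.py ...` invocation in this trace (`nvmf_create_transport`, `keyring_file_add_key`, `bdev_nvme_attach_controller`, ...) is a JSON-RPC 2.0 request sent over a UNIX domain socket, either the target's `/var/tmp/spdk.sock` or bdevperf's `/var/tmp/bdevperf.sock`. A rough sketch of one such call on the wire (the framing here is simplified; the real rpc.py client handles partial reads and batching more carefully):

```python
import json
import socket

def build_rpc_request(method: str, params: dict = None, req_id: int = 1) -> bytes:
    """Encode one JSON-RPC 2.0 request, e.g. for keyring_file_add_key."""
    req = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        req["params"] = params
    return json.dumps(req).encode()

def rpc_call(sock_path: str, method: str, params: dict = None) -> dict:
    """Send one request over the app's UNIX domain RPC socket, decode the reply."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(build_rpc_request(method, params))
        buf = b""
        while True:
            chunk = s.recv(4096)
            if not chunk:
                raise ConnectionError("socket closed before a full reply arrived")
            buf += chunk
            try:
                return json.loads(buf)  # reply complete once it parses
            except json.JSONDecodeError:
                continue
```

The error replies later in this trace (`"code": -1`, `"code": -126`) are exactly the decoded `error` member of such a response.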
00:19:31.857 14:33:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:31.857 14:33:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3409141 ']' 00:19:31.857 14:33:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:31.857 14:33:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:31.857 14:33:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:31.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:31.857 14:33:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:31.857 14:33:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:31.857 [2024-10-14 14:33:12.443229] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 
00:19:31.857 [2024-10-14 14:33:12.443286] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3409141 ] 00:19:31.857 [2024-10-14 14:33:12.496051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:31.857 [2024-10-14 14:33:12.524991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:32.118 14:33:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:32.118 14:33:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:32.118 14:33:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.po0y6q2UhN 00:19:32.118 14:33:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:32.378 [2024-10-14 14:33:12.971245] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:32.378 TLSTESTn1 00:19:32.378 14:33:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:32.638 Running I/O for 10 seconds... 
00:19:34.520 6109.00 IOPS, 23.86 MiB/s [2024-10-14T12:33:16.187Z] 5594.00 IOPS, 21.85 MiB/s [2024-10-14T12:33:17.569Z] 5264.00 IOPS, 20.56 MiB/s [2024-10-14T12:33:18.508Z] 5307.00 IOPS, 20.73 MiB/s [2024-10-14T12:33:19.449Z] 5380.80 IOPS, 21.02 MiB/s [2024-10-14T12:33:20.390Z] 5352.67 IOPS, 20.91 MiB/s [2024-10-14T12:33:21.332Z] 5460.43 IOPS, 21.33 MiB/s [2024-10-14T12:33:22.274Z] 5566.38 IOPS, 21.74 MiB/s [2024-10-14T12:33:23.216Z] 5636.78 IOPS, 22.02 MiB/s [2024-10-14T12:33:23.216Z] 5625.30 IOPS, 21.97 MiB/s 00:19:42.490 Latency(us) 00:19:42.490 [2024-10-14T12:33:23.217Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:42.490 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:42.490 Verification LBA range: start 0x0 length 0x2000 00:19:42.490 TLSTESTn1 : 10.02 5624.05 21.97 0.00 0.00 22722.99 5980.16 27852.80 00:19:42.490 [2024-10-14T12:33:23.217Z] =================================================================================================================== 00:19:42.490 [2024-10-14T12:33:23.217Z] Total : 5624.05 21.97 0.00 0.00 22722.99 5980.16 27852.80 00:19:42.490 { 00:19:42.490 "results": [ 00:19:42.490 { 00:19:42.490 "job": "TLSTESTn1", 00:19:42.490 "core_mask": "0x4", 00:19:42.490 "workload": "verify", 00:19:42.490 "status": "finished", 00:19:42.490 "verify_range": { 00:19:42.490 "start": 0, 00:19:42.490 "length": 8192 00:19:42.490 }, 00:19:42.490 "queue_depth": 128, 00:19:42.490 "io_size": 4096, 00:19:42.490 "runtime": 10.024988, 00:19:42.490 "iops": 5624.046632275271, 00:19:42.490 "mibps": 21.968932157325277, 00:19:42.490 "io_failed": 0, 00:19:42.490 "io_timeout": 0, 00:19:42.490 "avg_latency_us": 22722.992440715847, 00:19:42.490 "min_latency_us": 5980.16, 00:19:42.490 "max_latency_us": 27852.8 00:19:42.490 } 00:19:42.490 ], 00:19:42.490 "core_count": 1 00:19:42.490 } 00:19:42.751 14:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM 
EXIT 00:19:42.751 14:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3409141 00:19:42.751 14:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3409141 ']' 00:19:42.751 14:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3409141 00:19:42.751 14:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:42.751 14:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:42.751 14:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3409141 00:19:42.751 14:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:42.751 14:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:42.751 14:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3409141' 00:19:42.751 killing process with pid 3409141 00:19:42.751 14:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3409141 00:19:42.751 Received shutdown signal, test time was about 10.000000 seconds 00:19:42.751 00:19:42.751 Latency(us) 00:19:42.751 [2024-10-14T12:33:23.478Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:42.751 [2024-10-14T12:33:23.478Z] =================================================================================================================== 00:19:42.751 [2024-10-14T12:33:23.478Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:42.751 14:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3409141 00:19:42.751 14:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.po0y6q2UhN 00:19:42.751 14:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT 
run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.po0y6q2UhN 00:19:42.751 14:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:42.751 14:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.po0y6q2UhN 00:19:42.751 14:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:42.751 14:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:42.751 14:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:42.751 14:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:42.751 14:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.po0y6q2UhN 00:19:42.751 14:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:42.751 14:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:42.751 14:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:42.751 14:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.po0y6q2UhN 00:19:42.751 14:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:42.751 14:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3411330 00:19:42.751 14:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:42.751 14:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3411330 /var/tmp/bdevperf.sock 00:19:42.751 
14:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:42.751 14:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3411330 ']' 00:19:42.751 14:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:42.751 14:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:42.751 14:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:42.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:42.751 14:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:42.751 14:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:42.751 [2024-10-14 14:33:23.459243] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 
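As a sanity check on the bdevperf summary earlier in the trace: the run uses 4 KiB I/Os (`-o 4096`), so the MiB/s column is just IOPS scaled by the I/O size:

```python
IO_SIZE = 4096  # bytes per I/O, from bdevperf's '-o 4096' flag

def iops_to_mibps(iops: float, io_size: int = IO_SIZE) -> float:
    """Convert an IOPS figure at a fixed I/O size to MiB/s."""
    return iops * io_size / (1024 * 1024)

# The summary row above: 5624.05 IOPS at 4 KiB -> ~21.97 MiB/s
print(round(iops_to_mibps(5624.05), 2))
```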
00:19:42.751 [2024-10-14 14:33:23.459303] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3411330 ] 00:19:43.013 [2024-10-14 14:33:23.511017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:43.013 [2024-10-14 14:33:23.540886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:43.013 14:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:43.013 14:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:43.013 14:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.po0y6q2UhN 00:19:43.274 [2024-10-14 14:33:23.773527] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.po0y6q2UhN': 0100666 00:19:43.274 [2024-10-14 14:33:23.773553] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:43.274 request: 00:19:43.274 { 00:19:43.274 "name": "key0", 00:19:43.274 "path": "/tmp/tmp.po0y6q2UhN", 00:19:43.274 "method": "keyring_file_add_key", 00:19:43.274 "req_id": 1 00:19:43.274 } 00:19:43.274 Got JSON-RPC error response 00:19:43.274 response: 00:19:43.274 { 00:19:43.274 "code": -1, 00:19:43.274 "message": "Operation not permitted" 00:19:43.274 } 00:19:43.274 14:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:43.274 [2024-10-14 14:33:23.950037] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:43.274 [2024-10-14 14:33:23.950058] bdev_nvme.c:6391:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:43.274 request: 00:19:43.274 { 00:19:43.274 "name": "TLSTEST", 00:19:43.274 "trtype": "tcp", 00:19:43.274 "traddr": "10.0.0.2", 00:19:43.274 "adrfam": "ipv4", 00:19:43.274 "trsvcid": "4420", 00:19:43.274 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:43.274 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:43.274 "prchk_reftag": false, 00:19:43.274 "prchk_guard": false, 00:19:43.274 "hdgst": false, 00:19:43.274 "ddgst": false, 00:19:43.274 "psk": "key0", 00:19:43.274 "allow_unrecognized_csi": false, 00:19:43.274 "method": "bdev_nvme_attach_controller", 00:19:43.274 "req_id": 1 00:19:43.274 } 00:19:43.274 Got JSON-RPC error response 00:19:43.274 response: 00:19:43.274 { 00:19:43.274 "code": -126, 00:19:43.274 "message": "Required key not available" 00:19:43.274 } 00:19:43.274 14:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3411330 00:19:43.274 14:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3411330 ']' 00:19:43.274 14:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3411330 00:19:43.274 14:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:43.274 14:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:43.274 14:33:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3411330 00:19:43.536 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:43.536 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:43.536 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 3411330' 00:19:43.536 killing process with pid 3411330 00:19:43.536 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3411330 00:19:43.536 Received shutdown signal, test time was about 10.000000 seconds 00:19:43.536 00:19:43.536 Latency(us) 00:19:43.536 [2024-10-14T12:33:24.263Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:43.536 [2024-10-14T12:33:24.263Z] =================================================================================================================== 00:19:43.536 [2024-10-14T12:33:24.263Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:43.536 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3411330 00:19:43.536 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:43.536 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:43.536 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:43.536 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:43.536 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:43.536 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 3408632 00:19:43.536 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3408632 ']' 00:19:43.536 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3408632 00:19:43.536 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:43.536 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:43.536 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3408632 00:19:43.536 
14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:43.536 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:43.536 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3408632' 00:19:43.536 killing process with pid 3408632 00:19:43.536 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3408632 00:19:43.536 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3408632 00:19:43.797 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:19:43.797 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:43.797 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:43.797 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:43.797 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=3411407 00:19:43.797 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 3411407 00:19:43.797 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:43.797 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3411407 ']' 00:19:43.797 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:43.797 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:43.797 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:19:43.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:43.797 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:43.797 14:33:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:43.797 [2024-10-14 14:33:24.370271] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:19:43.797 [2024-10-14 14:33:24.370329] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:43.797 [2024-10-14 14:33:24.456662] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:43.797 [2024-10-14 14:33:24.488306] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:43.797 [2024-10-14 14:33:24.488340] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:43.797 [2024-10-14 14:33:24.488346] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:43.797 [2024-10-14 14:33:24.488350] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:43.798 [2024-10-14 14:33:24.488355] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
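The failures that follow the `chmod 0666` step come from the keyring rejecting group/other-accessible key files (`Invalid permissions for key file '/tmp/tmp.po0y6q2UhN': 0100666`), which is why the test later restores `chmod 0600` before retrying. A minimal reproduction of that check (the exact mask SPDK enforces is assumed here to be any group/other permission bits):

```python
import os
import stat

def check_key_file_perms(path: str) -> None:
    """Reject PSK files readable or writable by group/other, mirroring the
    keyring_file_add_key behavior in this trace (0600 passes, 0666 fails)."""
    mode = os.stat(path).st_mode
    if mode & (stat.S_IRWXG | stat.S_IRWXO):
        raise PermissionError(
            f"Invalid permissions for key file '{path}': {stat.S_IMODE(mode):04o}")
```

With the file at 0666, `keyring_file_add_key` fails with `"code": -1`, and the dependent `bdev_nvme_attach_controller` / `nvmf_subsystem_add_host` calls then fail because `key0` was never added, matching the `-126` and `-32603` responses in the trace.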
00:19:43.798 [2024-10-14 14:33:24.488835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:44.740 14:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:44.740 14:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:44.740 14:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:44.740 14:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:44.740 14:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:44.740 14:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:44.740 14:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.po0y6q2UhN 00:19:44.740 14:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:44.740 14:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.po0y6q2UhN 00:19:44.740 14:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:19:44.740 14:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:44.740 14:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:19:44.740 14:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:44.740 14:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.po0y6q2UhN 00:19:44.740 14:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.po0y6q2UhN 00:19:44.740 14:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:44.740 [2024-10-14 14:33:25.354420] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:44.740 14:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:45.000 14:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:45.000 [2024-10-14 14:33:25.679209] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:45.000 [2024-10-14 14:33:25.679390] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:45.000 14:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:45.261 malloc0 00:19:45.261 14:33:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:45.522 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.po0y6q2UhN 00:19:45.522 [2024-10-14 14:33:26.149982] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.po0y6q2UhN': 0100666 00:19:45.522 [2024-10-14 14:33:26.150001] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:45.522 request: 00:19:45.522 { 00:19:45.522 "name": "key0", 00:19:45.522 "path": "/tmp/tmp.po0y6q2UhN", 00:19:45.522 "method": "keyring_file_add_key", 00:19:45.522 "req_id": 1 
00:19:45.522 } 00:19:45.522 Got JSON-RPC error response 00:19:45.522 response: 00:19:45.522 { 00:19:45.522 "code": -1, 00:19:45.522 "message": "Operation not permitted" 00:19:45.522 } 00:19:45.522 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:45.782 [2024-10-14 14:33:26.318416] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:19:45.782 [2024-10-14 14:33:26.318439] subsystem.c:1055:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:19:45.782 request: 00:19:45.782 { 00:19:45.782 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:45.782 "host": "nqn.2016-06.io.spdk:host1", 00:19:45.782 "psk": "key0", 00:19:45.782 "method": "nvmf_subsystem_add_host", 00:19:45.782 "req_id": 1 00:19:45.782 } 00:19:45.782 Got JSON-RPC error response 00:19:45.782 response: 00:19:45.782 { 00:19:45.782 "code": -32603, 00:19:45.782 "message": "Internal error" 00:19:45.782 } 00:19:45.782 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:45.782 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:45.782 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:45.782 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:45.782 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 3411407 00:19:45.782 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3411407 ']' 00:19:45.782 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3411407 00:19:45.782 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:45.782 14:33:26 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:45.782 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3411407 00:19:45.782 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:45.782 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:45.782 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3411407' 00:19:45.782 killing process with pid 3411407 00:19:45.782 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3411407 00:19:45.782 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3411407 00:19:45.782 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.po0y6q2UhN 00:19:45.782 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:19:46.042 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:46.042 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:46.042 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:46.042 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=3412012 00:19:46.042 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 3412012 00:19:46.042 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:46.042 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3412012 ']' 00:19:46.042 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:46.042 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:46.042 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:46.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:46.042 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:46.042 14:33:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:46.042 [2024-10-14 14:33:26.570188] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:19:46.042 [2024-10-14 14:33:26.570242] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:46.042 [2024-10-14 14:33:26.655585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:46.042 [2024-10-14 14:33:26.686782] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:46.042 [2024-10-14 14:33:26.686817] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:46.042 [2024-10-14 14:33:26.686823] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:46.042 [2024-10-14 14:33:26.686827] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:46.042 [2024-10-14 14:33:26.686832] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:46.042 [2024-10-14 14:33:26.687339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:46.982 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:46.982 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:46.983 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:46.983 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:46.983 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:46.983 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:46.983 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.po0y6q2UhN 00:19:46.983 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.po0y6q2UhN 00:19:46.983 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:46.983 [2024-10-14 14:33:27.552728] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:46.983 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:47.243 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:47.243 [2024-10-14 14:33:27.885540] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:47.243 [2024-10-14 14:33:27.885735] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:19:47.243 14:33:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:47.502 malloc0 00:19:47.502 14:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:47.502 14:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.po0y6q2UhN 00:19:47.762 14:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:48.023 14:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:48.023 14:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=3412411 00:19:48.023 14:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:48.023 14:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 3412411 /var/tmp/bdevperf.sock 00:19:48.023 14:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3412411 ']' 00:19:48.023 14:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:48.023 14:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:48.023 14:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock...' 00:19:48.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:48.023 14:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:48.023 14:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:48.023 [2024-10-14 14:33:28.577124] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:19:48.023 [2024-10-14 14:33:28.577177] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3412411 ] 00:19:48.023 [2024-10-14 14:33:28.629569] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:48.023 [2024-10-14 14:33:28.659077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:48.023 14:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:48.023 14:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:48.023 14:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.po0y6q2UhN 00:19:48.283 14:33:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:48.543 [2024-10-14 14:33:29.076453] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:48.543 TLSTESTn1 00:19:48.543 14:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:19:48.809 14:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:19:48.809 "subsystems": [ 00:19:48.809 { 00:19:48.809 "subsystem": "keyring", 00:19:48.809 "config": [ 00:19:48.809 { 00:19:48.809 "method": "keyring_file_add_key", 00:19:48.809 "params": { 00:19:48.809 "name": "key0", 00:19:48.809 "path": "/tmp/tmp.po0y6q2UhN" 00:19:48.809 } 00:19:48.809 } 00:19:48.809 ] 00:19:48.809 }, 00:19:48.809 { 00:19:48.809 "subsystem": "iobuf", 00:19:48.809 "config": [ 00:19:48.809 { 00:19:48.809 "method": "iobuf_set_options", 00:19:48.809 "params": { 00:19:48.809 "small_pool_count": 8192, 00:19:48.809 "large_pool_count": 1024, 00:19:48.809 "small_bufsize": 8192, 00:19:48.809 "large_bufsize": 135168 00:19:48.809 } 00:19:48.809 } 00:19:48.809 ] 00:19:48.809 }, 00:19:48.809 { 00:19:48.809 "subsystem": "sock", 00:19:48.809 "config": [ 00:19:48.809 { 00:19:48.809 "method": "sock_set_default_impl", 00:19:48.809 "params": { 00:19:48.809 "impl_name": "posix" 00:19:48.809 } 00:19:48.809 }, 00:19:48.809 { 00:19:48.809 "method": "sock_impl_set_options", 00:19:48.809 "params": { 00:19:48.809 "impl_name": "ssl", 00:19:48.809 "recv_buf_size": 4096, 00:19:48.809 "send_buf_size": 4096, 00:19:48.809 "enable_recv_pipe": true, 00:19:48.809 "enable_quickack": false, 00:19:48.809 "enable_placement_id": 0, 00:19:48.809 "enable_zerocopy_send_server": true, 00:19:48.809 "enable_zerocopy_send_client": false, 00:19:48.809 "zerocopy_threshold": 0, 00:19:48.810 "tls_version": 0, 00:19:48.810 "enable_ktls": false 00:19:48.810 } 00:19:48.810 }, 00:19:48.810 { 00:19:48.810 "method": "sock_impl_set_options", 00:19:48.810 "params": { 00:19:48.810 "impl_name": "posix", 00:19:48.810 "recv_buf_size": 2097152, 00:19:48.810 "send_buf_size": 2097152, 00:19:48.810 "enable_recv_pipe": true, 00:19:48.810 "enable_quickack": false, 00:19:48.810 "enable_placement_id": 0, 00:19:48.810 
"enable_zerocopy_send_server": true, 00:19:48.810 "enable_zerocopy_send_client": false, 00:19:48.810 "zerocopy_threshold": 0, 00:19:48.810 "tls_version": 0, 00:19:48.810 "enable_ktls": false 00:19:48.810 } 00:19:48.810 } 00:19:48.810 ] 00:19:48.810 }, 00:19:48.810 { 00:19:48.810 "subsystem": "vmd", 00:19:48.810 "config": [] 00:19:48.810 }, 00:19:48.810 { 00:19:48.810 "subsystem": "accel", 00:19:48.810 "config": [ 00:19:48.810 { 00:19:48.810 "method": "accel_set_options", 00:19:48.810 "params": { 00:19:48.810 "small_cache_size": 128, 00:19:48.810 "large_cache_size": 16, 00:19:48.810 "task_count": 2048, 00:19:48.810 "sequence_count": 2048, 00:19:48.810 "buf_count": 2048 00:19:48.810 } 00:19:48.810 } 00:19:48.810 ] 00:19:48.810 }, 00:19:48.810 { 00:19:48.810 "subsystem": "bdev", 00:19:48.810 "config": [ 00:19:48.810 { 00:19:48.810 "method": "bdev_set_options", 00:19:48.810 "params": { 00:19:48.810 "bdev_io_pool_size": 65535, 00:19:48.810 "bdev_io_cache_size": 256, 00:19:48.810 "bdev_auto_examine": true, 00:19:48.810 "iobuf_small_cache_size": 128, 00:19:48.810 "iobuf_large_cache_size": 16 00:19:48.810 } 00:19:48.810 }, 00:19:48.810 { 00:19:48.810 "method": "bdev_raid_set_options", 00:19:48.810 "params": { 00:19:48.810 "process_window_size_kb": 1024, 00:19:48.810 "process_max_bandwidth_mb_sec": 0 00:19:48.810 } 00:19:48.810 }, 00:19:48.810 { 00:19:48.810 "method": "bdev_iscsi_set_options", 00:19:48.810 "params": { 00:19:48.810 "timeout_sec": 30 00:19:48.810 } 00:19:48.810 }, 00:19:48.810 { 00:19:48.810 "method": "bdev_nvme_set_options", 00:19:48.810 "params": { 00:19:48.810 "action_on_timeout": "none", 00:19:48.810 "timeout_us": 0, 00:19:48.810 "timeout_admin_us": 0, 00:19:48.810 "keep_alive_timeout_ms": 10000, 00:19:48.810 "arbitration_burst": 0, 00:19:48.810 "low_priority_weight": 0, 00:19:48.810 "medium_priority_weight": 0, 00:19:48.810 "high_priority_weight": 0, 00:19:48.810 "nvme_adminq_poll_period_us": 10000, 00:19:48.810 "nvme_ioq_poll_period_us": 0, 00:19:48.810 
"io_queue_requests": 0, 00:19:48.810 "delay_cmd_submit": true, 00:19:48.810 "transport_retry_count": 4, 00:19:48.810 "bdev_retry_count": 3, 00:19:48.810 "transport_ack_timeout": 0, 00:19:48.810 "ctrlr_loss_timeout_sec": 0, 00:19:48.810 "reconnect_delay_sec": 0, 00:19:48.810 "fast_io_fail_timeout_sec": 0, 00:19:48.810 "disable_auto_failback": false, 00:19:48.810 "generate_uuids": false, 00:19:48.810 "transport_tos": 0, 00:19:48.810 "nvme_error_stat": false, 00:19:48.810 "rdma_srq_size": 0, 00:19:48.810 "io_path_stat": false, 00:19:48.810 "allow_accel_sequence": false, 00:19:48.810 "rdma_max_cq_size": 0, 00:19:48.810 "rdma_cm_event_timeout_ms": 0, 00:19:48.810 "dhchap_digests": [ 00:19:48.810 "sha256", 00:19:48.810 "sha384", 00:19:48.810 "sha512" 00:19:48.810 ], 00:19:48.810 "dhchap_dhgroups": [ 00:19:48.810 "null", 00:19:48.810 "ffdhe2048", 00:19:48.810 "ffdhe3072", 00:19:48.810 "ffdhe4096", 00:19:48.810 "ffdhe6144", 00:19:48.810 "ffdhe8192" 00:19:48.810 ] 00:19:48.810 } 00:19:48.810 }, 00:19:48.810 { 00:19:48.810 "method": "bdev_nvme_set_hotplug", 00:19:48.810 "params": { 00:19:48.810 "period_us": 100000, 00:19:48.810 "enable": false 00:19:48.810 } 00:19:48.810 }, 00:19:48.810 { 00:19:48.810 "method": "bdev_malloc_create", 00:19:48.810 "params": { 00:19:48.810 "name": "malloc0", 00:19:48.810 "num_blocks": 8192, 00:19:48.810 "block_size": 4096, 00:19:48.810 "physical_block_size": 4096, 00:19:48.810 "uuid": "de2e3429-057d-4372-be6c-c21ef8352172", 00:19:48.810 "optimal_io_boundary": 0, 00:19:48.810 "md_size": 0, 00:19:48.810 "dif_type": 0, 00:19:48.810 "dif_is_head_of_md": false, 00:19:48.810 "dif_pi_format": 0 00:19:48.810 } 00:19:48.810 }, 00:19:48.810 { 00:19:48.810 "method": "bdev_wait_for_examine" 00:19:48.810 } 00:19:48.810 ] 00:19:48.810 }, 00:19:48.810 { 00:19:48.810 "subsystem": "nbd", 00:19:48.810 "config": [] 00:19:48.810 }, 00:19:48.810 { 00:19:48.810 "subsystem": "scheduler", 00:19:48.810 "config": [ 00:19:48.810 { 00:19:48.810 "method": 
"framework_set_scheduler", 00:19:48.810 "params": { 00:19:48.810 "name": "static" 00:19:48.810 } 00:19:48.810 } 00:19:48.810 ] 00:19:48.810 }, 00:19:48.810 { 00:19:48.810 "subsystem": "nvmf", 00:19:48.810 "config": [ 00:19:48.810 { 00:19:48.810 "method": "nvmf_set_config", 00:19:48.810 "params": { 00:19:48.810 "discovery_filter": "match_any", 00:19:48.810 "admin_cmd_passthru": { 00:19:48.810 "identify_ctrlr": false 00:19:48.810 }, 00:19:48.810 "dhchap_digests": [ 00:19:48.810 "sha256", 00:19:48.810 "sha384", 00:19:48.810 "sha512" 00:19:48.810 ], 00:19:48.810 "dhchap_dhgroups": [ 00:19:48.810 "null", 00:19:48.810 "ffdhe2048", 00:19:48.810 "ffdhe3072", 00:19:48.810 "ffdhe4096", 00:19:48.810 "ffdhe6144", 00:19:48.810 "ffdhe8192" 00:19:48.810 ] 00:19:48.810 } 00:19:48.810 }, 00:19:48.810 { 00:19:48.810 "method": "nvmf_set_max_subsystems", 00:19:48.810 "params": { 00:19:48.810 "max_subsystems": 1024 00:19:48.810 } 00:19:48.810 }, 00:19:48.810 { 00:19:48.810 "method": "nvmf_set_crdt", 00:19:48.810 "params": { 00:19:48.810 "crdt1": 0, 00:19:48.810 "crdt2": 0, 00:19:48.810 "crdt3": 0 00:19:48.810 } 00:19:48.810 }, 00:19:48.810 { 00:19:48.810 "method": "nvmf_create_transport", 00:19:48.810 "params": { 00:19:48.810 "trtype": "TCP", 00:19:48.810 "max_queue_depth": 128, 00:19:48.810 "max_io_qpairs_per_ctrlr": 127, 00:19:48.810 "in_capsule_data_size": 4096, 00:19:48.810 "max_io_size": 131072, 00:19:48.810 "io_unit_size": 131072, 00:19:48.810 "max_aq_depth": 128, 00:19:48.810 "num_shared_buffers": 511, 00:19:48.810 "buf_cache_size": 4294967295, 00:19:48.810 "dif_insert_or_strip": false, 00:19:48.810 "zcopy": false, 00:19:48.810 "c2h_success": false, 00:19:48.810 "sock_priority": 0, 00:19:48.810 "abort_timeout_sec": 1, 00:19:48.810 "ack_timeout": 0, 00:19:48.810 "data_wr_pool_size": 0 00:19:48.810 } 00:19:48.810 }, 00:19:48.810 { 00:19:48.810 "method": "nvmf_create_subsystem", 00:19:48.810 "params": { 00:19:48.810 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:48.810 
"allow_any_host": false, 00:19:48.810 "serial_number": "SPDK00000000000001", 00:19:48.810 "model_number": "SPDK bdev Controller", 00:19:48.810 "max_namespaces": 10, 00:19:48.810 "min_cntlid": 1, 00:19:48.810 "max_cntlid": 65519, 00:19:48.810 "ana_reporting": false 00:19:48.810 } 00:19:48.810 }, 00:19:48.810 { 00:19:48.810 "method": "nvmf_subsystem_add_host", 00:19:48.810 "params": { 00:19:48.810 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:48.810 "host": "nqn.2016-06.io.spdk:host1", 00:19:48.810 "psk": "key0" 00:19:48.810 } 00:19:48.810 }, 00:19:48.810 { 00:19:48.810 "method": "nvmf_subsystem_add_ns", 00:19:48.810 "params": { 00:19:48.810 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:48.810 "namespace": { 00:19:48.810 "nsid": 1, 00:19:48.810 "bdev_name": "malloc0", 00:19:48.810 "nguid": "DE2E3429057D4372BE6CC21EF8352172", 00:19:48.810 "uuid": "de2e3429-057d-4372-be6c-c21ef8352172", 00:19:48.810 "no_auto_visible": false 00:19:48.810 } 00:19:48.810 } 00:19:48.810 }, 00:19:48.810 { 00:19:48.810 "method": "nvmf_subsystem_add_listener", 00:19:48.810 "params": { 00:19:48.810 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:48.810 "listen_address": { 00:19:48.810 "trtype": "TCP", 00:19:48.810 "adrfam": "IPv4", 00:19:48.810 "traddr": "10.0.0.2", 00:19:48.810 "trsvcid": "4420" 00:19:48.811 }, 00:19:48.811 "secure_channel": true 00:19:48.811 } 00:19:48.811 } 00:19:48.811 ] 00:19:48.811 } 00:19:48.811 ] 00:19:48.811 }' 00:19:48.811 14:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:49.072 14:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:19:49.072 "subsystems": [ 00:19:49.072 { 00:19:49.072 "subsystem": "keyring", 00:19:49.072 "config": [ 00:19:49.072 { 00:19:49.072 "method": "keyring_file_add_key", 00:19:49.072 "params": { 00:19:49.072 "name": "key0", 00:19:49.072 "path": "/tmp/tmp.po0y6q2UhN" 00:19:49.072 } 
00:19:49.072 } 00:19:49.072 ] 00:19:49.072 }, 00:19:49.072 { 00:19:49.072 "subsystem": "iobuf", 00:19:49.072 "config": [ 00:19:49.072 { 00:19:49.072 "method": "iobuf_set_options", 00:19:49.072 "params": { 00:19:49.072 "small_pool_count": 8192, 00:19:49.072 "large_pool_count": 1024, 00:19:49.072 "small_bufsize": 8192, 00:19:49.072 "large_bufsize": 135168 00:19:49.072 } 00:19:49.072 } 00:19:49.072 ] 00:19:49.072 }, 00:19:49.072 { 00:19:49.072 "subsystem": "sock", 00:19:49.072 "config": [ 00:19:49.072 { 00:19:49.072 "method": "sock_set_default_impl", 00:19:49.072 "params": { 00:19:49.072 "impl_name": "posix" 00:19:49.072 } 00:19:49.072 }, 00:19:49.072 { 00:19:49.072 "method": "sock_impl_set_options", 00:19:49.072 "params": { 00:19:49.072 "impl_name": "ssl", 00:19:49.072 "recv_buf_size": 4096, 00:19:49.072 "send_buf_size": 4096, 00:19:49.072 "enable_recv_pipe": true, 00:19:49.072 "enable_quickack": false, 00:19:49.072 "enable_placement_id": 0, 00:19:49.072 "enable_zerocopy_send_server": true, 00:19:49.072 "enable_zerocopy_send_client": false, 00:19:49.072 "zerocopy_threshold": 0, 00:19:49.072 "tls_version": 0, 00:19:49.072 "enable_ktls": false 00:19:49.072 } 00:19:49.072 }, 00:19:49.072 { 00:19:49.072 "method": "sock_impl_set_options", 00:19:49.072 "params": { 00:19:49.072 "impl_name": "posix", 00:19:49.072 "recv_buf_size": 2097152, 00:19:49.072 "send_buf_size": 2097152, 00:19:49.072 "enable_recv_pipe": true, 00:19:49.072 "enable_quickack": false, 00:19:49.072 "enable_placement_id": 0, 00:19:49.072 "enable_zerocopy_send_server": true, 00:19:49.072 "enable_zerocopy_send_client": false, 00:19:49.072 "zerocopy_threshold": 0, 00:19:49.072 "tls_version": 0, 00:19:49.072 "enable_ktls": false 00:19:49.072 } 00:19:49.072 } 00:19:49.072 ] 00:19:49.072 }, 00:19:49.072 { 00:19:49.072 "subsystem": "vmd", 00:19:49.072 "config": [] 00:19:49.072 }, 00:19:49.072 { 00:19:49.072 "subsystem": "accel", 00:19:49.072 "config": [ 00:19:49.072 { 00:19:49.072 "method": "accel_set_options", 
00:19:49.072 "params": { 00:19:49.072 "small_cache_size": 128, 00:19:49.072 "large_cache_size": 16, 00:19:49.072 "task_count": 2048, 00:19:49.072 "sequence_count": 2048, 00:19:49.072 "buf_count": 2048 00:19:49.072 } 00:19:49.072 } 00:19:49.072 ] 00:19:49.072 }, 00:19:49.072 { 00:19:49.072 "subsystem": "bdev", 00:19:49.072 "config": [ 00:19:49.072 { 00:19:49.073 "method": "bdev_set_options", 00:19:49.073 "params": { 00:19:49.073 "bdev_io_pool_size": 65535, 00:19:49.073 "bdev_io_cache_size": 256, 00:19:49.073 "bdev_auto_examine": true, 00:19:49.073 "iobuf_small_cache_size": 128, 00:19:49.073 "iobuf_large_cache_size": 16 00:19:49.073 } 00:19:49.073 }, 00:19:49.073 { 00:19:49.073 "method": "bdev_raid_set_options", 00:19:49.073 "params": { 00:19:49.073 "process_window_size_kb": 1024, 00:19:49.073 "process_max_bandwidth_mb_sec": 0 00:19:49.073 } 00:19:49.073 }, 00:19:49.073 { 00:19:49.073 "method": "bdev_iscsi_set_options", 00:19:49.073 "params": { 00:19:49.073 "timeout_sec": 30 00:19:49.073 } 00:19:49.073 }, 00:19:49.073 { 00:19:49.073 "method": "bdev_nvme_set_options", 00:19:49.073 "params": { 00:19:49.073 "action_on_timeout": "none", 00:19:49.073 "timeout_us": 0, 00:19:49.073 "timeout_admin_us": 0, 00:19:49.073 "keep_alive_timeout_ms": 10000, 00:19:49.073 "arbitration_burst": 0, 00:19:49.073 "low_priority_weight": 0, 00:19:49.073 "medium_priority_weight": 0, 00:19:49.073 "high_priority_weight": 0, 00:19:49.073 "nvme_adminq_poll_period_us": 10000, 00:19:49.073 "nvme_ioq_poll_period_us": 0, 00:19:49.073 "io_queue_requests": 512, 00:19:49.073 "delay_cmd_submit": true, 00:19:49.073 "transport_retry_count": 4, 00:19:49.073 "bdev_retry_count": 3, 00:19:49.073 "transport_ack_timeout": 0, 00:19:49.073 "ctrlr_loss_timeout_sec": 0, 00:19:49.073 "reconnect_delay_sec": 0, 00:19:49.073 "fast_io_fail_timeout_sec": 0, 00:19:49.073 "disable_auto_failback": false, 00:19:49.073 "generate_uuids": false, 00:19:49.073 "transport_tos": 0, 00:19:49.073 "nvme_error_stat": false, 00:19:49.073 
"rdma_srq_size": 0, 00:19:49.073 "io_path_stat": false, 00:19:49.073 "allow_accel_sequence": false, 00:19:49.073 "rdma_max_cq_size": 0, 00:19:49.073 "rdma_cm_event_timeout_ms": 0, 00:19:49.073 "dhchap_digests": [ 00:19:49.073 "sha256", 00:19:49.073 "sha384", 00:19:49.073 "sha512" 00:19:49.073 ], 00:19:49.073 "dhchap_dhgroups": [ 00:19:49.073 "null", 00:19:49.073 "ffdhe2048", 00:19:49.073 "ffdhe3072", 00:19:49.073 "ffdhe4096", 00:19:49.073 "ffdhe6144", 00:19:49.073 "ffdhe8192" 00:19:49.073 ] 00:19:49.073 } 00:19:49.073 }, 00:19:49.073 { 00:19:49.073 "method": "bdev_nvme_attach_controller", 00:19:49.073 "params": { 00:19:49.073 "name": "TLSTEST", 00:19:49.073 "trtype": "TCP", 00:19:49.073 "adrfam": "IPv4", 00:19:49.073 "traddr": "10.0.0.2", 00:19:49.073 "trsvcid": "4420", 00:19:49.073 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:49.073 "prchk_reftag": false, 00:19:49.073 "prchk_guard": false, 00:19:49.073 "ctrlr_loss_timeout_sec": 0, 00:19:49.073 "reconnect_delay_sec": 0, 00:19:49.073 "fast_io_fail_timeout_sec": 0, 00:19:49.073 "psk": "key0", 00:19:49.073 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:49.073 "hdgst": false, 00:19:49.073 "ddgst": false, 00:19:49.073 "multipath": "multipath" 00:19:49.073 } 00:19:49.073 }, 00:19:49.073 { 00:19:49.073 "method": "bdev_nvme_set_hotplug", 00:19:49.073 "params": { 00:19:49.073 "period_us": 100000, 00:19:49.073 "enable": false 00:19:49.073 } 00:19:49.073 }, 00:19:49.073 { 00:19:49.073 "method": "bdev_wait_for_examine" 00:19:49.073 } 00:19:49.073 ] 00:19:49.073 }, 00:19:49.073 { 00:19:49.073 "subsystem": "nbd", 00:19:49.073 "config": [] 00:19:49.073 } 00:19:49.073 ] 00:19:49.073 }' 00:19:49.073 14:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 3412411 00:19:49.073 14:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3412411 ']' 00:19:49.073 14:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3412411 00:19:49.073 
14:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:49.073 14:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:49.073 14:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3412411 00:19:49.073 14:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:49.073 14:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:49.073 14:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3412411' 00:19:49.073 killing process with pid 3412411 00:19:49.073 14:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3412411 00:19:49.073 Received shutdown signal, test time was about 10.000000 seconds 00:19:49.073 00:19:49.073 Latency(us) 00:19:49.073 [2024-10-14T12:33:29.800Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:49.073 [2024-10-14T12:33:29.800Z] =================================================================================================================== 00:19:49.073 [2024-10-14T12:33:29.800Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:49.073 14:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3412411 00:19:49.337 14:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 3412012 00:19:49.337 14:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3412012 ']' 00:19:49.337 14:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3412012 00:19:49.337 14:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:49.337 14:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux 
= Linux ']' 00:19:49.337 14:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3412012 00:19:49.337 14:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:49.337 14:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:49.337 14:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3412012' 00:19:49.337 killing process with pid 3412012 00:19:49.337 14:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3412012 00:19:49.337 14:33:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3412012 00:19:49.337 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:19:49.337 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:49.337 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:49.337 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:49.337 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:19:49.337 "subsystems": [ 00:19:49.337 { 00:19:49.337 "subsystem": "keyring", 00:19:49.337 "config": [ 00:19:49.337 { 00:19:49.337 "method": "keyring_file_add_key", 00:19:49.337 "params": { 00:19:49.337 "name": "key0", 00:19:49.337 "path": "/tmp/tmp.po0y6q2UhN" 00:19:49.337 } 00:19:49.337 } 00:19:49.337 ] 00:19:49.337 }, 00:19:49.337 { 00:19:49.337 "subsystem": "iobuf", 00:19:49.337 "config": [ 00:19:49.337 { 00:19:49.337 "method": "iobuf_set_options", 00:19:49.337 "params": { 00:19:49.337 "small_pool_count": 8192, 00:19:49.337 "large_pool_count": 1024, 00:19:49.337 "small_bufsize": 8192, 00:19:49.337 "large_bufsize": 135168 00:19:49.337 } 00:19:49.337 } 00:19:49.337 ] 
00:19:49.337 }, 00:19:49.337 { 00:19:49.337 "subsystem": "sock", 00:19:49.337 "config": [ 00:19:49.337 { 00:19:49.337 "method": "sock_set_default_impl", 00:19:49.337 "params": { 00:19:49.337 "impl_name": "posix" 00:19:49.337 } 00:19:49.337 }, 00:19:49.337 { 00:19:49.337 "method": "sock_impl_set_options", 00:19:49.337 "params": { 00:19:49.337 "impl_name": "ssl", 00:19:49.337 "recv_buf_size": 4096, 00:19:49.337 "send_buf_size": 4096, 00:19:49.337 "enable_recv_pipe": true, 00:19:49.337 "enable_quickack": false, 00:19:49.337 "enable_placement_id": 0, 00:19:49.337 "enable_zerocopy_send_server": true, 00:19:49.337 "enable_zerocopy_send_client": false, 00:19:49.337 "zerocopy_threshold": 0, 00:19:49.337 "tls_version": 0, 00:19:49.337 "enable_ktls": false 00:19:49.337 } 00:19:49.337 }, 00:19:49.337 { 00:19:49.337 "method": "sock_impl_set_options", 00:19:49.337 "params": { 00:19:49.337 "impl_name": "posix", 00:19:49.337 "recv_buf_size": 2097152, 00:19:49.337 "send_buf_size": 2097152, 00:19:49.337 "enable_recv_pipe": true, 00:19:49.337 "enable_quickack": false, 00:19:49.337 "enable_placement_id": 0, 00:19:49.337 "enable_zerocopy_send_server": true, 00:19:49.337 "enable_zerocopy_send_client": false, 00:19:49.337 "zerocopy_threshold": 0, 00:19:49.337 "tls_version": 0, 00:19:49.337 "enable_ktls": false 00:19:49.337 } 00:19:49.337 } 00:19:49.337 ] 00:19:49.337 }, 00:19:49.337 { 00:19:49.337 "subsystem": "vmd", 00:19:49.337 "config": [] 00:19:49.337 }, 00:19:49.337 { 00:19:49.337 "subsystem": "accel", 00:19:49.337 "config": [ 00:19:49.337 { 00:19:49.337 "method": "accel_set_options", 00:19:49.337 "params": { 00:19:49.337 "small_cache_size": 128, 00:19:49.337 "large_cache_size": 16, 00:19:49.337 "task_count": 2048, 00:19:49.337 "sequence_count": 2048, 00:19:49.337 "buf_count": 2048 00:19:49.337 } 00:19:49.337 } 00:19:49.337 ] 00:19:49.337 }, 00:19:49.337 { 00:19:49.337 "subsystem": "bdev", 00:19:49.337 "config": [ 00:19:49.337 { 00:19:49.337 "method": "bdev_set_options", 
00:19:49.337 "params": { 00:19:49.337 "bdev_io_pool_size": 65535, 00:19:49.337 "bdev_io_cache_size": 256, 00:19:49.337 "bdev_auto_examine": true, 00:19:49.337 "iobuf_small_cache_size": 128, 00:19:49.337 "iobuf_large_cache_size": 16 00:19:49.337 } 00:19:49.337 }, 00:19:49.337 { 00:19:49.337 "method": "bdev_raid_set_options", 00:19:49.337 "params": { 00:19:49.337 "process_window_size_kb": 1024, 00:19:49.337 "process_max_bandwidth_mb_sec": 0 00:19:49.337 } 00:19:49.337 }, 00:19:49.337 { 00:19:49.337 "method": "bdev_iscsi_set_options", 00:19:49.337 "params": { 00:19:49.337 "timeout_sec": 30 00:19:49.337 } 00:19:49.337 }, 00:19:49.337 { 00:19:49.337 "method": "bdev_nvme_set_options", 00:19:49.337 "params": { 00:19:49.337 "action_on_timeout": "none", 00:19:49.337 "timeout_us": 0, 00:19:49.337 "timeout_admin_us": 0, 00:19:49.337 "keep_alive_timeout_ms": 10000, 00:19:49.337 "arbitration_burst": 0, 00:19:49.337 "low_priority_weight": 0, 00:19:49.337 "medium_priority_weight": 0, 00:19:49.337 "high_priority_weight": 0, 00:19:49.337 "nvme_adminq_poll_period_us": 10000, 00:19:49.337 "nvme_ioq_poll_period_us": 0, 00:19:49.337 "io_queue_requests": 0, 00:19:49.337 "delay_cmd_submit": true, 00:19:49.337 "transport_retry_count": 4, 00:19:49.337 "bdev_retry_count": 3, 00:19:49.337 "transport_ack_timeout": 0, 00:19:49.337 "ctrlr_loss_timeout_sec": 0, 00:19:49.337 "reconnect_delay_sec": 0, 00:19:49.337 "fast_io_fail_timeout_sec": 0, 00:19:49.337 "disable_auto_failback": false, 00:19:49.337 "generate_uuids": false, 00:19:49.337 "transport_tos": 0, 00:19:49.337 "nvme_error_stat": false, 00:19:49.337 "rdma_srq_size": 0, 00:19:49.337 "io_path_stat": false, 00:19:49.337 "allow_accel_sequence": false, 00:19:49.337 "rdma_max_cq_size": 0, 00:19:49.337 "rdma_cm_event_timeout_ms": 0, 00:19:49.337 "dhchap_digests": [ 00:19:49.337 "sha256", 00:19:49.337 "sha384", 00:19:49.337 "sha512" 00:19:49.337 ], 00:19:49.337 "dhchap_dhgroups": [ 00:19:49.337 "null", 00:19:49.337 "ffdhe2048", 00:19:49.337 
"ffdhe3072", 00:19:49.337 "ffdhe4096", 00:19:49.337 "ffdhe6144", 00:19:49.337 "ffdhe8192" 00:19:49.337 ] 00:19:49.337 } 00:19:49.337 }, 00:19:49.337 { 00:19:49.337 "method": "bdev_nvme_set_hotplug", 00:19:49.337 "params": { 00:19:49.337 "period_us": 100000, 00:19:49.337 "enable": false 00:19:49.337 } 00:19:49.337 }, 00:19:49.337 { 00:19:49.337 "method": "bdev_malloc_create", 00:19:49.337 "params": { 00:19:49.337 "name": "malloc0", 00:19:49.337 "num_blocks": 8192, 00:19:49.337 "block_size": 4096, 00:19:49.337 "physical_block_size": 4096, 00:19:49.337 "uuid": "de2e3429-057d-4372-be6c-c21ef8352172", 00:19:49.337 "optimal_io_boundary": 0, 00:19:49.337 "md_size": 0, 00:19:49.337 "dif_type": 0, 00:19:49.337 "dif_is_head_of_md": false, 00:19:49.337 "dif_pi_format": 0 00:19:49.337 } 00:19:49.337 }, 00:19:49.337 { 00:19:49.337 "method": "bdev_wait_for_examine" 00:19:49.337 } 00:19:49.337 ] 00:19:49.337 }, 00:19:49.337 { 00:19:49.337 "subsystem": "nbd", 00:19:49.337 "config": [] 00:19:49.337 }, 00:19:49.337 { 00:19:49.337 "subsystem": "scheduler", 00:19:49.337 "config": [ 00:19:49.337 { 00:19:49.337 "method": "framework_set_scheduler", 00:19:49.337 "params": { 00:19:49.337 "name": "static" 00:19:49.337 } 00:19:49.337 } 00:19:49.337 ] 00:19:49.337 }, 00:19:49.337 { 00:19:49.337 "subsystem": "nvmf", 00:19:49.337 "config": [ 00:19:49.337 { 00:19:49.337 "method": "nvmf_set_config", 00:19:49.337 "params": { 00:19:49.337 "discovery_filter": "match_any", 00:19:49.337 "admin_cmd_passthru": { 00:19:49.337 "identify_ctrlr": false 00:19:49.337 }, 00:19:49.337 "dhchap_digests": [ 00:19:49.337 "sha256", 00:19:49.337 "sha384", 00:19:49.337 "sha512" 00:19:49.337 ], 00:19:49.338 "dhchap_dhgroups": [ 00:19:49.338 "null", 00:19:49.338 "ffdhe2048", 00:19:49.338 "ffdhe3072", 00:19:49.338 "ffdhe4096", 00:19:49.338 "ffdhe6144", 00:19:49.338 "ffdhe8192" 00:19:49.338 ] 00:19:49.338 } 00:19:49.338 }, 00:19:49.338 { 00:19:49.338 "method": "nvmf_set_max_subsystems", 00:19:49.338 "params": { 
00:19:49.338 "max_subsystems": 1024 00:19:49.338 } 00:19:49.338 }, 00:19:49.338 { 00:19:49.338 "method": "nvmf_set_crdt", 00:19:49.338 "params": { 00:19:49.338 "crdt1": 0, 00:19:49.338 "crdt2": 0, 00:19:49.338 "crdt3": 0 00:19:49.338 } 00:19:49.338 }, 00:19:49.338 { 00:19:49.338 "method": "nvmf_create_transport", 00:19:49.338 "params": { 00:19:49.338 "trtype": "TCP", 00:19:49.338 "max_queue_depth": 128, 00:19:49.338 "max_io_qpairs_per_ctrlr": 127, 00:19:49.338 "in_capsule_data_size": 4096, 00:19:49.338 "max_io_size": 131072, 00:19:49.338 "io_unit_size": 131072, 00:19:49.338 "max_aq_depth": 128, 00:19:49.338 "num_shared_buffers": 511, 00:19:49.338 "buf_cache_size": 4294967295, 00:19:49.338 "dif_insert_or_strip": false, 00:19:49.338 "zcopy": false, 00:19:49.338 "c2h_success": false, 00:19:49.338 "sock_priority": 0, 00:19:49.338 "abort_timeout_sec": 1, 00:19:49.338 "ack_timeout": 0, 00:19:49.338 "data_wr_pool_size": 0 00:19:49.338 } 00:19:49.338 }, 00:19:49.338 { 00:19:49.338 "method": "nvmf_create_subsystem", 00:19:49.338 "params": { 00:19:49.338 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:49.338 "allow_any_host": false, 00:19:49.338 "serial_number": "SPDK00000000000001", 00:19:49.338 "model_number": "SPDK bdev Controller", 00:19:49.338 "max_namespaces": 10, 00:19:49.338 "min_cntlid": 1, 00:19:49.338 "max_cntlid": 65519, 00:19:49.338 "ana_reporting": false 00:19:49.338 } 00:19:49.338 }, 00:19:49.338 { 00:19:49.338 "method": "nvmf_subsystem_add_host", 00:19:49.338 "params": { 00:19:49.338 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:49.338 "host": "nqn.2016-06.io.spdk:host1", 00:19:49.338 "psk": "key0" 00:19:49.338 } 00:19:49.338 }, 00:19:49.338 { 00:19:49.338 "method": "nvmf_subsystem_add_ns", 00:19:49.338 "params": { 00:19:49.338 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:49.338 "namespace": { 00:19:49.338 "nsid": 1, 00:19:49.338 "bdev_name": "malloc0", 00:19:49.338 "nguid": "DE2E3429057D4372BE6CC21EF8352172", 00:19:49.338 "uuid": 
"de2e3429-057d-4372-be6c-c21ef8352172", 00:19:49.338 "no_auto_visible": false 00:19:49.338 } 00:19:49.338 } 00:19:49.338 }, 00:19:49.338 { 00:19:49.338 "method": "nvmf_subsystem_add_listener", 00:19:49.338 "params": { 00:19:49.338 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:49.338 "listen_address": { 00:19:49.338 "trtype": "TCP", 00:19:49.338 "adrfam": "IPv4", 00:19:49.338 "traddr": "10.0.0.2", 00:19:49.338 "trsvcid": "4420" 00:19:49.338 }, 00:19:49.338 "secure_channel": true 00:19:49.338 } 00:19:49.338 } 00:19:49.338 ] 00:19:49.338 } 00:19:49.338 ] 00:19:49.338 }' 00:19:49.338 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=3412757 00:19:49.338 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 3412757 00:19:49.338 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:49.338 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3412757 ']' 00:19:49.338 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:49.338 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:49.338 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:49.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:49.338 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:49.338 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:49.599 [2024-10-14 14:33:30.084949] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 
00:19:49.599 [2024-10-14 14:33:30.085003] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:49.599 [2024-10-14 14:33:30.167401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:49.599 [2024-10-14 14:33:30.195817] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:49.599 [2024-10-14 14:33:30.195845] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:49.599 [2024-10-14 14:33:30.195851] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:49.599 [2024-10-14 14:33:30.195855] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:49.599 [2024-10-14 14:33:30.195859] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:49.599 [2024-10-14 14:33:30.196344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:49.859 [2024-10-14 14:33:30.389213] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:49.859 [2024-10-14 14:33:30.421238] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:49.859 [2024-10-14 14:33:30.421431] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:50.430 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:50.430 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:50.430 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:50.430 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:50.430 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:50.430 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:50.430 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=3412789 00:19:50.430 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 3412789 /var/tmp/bdevperf.sock 00:19:50.430 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3412789 ']' 00:19:50.430 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:50.430 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:50.430 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:19:50.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:50.430 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:19:50.430 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:50.430 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:50.430 14:33:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:19:50.430 "subsystems": [ 00:19:50.430 { 00:19:50.430 "subsystem": "keyring", 00:19:50.430 "config": [ 00:19:50.430 { 00:19:50.430 "method": "keyring_file_add_key", 00:19:50.430 "params": { 00:19:50.430 "name": "key0", 00:19:50.431 "path": "/tmp/tmp.po0y6q2UhN" 00:19:50.431 } 00:19:50.431 } 00:19:50.431 ] 00:19:50.431 }, 00:19:50.431 { 00:19:50.431 "subsystem": "iobuf", 00:19:50.431 "config": [ 00:19:50.431 { 00:19:50.431 "method": "iobuf_set_options", 00:19:50.431 "params": { 00:19:50.431 "small_pool_count": 8192, 00:19:50.431 "large_pool_count": 1024, 00:19:50.431 "small_bufsize": 8192, 00:19:50.431 "large_bufsize": 135168 00:19:50.431 } 00:19:50.431 } 00:19:50.431 ] 00:19:50.431 }, 00:19:50.431 { 00:19:50.431 "subsystem": "sock", 00:19:50.431 "config": [ 00:19:50.431 { 00:19:50.431 "method": "sock_set_default_impl", 00:19:50.431 "params": { 00:19:50.431 "impl_name": "posix" 00:19:50.431 } 00:19:50.431 }, 00:19:50.431 { 00:19:50.431 "method": "sock_impl_set_options", 00:19:50.431 "params": { 00:19:50.431 "impl_name": "ssl", 00:19:50.431 "recv_buf_size": 4096, 00:19:50.431 "send_buf_size": 4096, 00:19:50.431 "enable_recv_pipe": true, 00:19:50.431 "enable_quickack": false, 00:19:50.431 "enable_placement_id": 0, 00:19:50.431 "enable_zerocopy_send_server": true, 00:19:50.431 "enable_zerocopy_send_client": false, 00:19:50.431 
"zerocopy_threshold": 0, 00:19:50.431 "tls_version": 0, 00:19:50.431 "enable_ktls": false 00:19:50.431 } 00:19:50.431 }, 00:19:50.431 { 00:19:50.431 "method": "sock_impl_set_options", 00:19:50.431 "params": { 00:19:50.431 "impl_name": "posix", 00:19:50.431 "recv_buf_size": 2097152, 00:19:50.431 "send_buf_size": 2097152, 00:19:50.431 "enable_recv_pipe": true, 00:19:50.431 "enable_quickack": false, 00:19:50.431 "enable_placement_id": 0, 00:19:50.431 "enable_zerocopy_send_server": true, 00:19:50.431 "enable_zerocopy_send_client": false, 00:19:50.431 "zerocopy_threshold": 0, 00:19:50.431 "tls_version": 0, 00:19:50.431 "enable_ktls": false 00:19:50.431 } 00:19:50.431 } 00:19:50.431 ] 00:19:50.431 }, 00:19:50.431 { 00:19:50.431 "subsystem": "vmd", 00:19:50.431 "config": [] 00:19:50.431 }, 00:19:50.431 { 00:19:50.431 "subsystem": "accel", 00:19:50.431 "config": [ 00:19:50.431 { 00:19:50.431 "method": "accel_set_options", 00:19:50.431 "params": { 00:19:50.431 "small_cache_size": 128, 00:19:50.431 "large_cache_size": 16, 00:19:50.431 "task_count": 2048, 00:19:50.431 "sequence_count": 2048, 00:19:50.431 "buf_count": 2048 00:19:50.431 } 00:19:50.431 } 00:19:50.431 ] 00:19:50.431 }, 00:19:50.431 { 00:19:50.431 "subsystem": "bdev", 00:19:50.431 "config": [ 00:19:50.431 { 00:19:50.431 "method": "bdev_set_options", 00:19:50.431 "params": { 00:19:50.431 "bdev_io_pool_size": 65535, 00:19:50.431 "bdev_io_cache_size": 256, 00:19:50.431 "bdev_auto_examine": true, 00:19:50.431 "iobuf_small_cache_size": 128, 00:19:50.431 "iobuf_large_cache_size": 16 00:19:50.431 } 00:19:50.431 }, 00:19:50.431 { 00:19:50.431 "method": "bdev_raid_set_options", 00:19:50.431 "params": { 00:19:50.431 "process_window_size_kb": 1024, 00:19:50.431 "process_max_bandwidth_mb_sec": 0 00:19:50.431 } 00:19:50.431 }, 00:19:50.431 { 00:19:50.431 "method": "bdev_iscsi_set_options", 00:19:50.431 "params": { 00:19:50.431 "timeout_sec": 30 00:19:50.431 } 00:19:50.431 }, 00:19:50.431 { 00:19:50.431 "method": 
"bdev_nvme_set_options", 00:19:50.431 "params": { 00:19:50.431 "action_on_timeout": "none", 00:19:50.431 "timeout_us": 0, 00:19:50.431 "timeout_admin_us": 0, 00:19:50.431 "keep_alive_timeout_ms": 10000, 00:19:50.431 "arbitration_burst": 0, 00:19:50.431 "low_priority_weight": 0, 00:19:50.431 "medium_priority_weight": 0, 00:19:50.431 "high_priority_weight": 0, 00:19:50.431 "nvme_adminq_poll_period_us": 10000, 00:19:50.431 "nvme_ioq_poll_period_us": 0, 00:19:50.431 "io_queue_requests": 512, 00:19:50.431 "delay_cmd_submit": true, 00:19:50.431 "transport_retry_count": 4, 00:19:50.431 "bdev_retry_count": 3, 00:19:50.431 "transport_ack_timeout": 0, 00:19:50.431 "ctrlr_loss_timeout_sec": 0, 00:19:50.431 "reconnect_delay_sec": 0, 00:19:50.431 "fast_io_fail_timeout_sec": 0, 00:19:50.431 "disable_auto_failback": false, 00:19:50.431 "generate_uuids": false, 00:19:50.431 "transport_tos": 0, 00:19:50.431 "nvme_error_stat": false, 00:19:50.431 "rdma_srq_size": 0, 00:19:50.431 "io_path_stat": false, 00:19:50.431 "allow_accel_sequence": false, 00:19:50.431 "rdma_max_cq_size": 0, 00:19:50.431 "rdma_cm_event_timeout_ms": 0, 00:19:50.431 "dhchap_digests": [ 00:19:50.431 "sha256", 00:19:50.431 "sha384", 00:19:50.431 "sha512" 00:19:50.431 ], 00:19:50.431 "dhchap_dhgroups": [ 00:19:50.431 "null", 00:19:50.431 "ffdhe2048", 00:19:50.431 "ffdhe3072", 00:19:50.431 "ffdhe4096", 00:19:50.431 "ffdhe6144", 00:19:50.431 "ffdhe8192" 00:19:50.431 ] 00:19:50.431 } 00:19:50.431 }, 00:19:50.431 { 00:19:50.431 "method": "bdev_nvme_attach_controller", 00:19:50.431 "params": { 00:19:50.431 "name": "TLSTEST", 00:19:50.431 "trtype": "TCP", 00:19:50.431 "adrfam": "IPv4", 00:19:50.431 "traddr": "10.0.0.2", 00:19:50.431 "trsvcid": "4420", 00:19:50.431 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:50.431 "prchk_reftag": false, 00:19:50.431 "prchk_guard": false, 00:19:50.431 "ctrlr_loss_timeout_sec": 0, 00:19:50.431 "reconnect_delay_sec": 0, 00:19:50.431 "fast_io_fail_timeout_sec": 0, 00:19:50.431 "psk": 
"key0", 00:19:50.431 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:50.431 "hdgst": false, 00:19:50.431 "ddgst": false, 00:19:50.431 "multipath": "multipath" 00:19:50.431 } 00:19:50.431 }, 00:19:50.431 { 00:19:50.431 "method": "bdev_nvme_set_hotplug", 00:19:50.431 "params": { 00:19:50.431 "period_us": 100000, 00:19:50.431 "enable": false 00:19:50.431 } 00:19:50.431 }, 00:19:50.431 { 00:19:50.431 "method": "bdev_wait_for_examine" 00:19:50.431 } 00:19:50.431 ] 00:19:50.431 }, 00:19:50.431 { 00:19:50.431 "subsystem": "nbd", 00:19:50.431 "config": [] 00:19:50.431 } 00:19:50.431 ] 00:19:50.431 }' 00:19:50.431 [2024-10-14 14:33:30.968955] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:19:50.431 [2024-10-14 14:33:30.969046] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3412789 ] 00:19:50.431 [2024-10-14 14:33:31.025388] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:50.431 [2024-10-14 14:33:31.054793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:50.692 [2024-10-14 14:33:31.189155] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:51.264 14:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:51.264 14:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:51.264 14:33:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:51.264 Running I/O for 10 seconds... 
00:19:53.145 5381.00 IOPS, 21.02 MiB/s [2024-10-14T12:33:35.256Z] 5853.00 IOPS, 22.86 MiB/s [2024-10-14T12:33:36.198Z] 5965.67 IOPS, 23.30 MiB/s [2024-10-14T12:33:37.139Z] 5925.75 IOPS, 23.15 MiB/s [2024-10-14T12:33:38.081Z] 5940.40 IOPS, 23.20 MiB/s [2024-10-14T12:33:39.021Z] 5923.33 IOPS, 23.14 MiB/s [2024-10-14T12:33:39.962Z] 5935.00 IOPS, 23.18 MiB/s [2024-10-14T12:33:40.903Z] 5885.88 IOPS, 22.99 MiB/s [2024-10-14T12:33:42.288Z] 5926.89 IOPS, 23.15 MiB/s [2024-10-14T12:33:42.288Z] 5943.20 IOPS, 23.22 MiB/s 00:20:01.561 Latency(us) 00:20:01.561 [2024-10-14T12:33:42.288Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:01.561 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:01.561 Verification LBA range: start 0x0 length 0x2000 00:20:01.561 TLSTESTn1 : 10.02 5941.85 23.21 0.00 0.00 21503.21 4915.20 26542.08 00:20:01.561 [2024-10-14T12:33:42.288Z] =================================================================================================================== 00:20:01.561 [2024-10-14T12:33:42.288Z] Total : 5941.85 23.21 0.00 0.00 21503.21 4915.20 26542.08 00:20:01.561 { 00:20:01.561 "results": [ 00:20:01.561 { 00:20:01.561 "job": "TLSTESTn1", 00:20:01.561 "core_mask": "0x4", 00:20:01.561 "workload": "verify", 00:20:01.561 "status": "finished", 00:20:01.561 "verify_range": { 00:20:01.561 "start": 0, 00:20:01.561 "length": 8192 00:20:01.561 }, 00:20:01.561 "queue_depth": 128, 00:20:01.561 "io_size": 4096, 00:20:01.561 "runtime": 10.02382, 00:20:01.561 "iops": 5941.846521585583, 00:20:01.561 "mibps": 23.210337974943684, 00:20:01.561 "io_failed": 0, 00:20:01.561 "io_timeout": 0, 00:20:01.561 "avg_latency_us": 21503.205695097382, 00:20:01.561 "min_latency_us": 4915.2, 00:20:01.561 "max_latency_us": 26542.08 00:20:01.561 } 00:20:01.561 ], 00:20:01.561 "core_count": 1 00:20:01.561 } 00:20:01.561 14:33:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM 
EXIT 00:20:01.561 14:33:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 3412789 00:20:01.561 14:33:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3412789 ']' 00:20:01.561 14:33:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3412789 00:20:01.561 14:33:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:01.561 14:33:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:01.561 14:33:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3412789 00:20:01.561 14:33:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:01.561 14:33:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:01.561 14:33:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3412789' 00:20:01.561 killing process with pid 3412789 00:20:01.561 14:33:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3412789 00:20:01.561 Received shutdown signal, test time was about 10.000000 seconds 00:20:01.561 00:20:01.561 Latency(us) 00:20:01.561 [2024-10-14T12:33:42.288Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:01.561 [2024-10-14T12:33:42.288Z] =================================================================================================================== 00:20:01.561 [2024-10-14T12:33:42.288Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:01.561 14:33:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3412789 00:20:01.561 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 3412757 00:20:01.561 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 
3412757 ']' 00:20:01.561 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3412757 00:20:01.561 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:01.561 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:01.561 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3412757 00:20:01.561 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:01.561 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:01.561 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3412757' 00:20:01.561 killing process with pid 3412757 00:20:01.561 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3412757 00:20:01.561 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3412757 00:20:01.561 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:20:01.561 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:01.561 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:01.561 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:01.561 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=3415133 00:20:01.561 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 3415133 00:20:01.561 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:01.561 14:33:42 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3415133 ']' 00:20:01.561 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:01.561 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:01.561 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:01.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:01.561 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:01.561 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:01.822 [2024-10-14 14:33:42.301407] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:20:01.822 [2024-10-14 14:33:42.301452] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:01.822 [2024-10-14 14:33:42.357477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:01.822 [2024-10-14 14:33:42.391867] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:01.822 [2024-10-14 14:33:42.391901] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:01.822 [2024-10-14 14:33:42.391908] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:01.822 [2024-10-14 14:33:42.391915] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:20:01.822 [2024-10-14 14:33:42.391921] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:01.822 [2024-10-14 14:33:42.392505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:01.822 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:01.822 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:01.822 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:01.822 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:01.822 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:01.822 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:01.822 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.po0y6q2UhN 00:20:01.822 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.po0y6q2UhN 00:20:01.822 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:02.082 [2024-10-14 14:33:42.663297] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:02.082 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:02.343 14:33:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:02.343 [2024-10-14 14:33:43.024190] tcp.c:1031:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:20:02.343 [2024-10-14 14:33:43.024415] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:02.343 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:02.603 malloc0 00:20:02.603 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:02.864 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.po0y6q2UhN 00:20:03.125 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:03.125 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=3415448 00:20:03.125 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:03.125 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:03.125 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 3415448 /var/tmp/bdevperf.sock 00:20:03.125 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3415448 ']' 00:20:03.125 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:03.125 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:03.125 
14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:03.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:03.125 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:03.125 14:33:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:03.125 [2024-10-14 14:33:43.847522] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:20:03.125 [2024-10-14 14:33:43.847583] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3415448 ] 00:20:03.385 [2024-10-14 14:33:43.925003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:03.385 [2024-10-14 14:33:43.954899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:03.955 14:33:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:03.955 14:33:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:03.955 14:33:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.po0y6q2UhN 00:20:04.216 14:33:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:04.216 [2024-10-14 14:33:44.938619] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is 
considered experimental 00:20:04.477 nvme0n1 00:20:04.477 14:33:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:04.477 Running I/O for 1 seconds... 00:20:05.418 5905.00 IOPS, 23.07 MiB/s 00:20:05.418 Latency(us) 00:20:05.418 [2024-10-14T12:33:46.145Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:05.418 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:05.418 Verification LBA range: start 0x0 length 0x2000 00:20:05.418 nvme0n1 : 1.02 5934.01 23.18 0.00 0.00 21396.70 4587.52 30146.56 00:20:05.418 [2024-10-14T12:33:46.145Z] =================================================================================================================== 00:20:05.418 [2024-10-14T12:33:46.145Z] Total : 5934.01 23.18 0.00 0.00 21396.70 4587.52 30146.56 00:20:05.418 { 00:20:05.418 "results": [ 00:20:05.418 { 00:20:05.418 "job": "nvme0n1", 00:20:05.418 "core_mask": "0x2", 00:20:05.418 "workload": "verify", 00:20:05.418 "status": "finished", 00:20:05.418 "verify_range": { 00:20:05.418 "start": 0, 00:20:05.418 "length": 8192 00:20:05.418 }, 00:20:05.418 "queue_depth": 128, 00:20:05.418 "io_size": 4096, 00:20:05.418 "runtime": 1.016851, 00:20:05.418 "iops": 5934.006063818593, 00:20:05.418 "mibps": 23.17971118679138, 00:20:05.418 "io_failed": 0, 00:20:05.418 "io_timeout": 0, 00:20:05.418 "avg_latency_us": 21396.701182189816, 00:20:05.418 "min_latency_us": 4587.52, 00:20:05.418 "max_latency_us": 30146.56 00:20:05.418 } 00:20:05.418 ], 00:20:05.418 "core_count": 1 00:20:05.418 } 00:20:05.418 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 3415448 00:20:05.418 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3415448 ']' 00:20:05.418 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # 
kill -0 3415448 00:20:05.418 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:05.418 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:05.418 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3415448 00:20:05.679 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:05.679 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:05.679 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3415448' 00:20:05.679 killing process with pid 3415448 00:20:05.679 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3415448 00:20:05.679 Received shutdown signal, test time was about 1.000000 seconds 00:20:05.679 00:20:05.679 Latency(us) 00:20:05.679 [2024-10-14T12:33:46.406Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:05.679 [2024-10-14T12:33:46.406Z] =================================================================================================================== 00:20:05.679 [2024-10-14T12:33:46.406Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:05.679 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3415448 00:20:05.679 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 3415133 00:20:05.679 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3415133 ']' 00:20:05.679 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3415133 00:20:05.679 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:05.679 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # 
'[' Linux = Linux ']' 00:20:05.679 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3415133 00:20:05.679 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:05.679 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:05.679 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3415133' 00:20:05.679 killing process with pid 3415133 00:20:05.679 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3415133 00:20:05.679 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3415133 00:20:05.940 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:20:05.940 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:05.940 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:05.940 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:05.940 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=3415853 00:20:05.940 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:05.940 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 3415853 00:20:05.940 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3415853 ']' 00:20:05.940 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:05.940 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:05.940 
14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:05.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:05.940 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:05.940 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:05.940 [2024-10-14 14:33:46.547762] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:20:05.940 [2024-10-14 14:33:46.547815] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:05.940 [2024-10-14 14:33:46.615509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:05.940 [2024-10-14 14:33:46.649394] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:05.940 [2024-10-14 14:33:46.649429] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:05.940 [2024-10-14 14:33:46.649437] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:05.940 [2024-10-14 14:33:46.649443] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:05.940 [2024-10-14 14:33:46.649449] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:05.940 [2024-10-14 14:33:46.650008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:06.201 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:06.201 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:06.201 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:06.201 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:06.201 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:06.201 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:06.201 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:20:06.201 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.201 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:06.201 [2024-10-14 14:33:46.776520] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:06.201 malloc0 00:20:06.201 [2024-10-14 14:33:46.803142] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:06.201 [2024-10-14 14:33:46.803355] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:06.201 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.201 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=3415961 00:20:06.201 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 3415961 /var/tmp/bdevperf.sock 00:20:06.201 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf 
-m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:06.201 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3415961 ']' 00:20:06.201 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:06.201 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:06.201 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:06.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:06.201 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:06.201 14:33:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:06.201 [2024-10-14 14:33:46.883489] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 
00:20:06.201 [2024-10-14 14:33:46.883534] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3415961 ] 00:20:06.462 [2024-10-14 14:33:46.960198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.462 [2024-10-14 14:33:46.990119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:07.033 14:33:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:07.033 14:33:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:07.033 14:33:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.po0y6q2UhN 00:20:07.293 14:33:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:07.293 [2024-10-14 14:33:48.005885] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:07.553 nvme0n1 00:20:07.553 14:33:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:07.553 Running I/O for 1 seconds... 
00:20:08.495 4063.00 IOPS, 15.87 MiB/s 00:20:08.495 Latency(us) 00:20:08.495 [2024-10-14T12:33:49.222Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:08.495 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:08.495 Verification LBA range: start 0x0 length 0x2000 00:20:08.495 nvme0n1 : 1.02 4111.23 16.06 0.00 0.00 30883.48 6799.36 51991.89 00:20:08.495 [2024-10-14T12:33:49.222Z] =================================================================================================================== 00:20:08.495 [2024-10-14T12:33:49.222Z] Total : 4111.23 16.06 0.00 0.00 30883.48 6799.36 51991.89 00:20:08.495 { 00:20:08.495 "results": [ 00:20:08.495 { 00:20:08.495 "job": "nvme0n1", 00:20:08.495 "core_mask": "0x2", 00:20:08.495 "workload": "verify", 00:20:08.495 "status": "finished", 00:20:08.495 "verify_range": { 00:20:08.495 "start": 0, 00:20:08.495 "length": 8192 00:20:08.495 }, 00:20:08.495 "queue_depth": 128, 00:20:08.495 "io_size": 4096, 00:20:08.495 "runtime": 1.019645, 00:20:08.495 "iops": 4111.234792501312, 00:20:08.495 "mibps": 16.05951090820825, 00:20:08.495 "io_failed": 0, 00:20:08.495 "io_timeout": 0, 00:20:08.495 "avg_latency_us": 30883.484987277352, 00:20:08.495 "min_latency_us": 6799.36, 00:20:08.495 "max_latency_us": 51991.89333333333 00:20:08.495 } 00:20:08.495 ], 00:20:08.495 "core_count": 1 00:20:08.495 } 00:20:08.757 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:20:08.757 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.757 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:08.757 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.757 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:20:08.757 "subsystems": [ 00:20:08.757 { 00:20:08.757 "subsystem": "keyring", 
00:20:08.757 "config": [ 00:20:08.757 { 00:20:08.757 "method": "keyring_file_add_key", 00:20:08.757 "params": { 00:20:08.757 "name": "key0", 00:20:08.757 "path": "/tmp/tmp.po0y6q2UhN" 00:20:08.757 } 00:20:08.757 } 00:20:08.757 ] 00:20:08.757 }, 00:20:08.757 { 00:20:08.757 "subsystem": "iobuf", 00:20:08.757 "config": [ 00:20:08.757 { 00:20:08.757 "method": "iobuf_set_options", 00:20:08.757 "params": { 00:20:08.757 "small_pool_count": 8192, 00:20:08.757 "large_pool_count": 1024, 00:20:08.757 "small_bufsize": 8192, 00:20:08.757 "large_bufsize": 135168 00:20:08.757 } 00:20:08.757 } 00:20:08.757 ] 00:20:08.757 }, 00:20:08.757 { 00:20:08.757 "subsystem": "sock", 00:20:08.757 "config": [ 00:20:08.757 { 00:20:08.757 "method": "sock_set_default_impl", 00:20:08.757 "params": { 00:20:08.757 "impl_name": "posix" 00:20:08.757 } 00:20:08.757 }, 00:20:08.757 { 00:20:08.757 "method": "sock_impl_set_options", 00:20:08.757 "params": { 00:20:08.757 "impl_name": "ssl", 00:20:08.757 "recv_buf_size": 4096, 00:20:08.757 "send_buf_size": 4096, 00:20:08.757 "enable_recv_pipe": true, 00:20:08.757 "enable_quickack": false, 00:20:08.757 "enable_placement_id": 0, 00:20:08.757 "enable_zerocopy_send_server": true, 00:20:08.757 "enable_zerocopy_send_client": false, 00:20:08.757 "zerocopy_threshold": 0, 00:20:08.757 "tls_version": 0, 00:20:08.757 "enable_ktls": false 00:20:08.757 } 00:20:08.757 }, 00:20:08.757 { 00:20:08.757 "method": "sock_impl_set_options", 00:20:08.757 "params": { 00:20:08.757 "impl_name": "posix", 00:20:08.757 "recv_buf_size": 2097152, 00:20:08.757 "send_buf_size": 2097152, 00:20:08.757 "enable_recv_pipe": true, 00:20:08.757 "enable_quickack": false, 00:20:08.757 "enable_placement_id": 0, 00:20:08.757 "enable_zerocopy_send_server": true, 00:20:08.757 "enable_zerocopy_send_client": false, 00:20:08.757 "zerocopy_threshold": 0, 00:20:08.757 "tls_version": 0, 00:20:08.757 "enable_ktls": false 00:20:08.757 } 00:20:08.757 } 00:20:08.757 ] 00:20:08.757 }, 00:20:08.757 { 00:20:08.757 
"subsystem": "vmd", 00:20:08.757 "config": [] 00:20:08.757 }, 00:20:08.757 { 00:20:08.757 "subsystem": "accel", 00:20:08.757 "config": [ 00:20:08.757 { 00:20:08.757 "method": "accel_set_options", 00:20:08.757 "params": { 00:20:08.757 "small_cache_size": 128, 00:20:08.757 "large_cache_size": 16, 00:20:08.757 "task_count": 2048, 00:20:08.757 "sequence_count": 2048, 00:20:08.757 "buf_count": 2048 00:20:08.757 } 00:20:08.757 } 00:20:08.757 ] 00:20:08.757 }, 00:20:08.757 { 00:20:08.757 "subsystem": "bdev", 00:20:08.757 "config": [ 00:20:08.757 { 00:20:08.757 "method": "bdev_set_options", 00:20:08.757 "params": { 00:20:08.757 "bdev_io_pool_size": 65535, 00:20:08.757 "bdev_io_cache_size": 256, 00:20:08.757 "bdev_auto_examine": true, 00:20:08.757 "iobuf_small_cache_size": 128, 00:20:08.757 "iobuf_large_cache_size": 16 00:20:08.757 } 00:20:08.757 }, 00:20:08.757 { 00:20:08.757 "method": "bdev_raid_set_options", 00:20:08.757 "params": { 00:20:08.757 "process_window_size_kb": 1024, 00:20:08.757 "process_max_bandwidth_mb_sec": 0 00:20:08.757 } 00:20:08.757 }, 00:20:08.757 { 00:20:08.757 "method": "bdev_iscsi_set_options", 00:20:08.757 "params": { 00:20:08.757 "timeout_sec": 30 00:20:08.757 } 00:20:08.757 }, 00:20:08.757 { 00:20:08.757 "method": "bdev_nvme_set_options", 00:20:08.757 "params": { 00:20:08.757 "action_on_timeout": "none", 00:20:08.757 "timeout_us": 0, 00:20:08.757 "timeout_admin_us": 0, 00:20:08.757 "keep_alive_timeout_ms": 10000, 00:20:08.757 "arbitration_burst": 0, 00:20:08.757 "low_priority_weight": 0, 00:20:08.757 "medium_priority_weight": 0, 00:20:08.757 "high_priority_weight": 0, 00:20:08.757 "nvme_adminq_poll_period_us": 10000, 00:20:08.757 "nvme_ioq_poll_period_us": 0, 00:20:08.757 "io_queue_requests": 0, 00:20:08.757 "delay_cmd_submit": true, 00:20:08.757 "transport_retry_count": 4, 00:20:08.757 "bdev_retry_count": 3, 00:20:08.757 "transport_ack_timeout": 0, 00:20:08.758 "ctrlr_loss_timeout_sec": 0, 00:20:08.758 "reconnect_delay_sec": 0, 00:20:08.758 
"fast_io_fail_timeout_sec": 0, 00:20:08.758 "disable_auto_failback": false, 00:20:08.758 "generate_uuids": false, 00:20:08.758 "transport_tos": 0, 00:20:08.758 "nvme_error_stat": false, 00:20:08.758 "rdma_srq_size": 0, 00:20:08.758 "io_path_stat": false, 00:20:08.758 "allow_accel_sequence": false, 00:20:08.758 "rdma_max_cq_size": 0, 00:20:08.758 "rdma_cm_event_timeout_ms": 0, 00:20:08.758 "dhchap_digests": [ 00:20:08.758 "sha256", 00:20:08.758 "sha384", 00:20:08.758 "sha512" 00:20:08.758 ], 00:20:08.758 "dhchap_dhgroups": [ 00:20:08.758 "null", 00:20:08.758 "ffdhe2048", 00:20:08.758 "ffdhe3072", 00:20:08.758 "ffdhe4096", 00:20:08.758 "ffdhe6144", 00:20:08.758 "ffdhe8192" 00:20:08.758 ] 00:20:08.758 } 00:20:08.758 }, 00:20:08.758 { 00:20:08.758 "method": "bdev_nvme_set_hotplug", 00:20:08.758 "params": { 00:20:08.758 "period_us": 100000, 00:20:08.758 "enable": false 00:20:08.758 } 00:20:08.758 }, 00:20:08.758 { 00:20:08.758 "method": "bdev_malloc_create", 00:20:08.758 "params": { 00:20:08.758 "name": "malloc0", 00:20:08.758 "num_blocks": 8192, 00:20:08.758 "block_size": 4096, 00:20:08.758 "physical_block_size": 4096, 00:20:08.758 "uuid": "e8eb29fd-cf61-433a-b057-8f07706f7f5d", 00:20:08.758 "optimal_io_boundary": 0, 00:20:08.758 "md_size": 0, 00:20:08.758 "dif_type": 0, 00:20:08.758 "dif_is_head_of_md": false, 00:20:08.758 "dif_pi_format": 0 00:20:08.758 } 00:20:08.758 }, 00:20:08.758 { 00:20:08.758 "method": "bdev_wait_for_examine" 00:20:08.758 } 00:20:08.758 ] 00:20:08.758 }, 00:20:08.758 { 00:20:08.758 "subsystem": "nbd", 00:20:08.758 "config": [] 00:20:08.758 }, 00:20:08.758 { 00:20:08.758 "subsystem": "scheduler", 00:20:08.758 "config": [ 00:20:08.758 { 00:20:08.758 "method": "framework_set_scheduler", 00:20:08.758 "params": { 00:20:08.758 "name": "static" 00:20:08.758 } 00:20:08.758 } 00:20:08.758 ] 00:20:08.758 }, 00:20:08.758 { 00:20:08.758 "subsystem": "nvmf", 00:20:08.758 "config": [ 00:20:08.758 { 00:20:08.758 "method": "nvmf_set_config", 00:20:08.758 
"params": { 00:20:08.758 "discovery_filter": "match_any", 00:20:08.758 "admin_cmd_passthru": { 00:20:08.758 "identify_ctrlr": false 00:20:08.758 }, 00:20:08.758 "dhchap_digests": [ 00:20:08.758 "sha256", 00:20:08.758 "sha384", 00:20:08.758 "sha512" 00:20:08.758 ], 00:20:08.758 "dhchap_dhgroups": [ 00:20:08.758 "null", 00:20:08.758 "ffdhe2048", 00:20:08.758 "ffdhe3072", 00:20:08.758 "ffdhe4096", 00:20:08.758 "ffdhe6144", 00:20:08.758 "ffdhe8192" 00:20:08.758 ] 00:20:08.758 } 00:20:08.758 }, 00:20:08.758 { 00:20:08.758 "method": "nvmf_set_max_subsystems", 00:20:08.758 "params": { 00:20:08.758 "max_subsystems": 1024 00:20:08.758 } 00:20:08.758 }, 00:20:08.758 { 00:20:08.758 "method": "nvmf_set_crdt", 00:20:08.758 "params": { 00:20:08.758 "crdt1": 0, 00:20:08.758 "crdt2": 0, 00:20:08.758 "crdt3": 0 00:20:08.758 } 00:20:08.758 }, 00:20:08.758 { 00:20:08.758 "method": "nvmf_create_transport", 00:20:08.758 "params": { 00:20:08.758 "trtype": "TCP", 00:20:08.758 "max_queue_depth": 128, 00:20:08.758 "max_io_qpairs_per_ctrlr": 127, 00:20:08.758 "in_capsule_data_size": 4096, 00:20:08.758 "max_io_size": 131072, 00:20:08.758 "io_unit_size": 131072, 00:20:08.758 "max_aq_depth": 128, 00:20:08.758 "num_shared_buffers": 511, 00:20:08.758 "buf_cache_size": 4294967295, 00:20:08.758 "dif_insert_or_strip": false, 00:20:08.758 "zcopy": false, 00:20:08.758 "c2h_success": false, 00:20:08.758 "sock_priority": 0, 00:20:08.758 "abort_timeout_sec": 1, 00:20:08.758 "ack_timeout": 0, 00:20:08.758 "data_wr_pool_size": 0 00:20:08.758 } 00:20:08.758 }, 00:20:08.758 { 00:20:08.758 "method": "nvmf_create_subsystem", 00:20:08.758 "params": { 00:20:08.758 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:08.758 "allow_any_host": false, 00:20:08.758 "serial_number": "00000000000000000000", 00:20:08.758 "model_number": "SPDK bdev Controller", 00:20:08.758 "max_namespaces": 32, 00:20:08.758 "min_cntlid": 1, 00:20:08.758 "max_cntlid": 65519, 00:20:08.758 "ana_reporting": false 00:20:08.758 } 00:20:08.758 }, 
00:20:08.758 { 00:20:08.758 "method": "nvmf_subsystem_add_host", 00:20:08.758 "params": { 00:20:08.758 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:08.758 "host": "nqn.2016-06.io.spdk:host1", 00:20:08.758 "psk": "key0" 00:20:08.758 } 00:20:08.758 }, 00:20:08.758 { 00:20:08.758 "method": "nvmf_subsystem_add_ns", 00:20:08.758 "params": { 00:20:08.758 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:08.758 "namespace": { 00:20:08.758 "nsid": 1, 00:20:08.758 "bdev_name": "malloc0", 00:20:08.758 "nguid": "E8EB29FDCF61433AB0578F07706F7F5D", 00:20:08.758 "uuid": "e8eb29fd-cf61-433a-b057-8f07706f7f5d", 00:20:08.758 "no_auto_visible": false 00:20:08.758 } 00:20:08.758 } 00:20:08.758 }, 00:20:08.758 { 00:20:08.758 "method": "nvmf_subsystem_add_listener", 00:20:08.758 "params": { 00:20:08.758 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:08.758 "listen_address": { 00:20:08.758 "trtype": "TCP", 00:20:08.758 "adrfam": "IPv4", 00:20:08.758 "traddr": "10.0.0.2", 00:20:08.758 "trsvcid": "4420" 00:20:08.758 }, 00:20:08.758 "secure_channel": false, 00:20:08.758 "sock_impl": "ssl" 00:20:08.758 } 00:20:08.758 } 00:20:08.758 ] 00:20:08.758 } 00:20:08.758 ] 00:20:08.758 }' 00:20:08.758 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:09.019 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:20:09.019 "subsystems": [ 00:20:09.019 { 00:20:09.019 "subsystem": "keyring", 00:20:09.019 "config": [ 00:20:09.019 { 00:20:09.019 "method": "keyring_file_add_key", 00:20:09.019 "params": { 00:20:09.019 "name": "key0", 00:20:09.019 "path": "/tmp/tmp.po0y6q2UhN" 00:20:09.019 } 00:20:09.019 } 00:20:09.019 ] 00:20:09.020 }, 00:20:09.020 { 00:20:09.020 "subsystem": "iobuf", 00:20:09.020 "config": [ 00:20:09.020 { 00:20:09.020 "method": "iobuf_set_options", 00:20:09.020 "params": { 00:20:09.020 "small_pool_count": 8192, 00:20:09.020 "large_pool_count": 
1024, 00:20:09.020 "small_bufsize": 8192, 00:20:09.020 "large_bufsize": 135168 00:20:09.020 } 00:20:09.020 } 00:20:09.020 ] 00:20:09.020 }, 00:20:09.020 { 00:20:09.020 "subsystem": "sock", 00:20:09.020 "config": [ 00:20:09.020 { 00:20:09.020 "method": "sock_set_default_impl", 00:20:09.020 "params": { 00:20:09.020 "impl_name": "posix" 00:20:09.020 } 00:20:09.020 }, 00:20:09.020 { 00:20:09.020 "method": "sock_impl_set_options", 00:20:09.020 "params": { 00:20:09.020 "impl_name": "ssl", 00:20:09.020 "recv_buf_size": 4096, 00:20:09.020 "send_buf_size": 4096, 00:20:09.020 "enable_recv_pipe": true, 00:20:09.020 "enable_quickack": false, 00:20:09.020 "enable_placement_id": 0, 00:20:09.020 "enable_zerocopy_send_server": true, 00:20:09.020 "enable_zerocopy_send_client": false, 00:20:09.020 "zerocopy_threshold": 0, 00:20:09.020 "tls_version": 0, 00:20:09.020 "enable_ktls": false 00:20:09.020 } 00:20:09.020 }, 00:20:09.020 { 00:20:09.020 "method": "sock_impl_set_options", 00:20:09.020 "params": { 00:20:09.020 "impl_name": "posix", 00:20:09.020 "recv_buf_size": 2097152, 00:20:09.020 "send_buf_size": 2097152, 00:20:09.020 "enable_recv_pipe": true, 00:20:09.020 "enable_quickack": false, 00:20:09.020 "enable_placement_id": 0, 00:20:09.020 "enable_zerocopy_send_server": true, 00:20:09.020 "enable_zerocopy_send_client": false, 00:20:09.020 "zerocopy_threshold": 0, 00:20:09.020 "tls_version": 0, 00:20:09.020 "enable_ktls": false 00:20:09.020 } 00:20:09.020 } 00:20:09.020 ] 00:20:09.020 }, 00:20:09.020 { 00:20:09.020 "subsystem": "vmd", 00:20:09.020 "config": [] 00:20:09.020 }, 00:20:09.020 { 00:20:09.020 "subsystem": "accel", 00:20:09.020 "config": [ 00:20:09.020 { 00:20:09.020 "method": "accel_set_options", 00:20:09.020 "params": { 00:20:09.020 "small_cache_size": 128, 00:20:09.020 "large_cache_size": 16, 00:20:09.020 "task_count": 2048, 00:20:09.020 "sequence_count": 2048, 00:20:09.020 "buf_count": 2048 00:20:09.020 } 00:20:09.020 } 00:20:09.020 ] 00:20:09.020 }, 00:20:09.020 { 
00:20:09.020 "subsystem": "bdev", 00:20:09.020 "config": [ 00:20:09.020 { 00:20:09.020 "method": "bdev_set_options", 00:20:09.020 "params": { 00:20:09.020 "bdev_io_pool_size": 65535, 00:20:09.020 "bdev_io_cache_size": 256, 00:20:09.020 "bdev_auto_examine": true, 00:20:09.020 "iobuf_small_cache_size": 128, 00:20:09.020 "iobuf_large_cache_size": 16 00:20:09.020 } 00:20:09.020 }, 00:20:09.020 { 00:20:09.020 "method": "bdev_raid_set_options", 00:20:09.020 "params": { 00:20:09.020 "process_window_size_kb": 1024, 00:20:09.020 "process_max_bandwidth_mb_sec": 0 00:20:09.020 } 00:20:09.020 }, 00:20:09.020 { 00:20:09.020 "method": "bdev_iscsi_set_options", 00:20:09.020 "params": { 00:20:09.020 "timeout_sec": 30 00:20:09.020 } 00:20:09.020 }, 00:20:09.020 { 00:20:09.020 "method": "bdev_nvme_set_options", 00:20:09.020 "params": { 00:20:09.020 "action_on_timeout": "none", 00:20:09.020 "timeout_us": 0, 00:20:09.020 "timeout_admin_us": 0, 00:20:09.020 "keep_alive_timeout_ms": 10000, 00:20:09.020 "arbitration_burst": 0, 00:20:09.020 "low_priority_weight": 0, 00:20:09.020 "medium_priority_weight": 0, 00:20:09.020 "high_priority_weight": 0, 00:20:09.020 "nvme_adminq_poll_period_us": 10000, 00:20:09.020 "nvme_ioq_poll_period_us": 0, 00:20:09.020 "io_queue_requests": 512, 00:20:09.020 "delay_cmd_submit": true, 00:20:09.020 "transport_retry_count": 4, 00:20:09.020 "bdev_retry_count": 3, 00:20:09.020 "transport_ack_timeout": 0, 00:20:09.020 "ctrlr_loss_timeout_sec": 0, 00:20:09.020 "reconnect_delay_sec": 0, 00:20:09.020 "fast_io_fail_timeout_sec": 0, 00:20:09.020 "disable_auto_failback": false, 00:20:09.020 "generate_uuids": false, 00:20:09.020 "transport_tos": 0, 00:20:09.020 "nvme_error_stat": false, 00:20:09.020 "rdma_srq_size": 0, 00:20:09.020 "io_path_stat": false, 00:20:09.020 "allow_accel_sequence": false, 00:20:09.020 "rdma_max_cq_size": 0, 00:20:09.020 "rdma_cm_event_timeout_ms": 0, 00:20:09.020 "dhchap_digests": [ 00:20:09.020 "sha256", 00:20:09.020 "sha384", 00:20:09.020 
"sha512" 00:20:09.020 ], 00:20:09.020 "dhchap_dhgroups": [ 00:20:09.020 "null", 00:20:09.020 "ffdhe2048", 00:20:09.020 "ffdhe3072", 00:20:09.020 "ffdhe4096", 00:20:09.020 "ffdhe6144", 00:20:09.020 "ffdhe8192" 00:20:09.020 ] 00:20:09.020 } 00:20:09.020 }, 00:20:09.020 { 00:20:09.020 "method": "bdev_nvme_attach_controller", 00:20:09.020 "params": { 00:20:09.020 "name": "nvme0", 00:20:09.020 "trtype": "TCP", 00:20:09.020 "adrfam": "IPv4", 00:20:09.020 "traddr": "10.0.0.2", 00:20:09.020 "trsvcid": "4420", 00:20:09.020 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:09.020 "prchk_reftag": false, 00:20:09.020 "prchk_guard": false, 00:20:09.020 "ctrlr_loss_timeout_sec": 0, 00:20:09.020 "reconnect_delay_sec": 0, 00:20:09.020 "fast_io_fail_timeout_sec": 0, 00:20:09.020 "psk": "key0", 00:20:09.020 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:09.020 "hdgst": false, 00:20:09.020 "ddgst": false, 00:20:09.020 "multipath": "multipath" 00:20:09.020 } 00:20:09.020 }, 00:20:09.020 { 00:20:09.020 "method": "bdev_nvme_set_hotplug", 00:20:09.020 "params": { 00:20:09.020 "period_us": 100000, 00:20:09.020 "enable": false 00:20:09.020 } 00:20:09.020 }, 00:20:09.020 { 00:20:09.020 "method": "bdev_enable_histogram", 00:20:09.020 "params": { 00:20:09.020 "name": "nvme0n1", 00:20:09.020 "enable": true 00:20:09.020 } 00:20:09.020 }, 00:20:09.020 { 00:20:09.020 "method": "bdev_wait_for_examine" 00:20:09.020 } 00:20:09.020 ] 00:20:09.020 }, 00:20:09.020 { 00:20:09.020 "subsystem": "nbd", 00:20:09.020 "config": [] 00:20:09.020 } 00:20:09.020 ] 00:20:09.020 }' 00:20:09.020 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 3415961 00:20:09.020 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3415961 ']' 00:20:09.020 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3415961 00:20:09.020 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:09.020 
14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:09.020 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3415961 00:20:09.020 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:09.020 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:09.020 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3415961' 00:20:09.020 killing process with pid 3415961 00:20:09.020 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3415961 00:20:09.020 Received shutdown signal, test time was about 1.000000 seconds 00:20:09.020 00:20:09.020 Latency(us) 00:20:09.020 [2024-10-14T12:33:49.747Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:09.020 [2024-10-14T12:33:49.747Z] =================================================================================================================== 00:20:09.020 [2024-10-14T12:33:49.747Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:09.020 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3415961 00:20:09.282 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 3415853 00:20:09.282 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3415853 ']' 00:20:09.282 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3415853 00:20:09.282 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:09.282 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:09.282 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps 
--no-headers -o comm= 3415853 00:20:09.282 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:09.282 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:09.282 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3415853' 00:20:09.282 killing process with pid 3415853 00:20:09.282 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3415853 00:20:09.282 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3415853 00:20:09.282 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:20:09.282 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:09.282 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:09.282 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:20:09.282 "subsystems": [ 00:20:09.282 { 00:20:09.282 "subsystem": "keyring", 00:20:09.282 "config": [ 00:20:09.282 { 00:20:09.282 "method": "keyring_file_add_key", 00:20:09.282 "params": { 00:20:09.282 "name": "key0", 00:20:09.282 "path": "/tmp/tmp.po0y6q2UhN" 00:20:09.282 } 00:20:09.282 } 00:20:09.282 ] 00:20:09.282 }, 00:20:09.282 { 00:20:09.282 "subsystem": "iobuf", 00:20:09.282 "config": [ 00:20:09.282 { 00:20:09.282 "method": "iobuf_set_options", 00:20:09.282 "params": { 00:20:09.282 "small_pool_count": 8192, 00:20:09.282 "large_pool_count": 1024, 00:20:09.282 "small_bufsize": 8192, 00:20:09.282 "large_bufsize": 135168 00:20:09.282 } 00:20:09.282 } 00:20:09.282 ] 00:20:09.282 }, 00:20:09.282 { 00:20:09.282 "subsystem": "sock", 00:20:09.282 "config": [ 00:20:09.282 { 00:20:09.282 "method": "sock_set_default_impl", 00:20:09.282 "params": { 00:20:09.282 "impl_name": "posix" 
00:20:09.282 } 00:20:09.282 }, 00:20:09.282 { 00:20:09.282 "method": "sock_impl_set_options", 00:20:09.282 "params": { 00:20:09.282 "impl_name": "ssl", 00:20:09.282 "recv_buf_size": 4096, 00:20:09.282 "send_buf_size": 4096, 00:20:09.282 "enable_recv_pipe": true, 00:20:09.282 "enable_quickack": false, 00:20:09.282 "enable_placement_id": 0, 00:20:09.282 "enable_zerocopy_send_server": true, 00:20:09.282 "enable_zerocopy_send_client": false, 00:20:09.282 "zerocopy_threshold": 0, 00:20:09.282 "tls_version": 0, 00:20:09.282 "enable_ktls": false 00:20:09.282 } 00:20:09.282 }, 00:20:09.282 { 00:20:09.282 "method": "sock_impl_set_options", 00:20:09.282 "params": { 00:20:09.282 "impl_name": "posix", 00:20:09.282 "recv_buf_size": 2097152, 00:20:09.282 "send_buf_size": 2097152, 00:20:09.282 "enable_recv_pipe": true, 00:20:09.282 "enable_quickack": false, 00:20:09.282 "enable_placement_id": 0, 00:20:09.282 "enable_zerocopy_send_server": true, 00:20:09.282 "enable_zerocopy_send_client": false, 00:20:09.282 "zerocopy_threshold": 0, 00:20:09.282 "tls_version": 0, 00:20:09.282 "enable_ktls": false 00:20:09.282 } 00:20:09.282 } 00:20:09.282 ] 00:20:09.282 }, 00:20:09.282 { 00:20:09.282 "subsystem": "vmd", 00:20:09.282 "config": [] 00:20:09.282 }, 00:20:09.282 { 00:20:09.282 "subsystem": "accel", 00:20:09.282 "config": [ 00:20:09.282 { 00:20:09.282 "method": "accel_set_options", 00:20:09.282 "params": { 00:20:09.282 "small_cache_size": 128, 00:20:09.282 "large_cache_size": 16, 00:20:09.282 "task_count": 2048, 00:20:09.282 "sequence_count": 2048, 00:20:09.282 "buf_count": 2048 00:20:09.282 } 00:20:09.282 } 00:20:09.282 ] 00:20:09.282 }, 00:20:09.282 { 00:20:09.282 "subsystem": "bdev", 00:20:09.282 "config": [ 00:20:09.282 { 00:20:09.282 "method": "bdev_set_options", 00:20:09.282 "params": { 00:20:09.282 "bdev_io_pool_size": 65535, 00:20:09.282 "bdev_io_cache_size": 256, 00:20:09.282 "bdev_auto_examine": true, 00:20:09.282 "iobuf_small_cache_size": 128, 00:20:09.282 
"iobuf_large_cache_size": 16 00:20:09.282 } 00:20:09.282 }, 00:20:09.282 { 00:20:09.282 "method": "bdev_raid_set_options", 00:20:09.282 "params": { 00:20:09.282 "process_window_size_kb": 1024, 00:20:09.282 "process_max_bandwidth_mb_sec": 0 00:20:09.282 } 00:20:09.282 }, 00:20:09.282 { 00:20:09.282 "method": "bdev_iscsi_set_options", 00:20:09.282 "params": { 00:20:09.282 "timeout_sec": 30 00:20:09.282 } 00:20:09.282 }, 00:20:09.282 { 00:20:09.282 "method": "bdev_nvme_set_options", 00:20:09.282 "params": { 00:20:09.282 "action_on_timeout": "none", 00:20:09.282 "timeout_us": 0, 00:20:09.282 "timeout_admin_us": 0, 00:20:09.282 "keep_alive_timeout_ms": 10000, 00:20:09.282 "arbitration_burst": 0, 00:20:09.282 "low_priority_weight": 0, 00:20:09.282 "medium_priority_weight": 0, 00:20:09.282 "high_priority_weight": 0, 00:20:09.282 "nvme_adminq_poll_period_us": 10000, 00:20:09.282 "nvme_ioq_poll_period_us": 0, 00:20:09.282 "io_queue_requests": 0, 00:20:09.282 "delay_cmd_submit": true, 00:20:09.282 "transport_retry_count": 4, 00:20:09.282 "bdev_retry_count": 3, 00:20:09.282 "transport_ack_timeout": 0, 00:20:09.282 "ctrlr_loss_timeout_sec": 0, 00:20:09.282 "reconnect_delay_sec": 0, 00:20:09.282 "fast_io_fail_timeout_sec": 0, 00:20:09.282 "disable_auto_failback": false, 00:20:09.282 "generate_uuids": false, 00:20:09.282 "transport_tos": 0, 00:20:09.282 "nvme_error_stat": false, 00:20:09.282 "rdma_srq_size": 0, 00:20:09.282 "io_path_stat": false, 00:20:09.282 "allow_accel_sequence": false, 00:20:09.282 "rdma_max_cq_size": 0, 00:20:09.282 "rdma_cm_event_timeout_ms": 0, 00:20:09.282 "dhchap_digests": [ 00:20:09.282 "sha256", 00:20:09.282 "sha384", 00:20:09.282 "sha512" 00:20:09.282 ], 00:20:09.282 "dhchap_dhgroups": [ 00:20:09.282 "null", 00:20:09.282 "ffdhe2048", 00:20:09.282 "ffdhe3072", 00:20:09.282 "ffdhe4096", 00:20:09.283 "ffdhe6144", 00:20:09.283 "ffdhe8192" 00:20:09.283 ] 00:20:09.283 } 00:20:09.283 }, 00:20:09.283 { 00:20:09.283 "method": "bdev_nvme_set_hotplug", 
00:20:09.283 "params": { 00:20:09.283 "period_us": 100000, 00:20:09.283 "enable": false 00:20:09.283 } 00:20:09.283 }, 00:20:09.283 { 00:20:09.283 "method": "bdev_malloc_create", 00:20:09.283 "params": { 00:20:09.283 "name": "malloc0", 00:20:09.283 "num_blocks": 8192, 00:20:09.283 "block_size": 4096, 00:20:09.283 "physical_block_size": 4096, 00:20:09.283 "uuid": "e8eb29fd-cf61-433a-b057-8f07706f7f5d", 00:20:09.283 "optimal_io_boundary": 0, 00:20:09.283 "md_size": 0, 00:20:09.283 "dif_type": 0, 00:20:09.283 "dif_is_head_of_md": false, 00:20:09.283 "dif_pi_format": 0 00:20:09.283 } 00:20:09.283 }, 00:20:09.283 { 00:20:09.283 "method": "bdev_wait_for_examine" 00:20:09.283 } 00:20:09.283 ] 00:20:09.283 }, 00:20:09.283 { 00:20:09.283 "subsystem": "nbd", 00:20:09.283 "config": [] 00:20:09.283 }, 00:20:09.283 { 00:20:09.283 "subsystem": "scheduler", 00:20:09.283 "config": [ 00:20:09.283 { 00:20:09.283 "method": "framework_set_scheduler", 00:20:09.283 "params": { 00:20:09.283 "name": "static" 00:20:09.283 } 00:20:09.283 } 00:20:09.283 ] 00:20:09.283 }, 00:20:09.283 { 00:20:09.283 "subsystem": "nvmf", 00:20:09.283 "config": [ 00:20:09.283 { 00:20:09.283 "method": "nvmf_set_config", 00:20:09.283 "params": { 00:20:09.283 "discovery_filter": "match_any", 00:20:09.283 "admin_cmd_passthru": { 00:20:09.283 "identify_ctrlr": false 00:20:09.283 }, 00:20:09.283 "dhchap_digests": [ 00:20:09.283 "sha256", 00:20:09.283 "sha384", 00:20:09.283 "sha512" 00:20:09.283 ], 00:20:09.283 "dhchap_dhgroups": [ 00:20:09.283 "null", 00:20:09.283 "ffdhe2048", 00:20:09.283 "ffdhe3072", 00:20:09.283 "ffdhe4096", 00:20:09.283 "ffdhe6144", 00:20:09.283 "ffdhe8192" 00:20:09.283 ] 00:20:09.283 } 00:20:09.283 }, 00:20:09.283 { 00:20:09.283 "method": "nvmf_set_max_subsystems", 00:20:09.283 "params": { 00:20:09.283 "max_subsystems": 1024 00:20:09.283 } 00:20:09.283 }, 00:20:09.283 { 00:20:09.283 "method": "nvmf_set_crdt", 00:20:09.283 "params": { 00:20:09.283 "crdt1": 0, 00:20:09.283 "crdt2": 0, 00:20:09.283 
"crdt3": 0 00:20:09.283 } 00:20:09.283 }, 00:20:09.283 { 00:20:09.283 "method": "nvmf_create_transport", 00:20:09.283 "params": { 00:20:09.283 "trtype": "TCP", 00:20:09.283 "max_queue_depth": 128, 00:20:09.283 "max_io_qpairs_per_ctrlr": 127, 00:20:09.283 "in_capsule_data_size": 4096, 00:20:09.283 "max_io_size": 131072, 00:20:09.283 "io_unit_size": 131072, 00:20:09.283 "max_aq_depth": 128, 00:20:09.283 "num_shared_buffers": 511, 00:20:09.283 "buf_cache_size": 4294967295, 00:20:09.283 "dif_insert_or_strip": false, 00:20:09.283 "zcopy": false, 00:20:09.283 "c2h_success": false, 00:20:09.283 "sock_priority": 0, 00:20:09.283 "abort_timeout_sec": 1, 00:20:09.283 "ack_timeout": 0, 00:20:09.283 "data_wr_pool_size": 0 00:20:09.283 } 00:20:09.283 }, 00:20:09.283 { 00:20:09.283 "method": "nvmf_create_subsystem", 00:20:09.283 "params": { 00:20:09.283 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:09.283 "allow_any_host": false, 00:20:09.283 "serial_number": "00000000000000000000", 00:20:09.283 "model_number": "SPDK bdev Controller", 00:20:09.283 "max_namespaces": 32, 00:20:09.283 "min_cntlid": 1, 00:20:09.283 "max_cntlid": 65519, 00:20:09.283 "ana_reporting": false 00:20:09.283 } 00:20:09.283 }, 00:20:09.283 { 00:20:09.283 "method": "nvmf_subsystem_add_host", 00:20:09.283 "params": { 00:20:09.283 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:09.283 "host": "nqn.2016-06.io.spdk:host1", 00:20:09.283 "psk": "key0" 00:20:09.283 } 00:20:09.283 }, 00:20:09.283 { 00:20:09.283 "method": "nvmf_subsystem_add_ns", 00:20:09.283 "params": { 00:20:09.283 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:09.283 "namespace": { 00:20:09.283 "nsid": 1, 00:20:09.283 "bdev_name": "malloc0", 00:20:09.283 "nguid": "E8EB29FDCF61433AB0578F07706F7F5D", 00:20:09.283 "uuid": "e8eb29fd-cf61-433a-b057-8f07706f7f5d", 00:20:09.283 "no_auto_visible": false 00:20:09.283 } 00:20:09.283 } 00:20:09.283 }, 00:20:09.283 { 00:20:09.283 "method": "nvmf_subsystem_add_listener", 00:20:09.283 "params": { 00:20:09.283 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:20:09.283 "listen_address": { 00:20:09.283 "trtype": "TCP", 00:20:09.283 "adrfam": "IPv4", 00:20:09.283 "traddr": "10.0.0.2", 00:20:09.283 "trsvcid": "4420" 00:20:09.283 }, 00:20:09.283 "secure_channel": false, 00:20:09.283 "sock_impl": "ssl" 00:20:09.283 } 00:20:09.283 } 00:20:09.283 ] 00:20:09.283 } 00:20:09.283 ] 00:20:09.283 }' 00:20:09.283 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:09.283 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=3416567 00:20:09.283 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 3416567 00:20:09.283 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:20:09.283 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3416567 ']' 00:20:09.283 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:09.283 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:09.283 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:09.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:09.283 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:09.283 14:33:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:09.544 [2024-10-14 14:33:50.018205] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 
00:20:09.544 [2024-10-14 14:33:50.018260] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:09.544 [2024-10-14 14:33:50.085238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.544 [2024-10-14 14:33:50.120350] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:09.544 [2024-10-14 14:33:50.120385] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:09.544 [2024-10-14 14:33:50.120393] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:09.544 [2024-10-14 14:33:50.120400] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:09.544 [2024-10-14 14:33:50.120405] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:09.544 [2024-10-14 14:33:50.121016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:09.805 [2024-10-14 14:33:50.319492] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:09.805 [2024-10-14 14:33:50.351506] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:09.805 [2024-10-14 14:33:50.351731] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:10.378 14:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:10.378 14:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:10.378 14:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:10.378 14:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:10.378 14:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:10.378 14:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:10.378 14:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=3416911 00:20:10.378 14:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 3416911 /var/tmp/bdevperf.sock 00:20:10.378 14:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3416911 ']' 00:20:10.378 14:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:10.378 14:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:10.378 14:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:10.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:10.378 14:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:20:10.378 14:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:10.378 14:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:10.378 14:33:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:20:10.378 "subsystems": [ 00:20:10.378 { 00:20:10.378 "subsystem": "keyring", 00:20:10.378 "config": [ 00:20:10.378 { 00:20:10.378 "method": "keyring_file_add_key", 00:20:10.378 "params": { 00:20:10.378 "name": "key0", 00:20:10.378 "path": "/tmp/tmp.po0y6q2UhN" 00:20:10.378 } 00:20:10.378 } 00:20:10.378 ] 00:20:10.378 }, 00:20:10.378 { 00:20:10.378 "subsystem": "iobuf", 00:20:10.378 "config": [ 00:20:10.378 { 00:20:10.378 "method": "iobuf_set_options", 00:20:10.378 "params": { 00:20:10.378 "small_pool_count": 8192, 00:20:10.378 "large_pool_count": 1024, 00:20:10.378 "small_bufsize": 8192, 00:20:10.378 "large_bufsize": 135168 00:20:10.378 } 00:20:10.378 } 00:20:10.378 ] 00:20:10.378 }, 00:20:10.378 { 00:20:10.378 "subsystem": "sock", 00:20:10.378 "config": [ 00:20:10.378 { 00:20:10.378 "method": "sock_set_default_impl", 00:20:10.378 "params": { 00:20:10.378 "impl_name": "posix" 00:20:10.378 } 00:20:10.378 }, 00:20:10.378 { 00:20:10.378 "method": "sock_impl_set_options", 00:20:10.378 "params": { 00:20:10.378 "impl_name": "ssl", 00:20:10.378 "recv_buf_size": 4096, 00:20:10.378 "send_buf_size": 4096, 00:20:10.378 "enable_recv_pipe": true, 00:20:10.378 "enable_quickack": false, 00:20:10.378 "enable_placement_id": 0, 00:20:10.378 "enable_zerocopy_send_server": true, 00:20:10.378 "enable_zerocopy_send_client": false, 00:20:10.378 
"zerocopy_threshold": 0, 00:20:10.378 "tls_version": 0, 00:20:10.378 "enable_ktls": false 00:20:10.378 } 00:20:10.378 }, 00:20:10.378 { 00:20:10.378 "method": "sock_impl_set_options", 00:20:10.378 "params": { 00:20:10.378 "impl_name": "posix", 00:20:10.378 "recv_buf_size": 2097152, 00:20:10.378 "send_buf_size": 2097152, 00:20:10.378 "enable_recv_pipe": true, 00:20:10.378 "enable_quickack": false, 00:20:10.378 "enable_placement_id": 0, 00:20:10.378 "enable_zerocopy_send_server": true, 00:20:10.378 "enable_zerocopy_send_client": false, 00:20:10.378 "zerocopy_threshold": 0, 00:20:10.378 "tls_version": 0, 00:20:10.378 "enable_ktls": false 00:20:10.378 } 00:20:10.378 } 00:20:10.378 ] 00:20:10.378 }, 00:20:10.378 { 00:20:10.378 "subsystem": "vmd", 00:20:10.378 "config": [] 00:20:10.378 }, 00:20:10.378 { 00:20:10.378 "subsystem": "accel", 00:20:10.378 "config": [ 00:20:10.378 { 00:20:10.378 "method": "accel_set_options", 00:20:10.378 "params": { 00:20:10.378 "small_cache_size": 128, 00:20:10.378 "large_cache_size": 16, 00:20:10.378 "task_count": 2048, 00:20:10.378 "sequence_count": 2048, 00:20:10.378 "buf_count": 2048 00:20:10.378 } 00:20:10.378 } 00:20:10.378 ] 00:20:10.378 }, 00:20:10.378 { 00:20:10.378 "subsystem": "bdev", 00:20:10.378 "config": [ 00:20:10.378 { 00:20:10.378 "method": "bdev_set_options", 00:20:10.378 "params": { 00:20:10.378 "bdev_io_pool_size": 65535, 00:20:10.378 "bdev_io_cache_size": 256, 00:20:10.378 "bdev_auto_examine": true, 00:20:10.378 "iobuf_small_cache_size": 128, 00:20:10.378 "iobuf_large_cache_size": 16 00:20:10.378 } 00:20:10.378 }, 00:20:10.378 { 00:20:10.378 "method": "bdev_raid_set_options", 00:20:10.378 "params": { 00:20:10.378 "process_window_size_kb": 1024, 00:20:10.378 "process_max_bandwidth_mb_sec": 0 00:20:10.378 } 00:20:10.378 }, 00:20:10.378 { 00:20:10.378 "method": "bdev_iscsi_set_options", 00:20:10.378 "params": { 00:20:10.378 "timeout_sec": 30 00:20:10.378 } 00:20:10.378 }, 00:20:10.378 { 00:20:10.378 "method": 
"bdev_nvme_set_options", 00:20:10.378 "params": { 00:20:10.378 "action_on_timeout": "none", 00:20:10.378 "timeout_us": 0, 00:20:10.379 "timeout_admin_us": 0, 00:20:10.379 "keep_alive_timeout_ms": 10000, 00:20:10.379 "arbitration_burst": 0, 00:20:10.379 "low_priority_weight": 0, 00:20:10.379 "medium_priority_weight": 0, 00:20:10.379 "high_priority_weight": 0, 00:20:10.379 "nvme_adminq_poll_period_us": 10000, 00:20:10.379 "nvme_ioq_poll_period_us": 0, 00:20:10.379 "io_queue_requests": 512, 00:20:10.379 "delay_cmd_submit": true, 00:20:10.379 "transport_retry_count": 4, 00:20:10.379 "bdev_retry_count": 3, 00:20:10.379 "transport_ack_timeout": 0, 00:20:10.379 "ctrlr_loss_timeout_sec": 0, 00:20:10.379 "reconnect_delay_sec": 0, 00:20:10.379 "fast_io_fail_timeout_sec": 0, 00:20:10.379 "disable_auto_failback": false, 00:20:10.379 "generate_uuids": false, 00:20:10.379 "transport_tos": 0, 00:20:10.379 "nvme_error_stat": false, 00:20:10.379 "rdma_srq_size": 0, 00:20:10.379 "io_path_stat": false, 00:20:10.379 "allow_accel_sequence": false, 00:20:10.379 "rdma_max_cq_size": 0, 00:20:10.379 "rdma_cm_event_timeout_ms": 0, 00:20:10.379 "dhchap_digests": [ 00:20:10.379 "sha256", 00:20:10.379 "sha384", 00:20:10.379 "sha512" 00:20:10.379 ], 00:20:10.379 "dhchap_dhgroups": [ 00:20:10.379 "null", 00:20:10.379 "ffdhe2048", 00:20:10.379 "ffdhe3072", 00:20:10.379 "ffdhe4096", 00:20:10.379 "ffdhe6144", 00:20:10.379 "ffdhe8192" 00:20:10.379 ] 00:20:10.379 } 00:20:10.379 }, 00:20:10.379 { 00:20:10.379 "method": "bdev_nvme_attach_controller", 00:20:10.379 "params": { 00:20:10.379 "name": "nvme0", 00:20:10.379 "trtype": "TCP", 00:20:10.379 "adrfam": "IPv4", 00:20:10.379 "traddr": "10.0.0.2", 00:20:10.379 "trsvcid": "4420", 00:20:10.379 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:10.379 "prchk_reftag": false, 00:20:10.379 "prchk_guard": false, 00:20:10.379 "ctrlr_loss_timeout_sec": 0, 00:20:10.379 "reconnect_delay_sec": 0, 00:20:10.379 "fast_io_fail_timeout_sec": 0, 00:20:10.379 "psk": "key0", 
00:20:10.379 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:10.379 "hdgst": false, 00:20:10.379 "ddgst": false, 00:20:10.379 "multipath": "multipath" 00:20:10.379 } 00:20:10.379 }, 00:20:10.379 { 00:20:10.379 "method": "bdev_nvme_set_hotplug", 00:20:10.379 "params": { 00:20:10.379 "period_us": 100000, 00:20:10.379 "enable": false 00:20:10.379 } 00:20:10.379 }, 00:20:10.379 { 00:20:10.379 "method": "bdev_enable_histogram", 00:20:10.379 "params": { 00:20:10.379 "name": "nvme0n1", 00:20:10.379 "enable": true 00:20:10.379 } 00:20:10.379 }, 00:20:10.379 { 00:20:10.379 "method": "bdev_wait_for_examine" 00:20:10.379 } 00:20:10.379 ] 00:20:10.379 }, 00:20:10.379 { 00:20:10.379 "subsystem": "nbd", 00:20:10.379 "config": [] 00:20:10.379 } 00:20:10.379 ] 00:20:10.379 }' 00:20:10.379 [2024-10-14 14:33:50.893214] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:20:10.379 [2024-10-14 14:33:50.893263] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3416911 ] 00:20:10.379 [2024-10-14 14:33:50.970145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:10.379 [2024-10-14 14:33:50.999992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:10.641 [2024-10-14 14:33:51.135388] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:11.211 14:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:11.211 14:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:11.211 14:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:11.211 14:33:51 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:20:11.211 14:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.211 14:33:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:11.211 Running I/O for 1 seconds... 00:20:12.594 5336.00 IOPS, 20.84 MiB/s 00:20:12.594 Latency(us) 00:20:12.594 [2024-10-14T12:33:53.321Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:12.594 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:12.594 Verification LBA range: start 0x0 length 0x2000 00:20:12.594 nvme0n1 : 1.02 5377.52 21.01 0.00 0.00 23606.99 5379.41 25340.59 00:20:12.594 [2024-10-14T12:33:53.321Z] =================================================================================================================== 00:20:12.594 [2024-10-14T12:33:53.321Z] Total : 5377.52 21.01 0.00 0.00 23606.99 5379.41 25340.59 00:20:12.594 { 00:20:12.594 "results": [ 00:20:12.594 { 00:20:12.594 "job": "nvme0n1", 00:20:12.594 "core_mask": "0x2", 00:20:12.594 "workload": "verify", 00:20:12.594 "status": "finished", 00:20:12.594 "verify_range": { 00:20:12.594 "start": 0, 00:20:12.594 "length": 8192 00:20:12.594 }, 00:20:12.594 "queue_depth": 128, 00:20:12.594 "io_size": 4096, 00:20:12.594 "runtime": 1.016081, 00:20:12.594 "iops": 5377.524035977447, 00:20:12.594 "mibps": 21.005953265536903, 00:20:12.594 "io_failed": 0, 00:20:12.594 "io_timeout": 0, 00:20:12.594 "avg_latency_us": 23606.988150317226, 00:20:12.594 "min_latency_us": 5379.413333333333, 00:20:12.594 "max_latency_us": 25340.586666666666 00:20:12.594 } 00:20:12.594 ], 00:20:12.594 "core_count": 1 00:20:12.594 } 00:20:12.594 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:20:12.594 14:33:52 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:20:12.594 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:20:12.594 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:20:12.594 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:20:12.594 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:20:12.594 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:12.594 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:20:12.594 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:20:12.594 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:20:12.594 14:33:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:12.594 nvmf_trace.0 00:20:12.594 14:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:20:12.594 14:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 3416911 00:20:12.594 14:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3416911 ']' 00:20:12.594 14:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3416911 00:20:12.594 14:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:12.594 14:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:12.594 14:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o 
comm= 3416911 00:20:12.594 14:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:12.594 14:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:12.594 14:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3416911' 00:20:12.594 killing process with pid 3416911 00:20:12.594 14:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3416911 00:20:12.594 Received shutdown signal, test time was about 1.000000 seconds 00:20:12.594 00:20:12.594 Latency(us) 00:20:12.594 [2024-10-14T12:33:53.321Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:12.594 [2024-10-14T12:33:53.321Z] =================================================================================================================== 00:20:12.594 [2024-10-14T12:33:53.321Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:12.594 14:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3416911 00:20:12.594 14:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:20:12.595 14:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@514 -- # nvmfcleanup 00:20:12.595 14:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:20:12.595 14:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:12.595 14:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:20:12.595 14:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:12.595 14:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:12.595 rmmod nvme_tcp 00:20:12.595 rmmod nvme_fabrics 00:20:12.595 rmmod nvme_keyring 00:20:12.595 14:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:20:12.595 14:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:20:12.595 14:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:20:12.595 14:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@515 -- # '[' -n 3416567 ']' 00:20:12.595 14:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # killprocess 3416567 00:20:12.595 14:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3416567 ']' 00:20:12.595 14:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3416567 00:20:12.595 14:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:12.595 14:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:12.595 14:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3416567 00:20:12.855 14:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:12.855 14:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:12.855 14:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3416567' 00:20:12.855 killing process with pid 3416567 00:20:12.855 14:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3416567 00:20:12.855 14:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3416567 00:20:12.855 14:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:20:12.855 14:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:20:12.855 14:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:20:12.855 14:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@297 -- # iptr 00:20:12.855 14:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # iptables-save 00:20:12.855 14:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:20:12.855 14:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # iptables-restore 00:20:12.855 14:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:12.855 14:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:12.855 14:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:12.855 14:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:12.855 14:33:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:15.403 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:15.403 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.RPRZuciqW3 /tmp/tmp.MYhGH3L4Y7 /tmp/tmp.po0y6q2UhN 00:20:15.403 00:20:15.403 real 1m21.954s 00:20:15.403 user 2m7.549s 00:20:15.403 sys 0m26.479s 00:20:15.403 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:15.403 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:15.403 ************************************ 00:20:15.403 END TEST nvmf_tls 00:20:15.403 ************************************ 00:20:15.403 14:33:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:15.403 14:33:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:15.403 14:33:55 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:20:15.403 14:33:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:15.403 ************************************ 00:20:15.403 START TEST nvmf_fips 00:20:15.403 ************************************ 00:20:15.403 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:15.403 * Looking for test storage... 00:20:15.403 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:15.403 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:15.403 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lcov --version 00:20:15.403 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:15.403 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:15.404 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:15.404 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:15.404 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:15.404 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:15.404 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:15.404 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:15.404 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:15.404 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:20:15.404 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:20:15.404 
14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:20:15.404 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:15.404 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:15.404 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:20:15.404 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:15.404 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:15.404 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:15.404 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:15.404 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:15.404 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:15.404 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:15.404 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:20:15.404 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:20:15.404 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:15.404 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:20:15.404 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:20:15.404 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:15.404 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:15.404 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:20:15.404 14:33:55 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:15.404 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:15.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:15.404 --rc genhtml_branch_coverage=1 00:20:15.404 --rc genhtml_function_coverage=1 00:20:15.404 --rc genhtml_legend=1 00:20:15.404 --rc geninfo_all_blocks=1 00:20:15.404 --rc geninfo_unexecuted_blocks=1 00:20:15.404 00:20:15.404 ' 00:20:15.404 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:15.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:15.404 --rc genhtml_branch_coverage=1 00:20:15.404 --rc genhtml_function_coverage=1 00:20:15.404 --rc genhtml_legend=1 00:20:15.404 --rc geninfo_all_blocks=1 00:20:15.404 --rc geninfo_unexecuted_blocks=1 00:20:15.404 00:20:15.404 ' 00:20:15.404 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:15.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:15.404 --rc genhtml_branch_coverage=1 00:20:15.404 --rc genhtml_function_coverage=1 00:20:15.404 --rc genhtml_legend=1 00:20:15.404 --rc geninfo_all_blocks=1 00:20:15.404 --rc geninfo_unexecuted_blocks=1 00:20:15.404 00:20:15.404 ' 00:20:15.404 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:15.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:15.404 --rc genhtml_branch_coverage=1 00:20:15.404 --rc genhtml_function_coverage=1 00:20:15.404 --rc genhtml_legend=1 00:20:15.404 --rc geninfo_all_blocks=1 00:20:15.404 --rc geninfo_unexecuted_blocks=1 00:20:15.404 00:20:15.404 ' 00:20:15.404 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:20:15.404 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:20:15.404 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:15.404 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:15.404 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:15.404 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:15.404 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:15.404 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:15.404 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:15.404 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:15.404 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:15.404 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:15.404 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:15.404 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:15.404 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:15.404 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:15.404 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:15.404 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:15.404 14:33:55 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:15.404 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:20:15.404 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:15.404 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:15.404 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:15.404 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.404 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.404 14:33:55 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.404 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:20:15.404 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.404 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:20:15.404 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:15.404 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:15.404 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:15.404 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:20:15.404 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:15.404 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:15.404 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:15.404 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:15.404 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:15.404 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:15.404 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:15.404 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:20:15.404 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:20:15.404 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:20:15.404 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:20:15.404 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:20:15.404 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:20:15.404 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:15.404 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:15.404 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:15.404 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:15.404 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:15.404 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@337 -- # read -ra ver2 00:20:15.404 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:20:15.404 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:20:15.404 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:20:15.405 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:15.405 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:15.405 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:20:15.405 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:15.405 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:15.405 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:20:15.405 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:15.405 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:15.405 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:15.405 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:20:15.405 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:20:15.405 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:15.405 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:15.405 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:15.405 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:20:15.405 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] 
)) 00:20:15.405 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:15.405 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:20:15.405 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:15.405 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:15.405 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:15.405 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:15.405 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:15.405 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:15.405 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:20:15.405 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:20:15.405 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:15.405 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:20:15.405 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:20:15.405 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:15.405 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:20:15.405 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:20:15.405 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:20:15.405 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:20:15.405 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:15.405 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:15.405 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:20:15.405 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:20:15.405 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:20:15.405 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:20:15.405 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:20:15.405 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:20:15.405 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:15.405 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:20:15.405 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:20:15.405 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:20:15.405 14:33:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:20:15.405 14:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:20:15.405 14:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:20:15.405 14:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:15.405 14:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:20:15.405 14:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:20:15.405 14:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:15.405 14:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:20:15.405 14:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:20:15.405 14:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:15.405 14:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:20:15.405 14:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:15.405 14:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@644 -- # type -P openssl 00:20:15.405 14:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:15.405 14:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:20:15.405 14:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:20:15.405 14:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:20:15.405 Error setting digest 00:20:15.405 404276237C7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:20:15.405 404276237C7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:20:15.405 14:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:20:15.405 14:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:15.405 14:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:15.405 14:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:15.405 14:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:20:15.405 14:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:20:15.405 14:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:15.405 14:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # prepare_net_devs 00:20:15.405 14:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@436 -- # local -g is_hw=no 00:20:15.405 14:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # remove_spdk_ns 00:20:15.405 14:33:56 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:15.405 14:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:15.405 14:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:15.405 14:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:20:15.405 14:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:20:15.405 14:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:20:15.405 14:33:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:23.654 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:23.654 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:23.654 Found net devices under 0000:31:00.0: cvl_0_0 00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 
00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:23.654 Found net devices under 0000:31:00.1: cvl_0_1 00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # is_hw=yes 00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:23.654 14:34:03 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:23.654 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:23.655 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:23.655 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:23.655 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:23.655 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:23.655 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:23.655 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:23.655 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:23.655 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:23.655 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:23.655 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:23.655 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.546 ms 00:20:23.655 00:20:23.655 --- 10.0.0.2 ping statistics --- 00:20:23.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:23.655 rtt min/avg/max/mdev = 0.546/0.546/0.546/0.000 ms 00:20:23.655 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:23.655 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:23.655 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:20:23.655 00:20:23.655 --- 10.0.0.1 ping statistics --- 00:20:23.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:23.655 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:20:23.655 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:23.655 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # return 0 00:20:23.655 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:20:23.655 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:23.655 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:20:23.655 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:20:23.655 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:23.655 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:20:23.655 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:20:23.655 14:34:03 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:20:23.655 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:23.655 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:23.655 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:23.655 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # nvmfpid=3421682 00:20:23.655 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # waitforlisten 3421682 00:20:23.655 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:23.655 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 3421682 ']' 00:20:23.655 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:23.655 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:23.655 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:23.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:23.655 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:23.655 14:34:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:23.655 [2024-10-14 14:34:03.596285] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 
00:20:23.655 [2024-10-14 14:34:03.596354] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:23.655 [2024-10-14 14:34:03.687016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:23.655 [2024-10-14 14:34:03.736186] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:23.655 [2024-10-14 14:34:03.736233] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:23.655 [2024-10-14 14:34:03.736242] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:23.655 [2024-10-14 14:34:03.736249] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:23.655 [2024-10-14 14:34:03.736255] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:23.655 [2024-10-14 14:34:03.737042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:23.950 14:34:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:23.950 14:34:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:20:23.950 14:34:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:23.950 14:34:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:23.950 14:34:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:23.950 14:34:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:23.950 14:34:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:20:23.950 14:34:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:23.950 14:34:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:20:23.950 14:34:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.c0O 00:20:23.950 14:34:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:23.950 14:34:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.c0O 00:20:23.950 14:34:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.c0O 00:20:23.950 14:34:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.c0O 00:20:23.950 14:34:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:23.950 [2024-10-14 14:34:04.594038] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:23.950 [2024-10-14 14:34:04.610032] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:23.950 [2024-10-14 14:34:04.610353] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:23.950 malloc0 00:20:24.224 14:34:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:24.224 14:34:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=3421893 00:20:24.224 14:34:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 3421893 /var/tmp/bdevperf.sock 00:20:24.224 14:34:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:24.224 14:34:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 3421893 ']' 00:20:24.224 14:34:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:24.224 14:34:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:24.224 14:34:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:24.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:24.224 14:34:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:24.224 14:34:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:24.224 [2024-10-14 14:34:04.751677] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 
00:20:24.224 [2024-10-14 14:34:04.751751] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3421893 ] 00:20:24.224 [2024-10-14 14:34:04.808865] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:24.224 [2024-10-14 14:34:04.846758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:24.795 14:34:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:24.795 14:34:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:20:25.055 14:34:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.c0O 00:20:25.055 14:34:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:25.317 [2024-10-14 14:34:05.842490] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:25.317 TLSTESTn1 00:20:25.317 14:34:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:25.317 Running I/O for 10 seconds... 
00:20:27.641 5456.00 IOPS, 21.31 MiB/s [2024-10-14T12:34:09.309Z] 5478.50 IOPS, 21.40 MiB/s [2024-10-14T12:34:10.253Z] 5530.00 IOPS, 21.60 MiB/s [2024-10-14T12:34:11.196Z] 5483.50 IOPS, 21.42 MiB/s [2024-10-14T12:34:12.139Z] 5468.80 IOPS, 21.36 MiB/s [2024-10-14T12:34:13.080Z] 5434.33 IOPS, 21.23 MiB/s [2024-10-14T12:34:14.465Z] 5406.14 IOPS, 21.12 MiB/s [2024-10-14T12:34:15.406Z] 5403.00 IOPS, 21.11 MiB/s [2024-10-14T12:34:16.349Z] 5398.56 IOPS, 21.09 MiB/s [2024-10-14T12:34:16.349Z] 5393.40 IOPS, 21.07 MiB/s 00:20:35.622 Latency(us) 00:20:35.622 [2024-10-14T12:34:16.349Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:35.622 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:35.622 Verification LBA range: start 0x0 length 0x2000 00:20:35.622 TLSTESTn1 : 10.01 5398.67 21.09 0.00 0.00 23678.65 5789.01 48496.64 00:20:35.622 [2024-10-14T12:34:16.349Z] =================================================================================================================== 00:20:35.622 [2024-10-14T12:34:16.349Z] Total : 5398.67 21.09 0.00 0.00 23678.65 5789.01 48496.64 00:20:35.622 { 00:20:35.622 "results": [ 00:20:35.622 { 00:20:35.622 "job": "TLSTESTn1", 00:20:35.622 "core_mask": "0x4", 00:20:35.622 "workload": "verify", 00:20:35.622 "status": "finished", 00:20:35.622 "verify_range": { 00:20:35.622 "start": 0, 00:20:35.622 "length": 8192 00:20:35.622 }, 00:20:35.622 "queue_depth": 128, 00:20:35.623 "io_size": 4096, 00:20:35.623 "runtime": 10.013754, 00:20:35.623 "iops": 5398.674662868691, 00:20:35.623 "mibps": 21.088572901830823, 00:20:35.623 "io_failed": 0, 00:20:35.623 "io_timeout": 0, 00:20:35.623 "avg_latency_us": 23678.650886961026, 00:20:35.623 "min_latency_us": 5789.013333333333, 00:20:35.623 "max_latency_us": 48496.64 00:20:35.623 } 00:20:35.623 ], 00:20:35.623 "core_count": 1 00:20:35.623 } 00:20:35.623 14:34:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:20:35.623 14:34:16 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:20:35.623 14:34:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:20:35.623 14:34:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:20:35.623 14:34:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:20:35.623 14:34:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:35.623 14:34:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:20:35.623 14:34:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:20:35.623 14:34:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:20:35.623 14:34:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:35.623 nvmf_trace.0 00:20:35.623 14:34:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:20:35.623 14:34:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3421893 00:20:35.623 14:34:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 3421893 ']' 00:20:35.623 14:34:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 3421893 00:20:35.623 14:34:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:20:35.623 14:34:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:35.623 14:34:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3421893 00:20:35.623 14:34:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:35.623 14:34:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:35.623 14:34:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3421893' 00:20:35.623 killing process with pid 3421893 00:20:35.623 14:34:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 3421893 00:20:35.623 Received shutdown signal, test time was about 10.000000 seconds 00:20:35.623 00:20:35.623 Latency(us) 00:20:35.623 [2024-10-14T12:34:16.350Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:35.623 [2024-10-14T12:34:16.350Z] =================================================================================================================== 00:20:35.623 [2024-10-14T12:34:16.350Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:35.623 14:34:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 3421893 00:20:35.884 14:34:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:20:35.884 14:34:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@514 -- # nvmfcleanup 00:20:35.884 14:34:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:20:35.884 14:34:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:35.884 14:34:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:20:35.884 14:34:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:35.884 14:34:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:35.884 rmmod nvme_tcp 00:20:35.884 rmmod nvme_fabrics 00:20:35.884 rmmod nvme_keyring 00:20:35.884 14:34:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:35.884 14:34:16 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:20:35.884 14:34:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:20:35.884 14:34:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@515 -- # '[' -n 3421682 ']' 00:20:35.884 14:34:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # killprocess 3421682 00:20:35.884 14:34:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 3421682 ']' 00:20:35.884 14:34:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 3421682 00:20:35.884 14:34:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:20:35.884 14:34:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:35.884 14:34:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3421682 00:20:35.884 14:34:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:35.884 14:34:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:35.884 14:34:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3421682' 00:20:35.884 killing process with pid 3421682 00:20:35.884 14:34:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 3421682 00:20:35.884 14:34:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 3421682 00:20:35.884 14:34:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:20:35.884 14:34:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:20:35.884 14:34:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:20:35.884 14:34:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # 
iptr 00:20:35.884 14:34:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # iptables-save 00:20:35.884 14:34:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:20:35.884 14:34:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # iptables-restore 00:20:35.884 14:34:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:35.884 14:34:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:35.884 14:34:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:35.884 14:34:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:35.884 14:34:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:38.432 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:38.432 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.c0O 00:20:38.432 00:20:38.432 real 0m23.024s 00:20:38.432 user 0m24.073s 00:20:38.432 sys 0m10.117s 00:20:38.432 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:38.432 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:38.432 ************************************ 00:20:38.432 END TEST nvmf_fips 00:20:38.432 ************************************ 00:20:38.432 14:34:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:38.432 14:34:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:38.432 14:34:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- 
# xtrace_disable 00:20:38.432 14:34:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:38.432 ************************************ 00:20:38.432 START TEST nvmf_control_msg_list 00:20:38.432 ************************************ 00:20:38.432 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:38.432 * Looking for test storage... 00:20:38.432 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:38.432 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:38.432 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lcov --version 00:20:38.432 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:38.432 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:38.432 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:38.432 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:38.432 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:38.432 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:20:38.432 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:20:38.432 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:20:38.432 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:20:38.432 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@338 -- # local 'op=<' 00:20:38.432 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:20:38.432 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:20:38.432 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:38.432 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:20:38.432 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:20:38.432 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:38.432 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:38.432 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:20:38.432 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:20:38.432 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:38.432 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:20:38.432 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:20:38.432 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:20:38.432 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:20:38.432 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:38.432 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:20:38.432 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 
00:20:38.432 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:38.432 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:38.432 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:20:38.432 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:38.432 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:38.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:38.432 --rc genhtml_branch_coverage=1 00:20:38.432 --rc genhtml_function_coverage=1 00:20:38.432 --rc genhtml_legend=1 00:20:38.432 --rc geninfo_all_blocks=1 00:20:38.432 --rc geninfo_unexecuted_blocks=1 00:20:38.432 00:20:38.432 ' 00:20:38.432 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:38.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:38.432 --rc genhtml_branch_coverage=1 00:20:38.432 --rc genhtml_function_coverage=1 00:20:38.432 --rc genhtml_legend=1 00:20:38.432 --rc geninfo_all_blocks=1 00:20:38.432 --rc geninfo_unexecuted_blocks=1 00:20:38.432 00:20:38.432 ' 00:20:38.432 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:38.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:38.432 --rc genhtml_branch_coverage=1 00:20:38.432 --rc genhtml_function_coverage=1 00:20:38.432 --rc genhtml_legend=1 00:20:38.432 --rc geninfo_all_blocks=1 00:20:38.432 --rc geninfo_unexecuted_blocks=1 00:20:38.432 00:20:38.432 ' 00:20:38.432 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:38.432 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:38.432 --rc genhtml_branch_coverage=1 00:20:38.432 --rc genhtml_function_coverage=1 00:20:38.432 --rc genhtml_legend=1 00:20:38.432 --rc geninfo_all_blocks=1 00:20:38.432 --rc geninfo_unexecuted_blocks=1 00:20:38.432 00:20:38.432 ' 00:20:38.432 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:38.432 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:20:38.432 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:38.432 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:38.432 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:38.432 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:38.432 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:38.432 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:38.432 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:38.432 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:38.432 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:38.432 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:38.432 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:38.433 14:34:18 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:38.433 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:38.433 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:38.433 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:38.433 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:38.433 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:38.433 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:20:38.433 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:38.433 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:38.433 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:38.433 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.433 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.433 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.433 14:34:18 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:20:38.433 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.433 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:20:38.433 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:38.433 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:38.433 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:38.433 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:38.433 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:38.433 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:38.433 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:38.433 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:38.433 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:38.433 14:34:18 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:38.433 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:20:38.433 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:20:38.433 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:38.433 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # prepare_net_devs 00:20:38.433 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@436 -- # local -g is_hw=no 00:20:38.433 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # remove_spdk_ns 00:20:38.433 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:38.433 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:38.433 14:34:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:38.433 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:20:38.433 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:20:38.433 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:20:38.433 14:34:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:46.583 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:46.583 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:20:46.583 14:34:25 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:46.583 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:46.583 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:46.583 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:46.583 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:46.583 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:20:46.583 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:46.583 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:20:46.583 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:20:46.583 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:20:46.583 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:20:46.583 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:20:46.583 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:20:46.583 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:46.583 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:46.583 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:46.583 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:46.583 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:46.583 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:46.583 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:46.583 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:46.583 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:46.583 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:46.583 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:46.583 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:46.583 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:46.583 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:46.583 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:46.583 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:46.583 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:46.583 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:46.583 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:20:46.583 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:46.583 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:46.583 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:46.583 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:46.583 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:46.583 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:46.583 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:46.583 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:46.583 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:46.583 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:46.583 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:46.583 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:46.583 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:46.583 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:46.583 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:46.583 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:46.583 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:46.583 14:34:25 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:46.583 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:46.583 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:46.583 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:46.583 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:46.583 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:46.583 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:46.583 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:46.583 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:46.583 Found net devices under 0000:31:00.0: cvl_0_0 00:20:46.583 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:46.583 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:46.583 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:46.583 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:46.583 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:46.583 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:46.583 14:34:25 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:46.583 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:46.583 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:46.583 Found net devices under 0000:31:00.1: cvl_0_1 00:20:46.583 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:46.583 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:20:46.583 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # is_hw=yes 00:20:46.583 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:20:46.583 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:20:46.583 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:20:46.583 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:46.583 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:46.583 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:46.583 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:46.583 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:46.583 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:46.583 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:46.583 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:46.583 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:46.583 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:46.583 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:46.583 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:46.584 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:46.584 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:46.584 14:34:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:46.584 14:34:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:46.584 14:34:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:46.584 14:34:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:46.584 14:34:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:46.584 14:34:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:46.584 14:34:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:46.584 14:34:26 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:46.584 14:34:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:46.584 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:46.584 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.613 ms 00:20:46.584 00:20:46.584 --- 10.0.0.2 ping statistics --- 00:20:46.584 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:46.584 rtt min/avg/max/mdev = 0.613/0.613/0.613/0.000 ms 00:20:46.584 14:34:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:46.584 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:46.584 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:20:46.584 00:20:46.584 --- 10.0.0.1 ping statistics --- 00:20:46.584 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:46.584 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:20:46.584 14:34:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:46.584 14:34:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@448 -- # return 0 00:20:46.584 14:34:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:20:46.584 14:34:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:46.584 14:34:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:20:46.584 14:34:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:20:46.584 14:34:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:20:46.584 14:34:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:20:46.584 14:34:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:20:46.584 14:34:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:20:46.584 14:34:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:46.584 14:34:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:46.584 14:34:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:46.584 14:34:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # nvmfpid=3428457 00:20:46.584 14:34:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # waitforlisten 3428457 00:20:46.584 14:34:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@831 -- # '[' -z 3428457 ']' 00:20:46.584 14:34:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:46.584 14:34:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:46.584 14:34:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:46.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
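The `nvmf_tcp_init` sequence traced above (interface flush, namespace creation, addressing, link-up, firewall rule, ping checks) can be condensed into a short sketch. The device names `cvl_0_0`/`cvl_0_1`, the namespace name, and the `10.0.0.0/24` addresses are taken from this log; the `run` dry-run wrapper is an addition of this sketch so it can be read and tested without root or the actual NICs:

```shell
#!/usr/bin/env bash
# Sketch of the target-side network setup performed by nvmf_tcp_init:
# move one NIC port into a private namespace and address both ends.
set -euo pipefail

NS=cvl_0_0_ns_spdk          # namespace holding the target-side port
TGT_IF=cvl_0_0              # port used by the NVMe-oF target
INI_IF=cvl_0_1              # port used by the initiator (host side)

run() { echo "+ $*"; }      # dry run: prints each command; replace body with "$@" on a real host

run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$INI_IF"
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
```

With the real commands in place, the two `ping -c 1` probes in the log (host to 10.0.0.2, namespace back to 10.0.0.1) verify the path before the target starts.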
00:20:46.584 14:34:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:46.584 14:34:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:46.584 14:34:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:46.584 [2024-10-14 14:34:26.378448] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:20:46.584 [2024-10-14 14:34:26.378502] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:46.584 [2024-10-14 14:34:26.445452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:46.584 [2024-10-14 14:34:26.480361] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:46.584 [2024-10-14 14:34:26.480393] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:46.584 [2024-10-14 14:34:26.480401] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:46.584 [2024-10-14 14:34:26.480408] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:46.584 [2024-10-14 14:34:26.480414] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:46.584 [2024-10-14 14:34:26.480979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:46.584 14:34:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:46.584 14:34:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # return 0 00:20:46.584 14:34:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:46.584 14:34:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:46.584 14:34:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:46.584 14:34:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:46.584 14:34:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:20:46.584 14:34:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:46.584 14:34:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:20:46.584 14:34:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.584 14:34:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:46.584 [2024-10-14 14:34:27.204985] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:46.584 14:34:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.584 14:34:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:20:46.584 14:34:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.584 14:34:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:46.584 14:34:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.584 14:34:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:20:46.584 14:34:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.584 14:34:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:46.584 Malloc0 00:20:46.584 14:34:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.584 14:34:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:20:46.584 14:34:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.584 14:34:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:46.584 14:34:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.584 14:34:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:46.584 14:34:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.584 14:34:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:46.584 [2024-10-14 14:34:27.255887] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:46.584 14:34:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.584 14:34:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=3428634 00:20:46.584 14:34:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:46.584 14:34:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=3428636 00:20:46.584 14:34:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:46.584 14:34:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=3428637 00:20:46.584 14:34:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 3428634 00:20:46.584 14:34:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:46.846 [2024-10-14 14:34:27.316213] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
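The target configuration driven by `control_msg_list.sh@19`–`@23` above amounts to five RPCs against `/var/tmp/spdk.sock`. A sketch using SPDK's standard `scripts/rpc.py` client follows (the log itself goes through the `rpc_cmd` test wrapper, so the client path is an assumption; the leading `echo` makes this a dry run — drop it against a live target):

```shell
# Sketch of the target setup from control_msg_list.sh lines 19-23.
RPC="echo scripts/rpc.py -s /var/tmp/spdk.sock"   # "echo" = dry run; path assumes an SPDK checkout
NQN=nqn.2024-07.io.spdk:cnode0

$RPC nvmf_create_transport -t tcp -o \
     --in-capsule-data-size 768 --control-msg-num 1   # tiny control-msg pool: the point of this test
$RPC nvmf_create_subsystem "$NQN" -a                  # -a: allow any host
$RPC bdev_malloc_create -b Malloc0 32 512             # 32 MiB RAM disk, 512-byte blocks
$RPC nvmf_subsystem_add_ns "$NQN" Malloc0
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
```

Limiting `--control-msg-num` to 1 forces the three concurrent perf clients launched next to contend for a single control message, which is the list-exhaustion path this test exercises.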
00:20:46.846 [2024-10-14 14:34:27.336403] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:46.846 [2024-10-14 14:34:27.336693] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:47.788 Initializing NVMe Controllers 00:20:47.788 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:47.788 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:20:47.788 Initialization complete. Launching workers. 00:20:47.788 ======================================================== 00:20:47.788 Latency(us) 00:20:47.788 Device Information : IOPS MiB/s Average min max 00:20:47.788 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 40895.19 40680.50 40974.15 00:20:47.788 ======================================================== 00:20:47.788 Total : 25.00 0.10 40895.19 40680.50 40974.15 00:20:47.788 00:20:47.788 14:34:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 3428636 00:20:47.788 Initializing NVMe Controllers 00:20:47.788 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:47.788 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:20:47.788 Initialization complete. Launching workers. 
00:20:47.788 ======================================================== 00:20:47.788 Latency(us) 00:20:47.788 Device Information : IOPS MiB/s Average min max 00:20:47.788 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 71.00 0.28 14653.57 282.88 41106.44 00:20:47.788 ======================================================== 00:20:47.788 Total : 71.00 0.28 14653.57 282.88 41106.44 00:20:47.788 00:20:47.788 14:34:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 3428637 00:20:47.788 Initializing NVMe Controllers 00:20:47.788 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:47.788 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:20:47.788 Initialization complete. Launching workers. 00:20:47.788 ======================================================== 00:20:47.788 Latency(us) 00:20:47.788 Device Information : IOPS MiB/s Average min max 00:20:47.788 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 40924.48 40792.23 41428.87 00:20:47.788 ======================================================== 00:20:47.788 Total : 25.00 0.10 40924.48 40792.23 41428.87 00:20:47.788 00:20:47.788 14:34:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:47.788 14:34:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:20:47.788 14:34:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@514 -- # nvmfcleanup 00:20:47.788 14:34:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:20:47.788 14:34:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:47.788 14:34:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:20:47.788 14:34:28 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:47.788 14:34:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:47.788 rmmod nvme_tcp 00:20:47.788 rmmod nvme_fabrics 00:20:48.050 rmmod nvme_keyring 00:20:48.050 14:34:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:48.050 14:34:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:20:48.050 14:34:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:20:48.050 14:34:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@515 -- # '[' -n 3428457 ']' 00:20:48.050 14:34:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # killprocess 3428457 00:20:48.050 14:34:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@950 -- # '[' -z 3428457 ']' 00:20:48.050 14:34:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # kill -0 3428457 00:20:48.050 14:34:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # uname 00:20:48.050 14:34:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:48.050 14:34:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3428457 00:20:48.050 14:34:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:48.050 14:34:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:48.050 14:34:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3428457' 00:20:48.050 killing process with pid 3428457 00:20:48.050 
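The three concurrent `spdk_nvme_perf` clients whose results appear above share one invocation shape, differing only in core mask (0x2, 0x4, 0x8). A parameterized sketch of that launch-and-wait pattern (the relative binary path is an assumption; the `echo` keeps it a dry run):

```shell
# Sketch: launch three perf clients concurrently, one core mask each,
# then wait for all of them (mirrors control_msg_list.sh lines 26-35).
PERF=build/bin/spdk_nvme_perf            # path relative to an SPDK build tree (assumed)
TRID='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'

pids=()
for mask in 0x2 0x4 0x8; do
    # drop the leading "echo" to actually run the client
    echo "$PERF" -c "$mask" -q 1 -o 4096 -w randread -t 1 -r "$TRID" &
    pids+=($!)
done
for pid in "${pids[@]}"; do wait "$pid"; done
```

Each client issues queue-depth-1 4 KiB random reads for one second; with only one control message configured on the target, two of the three clients stall waiting for it, which shows up in the ~40 ms average latencies on lcores 1 and 3 versus lcore 2 above.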
14:34:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@969 -- # kill 3428457 00:20:48.050 14:34:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@974 -- # wait 3428457 00:20:48.050 14:34:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:20:48.050 14:34:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:20:48.050 14:34:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:20:48.050 14:34:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:20:48.050 14:34:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # iptables-save 00:20:48.050 14:34:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:20:48.050 14:34:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # iptables-restore 00:20:48.050 14:34:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:48.050 14:34:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:48.050 14:34:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:48.050 14:34:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:48.050 14:34:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:50.600 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:50.600 00:20:50.600 real 0m12.061s 00:20:50.600 user 0m7.769s 00:20:50.600 sys 0m6.208s 00:20:50.600 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:20:50.600 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:50.600 ************************************ 00:20:50.600 END TEST nvmf_control_msg_list 00:20:50.600 ************************************ 00:20:50.600 14:34:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:50.600 14:34:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:50.600 14:34:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:50.600 14:34:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:50.600 ************************************ 00:20:50.600 START TEST nvmf_wait_for_buf 00:20:50.600 ************************************ 00:20:50.600 14:34:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:50.600 * Looking for test storage... 
00:20:50.600 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:50.600 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:50.600 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lcov --version 00:20:50.600 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:50.600 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:50.600 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:50.600 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:50.600 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:50.600 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:20:50.600 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:20:50.600 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:20:50.600 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:20:50.600 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:20:50.600 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:20:50.600 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:20:50.600 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:50.600 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:20:50.600 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:20:50.600 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:50.600 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:50.601 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:20:50.601 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:20:50.601 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:50.601 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:20:50.601 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:20:50.601 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:20:50.601 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:20:50.601 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:50.601 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:20:50.601 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:20:50.601 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:50.601 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:50.601 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:20:50.601 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:50.601 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # 
export 'LCOV_OPTS= 00:20:50.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:50.601 --rc genhtml_branch_coverage=1 00:20:50.601 --rc genhtml_function_coverage=1 00:20:50.601 --rc genhtml_legend=1 00:20:50.601 --rc geninfo_all_blocks=1 00:20:50.601 --rc geninfo_unexecuted_blocks=1 00:20:50.601 00:20:50.601 ' 00:20:50.601 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:50.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:50.601 --rc genhtml_branch_coverage=1 00:20:50.601 --rc genhtml_function_coverage=1 00:20:50.601 --rc genhtml_legend=1 00:20:50.601 --rc geninfo_all_blocks=1 00:20:50.601 --rc geninfo_unexecuted_blocks=1 00:20:50.601 00:20:50.601 ' 00:20:50.601 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:50.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:50.601 --rc genhtml_branch_coverage=1 00:20:50.601 --rc genhtml_function_coverage=1 00:20:50.601 --rc genhtml_legend=1 00:20:50.601 --rc geninfo_all_blocks=1 00:20:50.601 --rc geninfo_unexecuted_blocks=1 00:20:50.601 00:20:50.601 ' 00:20:50.601 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:50.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:50.601 --rc genhtml_branch_coverage=1 00:20:50.601 --rc genhtml_function_coverage=1 00:20:50.601 --rc genhtml_legend=1 00:20:50.601 --rc geninfo_all_blocks=1 00:20:50.601 --rc geninfo_unexecuted_blocks=1 00:20:50.601 00:20:50.601 ' 00:20:50.601 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:50.601 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:20:50.601 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:20:50.601 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:50.601 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:50.601 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:50.601 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:50.601 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:50.601 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:50.601 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:50.601 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:50.601 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:50.601 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:50.601 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:50.601 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:50.601 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:50.601 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:50.601 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:50.601 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:50.601 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:20:50.601 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:50.601 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:50.601 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:50.601 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.601 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.601 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.601 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:20:50.601 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.601 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:20:50.601 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:50.601 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:50.601 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:50.601 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:20:50.601 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:50.601 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:50.601 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:50.601 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:50.601 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:50.601 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:50.601 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:20:50.601 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:20:50.601 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:50.601 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # prepare_net_devs 00:20:50.601 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:20:50.601 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:20:50.601 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:50.601 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:50.601 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:50.601 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:20:50.601 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # 
gather_supported_nvmf_pci_devs 00:20:50.601 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:20:50.601 14:34:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:58.752 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:58.752 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:20:58.752 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:58.752 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:58.752 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:58.752 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:58.752 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:58.752 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:58.753 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:58.753 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:58.753 Found net devices under 0000:31:00.0: cvl_0_0 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:58.753 14:34:38 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:58.753 Found net devices under 0000:31:00.1: cvl_0_1 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # is_hw=yes 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:58.753 14:34:38 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:58.753 14:34:38 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:58.753 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:58.753 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.662 ms 00:20:58.753 00:20:58.753 --- 10.0.0.2 ping statistics --- 00:20:58.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:58.753 rtt min/avg/max/mdev = 0.662/0.662/0.662/0.000 ms 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:58.753 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:58.753 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:20:58.753 00:20:58.753 --- 10.0.0.1 ping statistics --- 00:20:58.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:58.753 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@448 -- # return 0 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:58.753 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:58.754 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # nvmfpid=3433202 00:20:58.754 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@508 -- # waitforlisten 3433202 00:20:58.754 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:58.754 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@831 -- # '[' -z 3433202 ']' 00:20:58.754 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:58.754 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:58.754 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:58.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:58.754 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:58.754 14:34:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:58.754 [2024-10-14 14:34:38.730018] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:20:58.754 [2024-10-14 14:34:38.730119] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:58.754 [2024-10-14 14:34:38.803109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:58.754 [2024-10-14 14:34:38.844439] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:58.754 [2024-10-14 14:34:38.844475] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:58.754 [2024-10-14 14:34:38.844483] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:58.754 [2024-10-14 14:34:38.844489] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:58.754 [2024-10-14 14:34:38.844499] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:58.754 [2024-10-14 14:34:38.845116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:59.015 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:59.015 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # return 0 00:20:59.016 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:59.016 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:59.016 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:59.016 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:59.016 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:20:59.016 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:59.016 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:20:59.016 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.016 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:59.016 
14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.016 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:20:59.016 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.016 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:59.016 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.016 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:20:59.016 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.016 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:59.016 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.016 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:20:59.016 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.016 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:59.016 Malloc0 00:20:59.016 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.016 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:20:59.016 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.016 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:20:59.016 [2024-10-14 14:34:39.655904] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:59.016 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.016 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:20:59.016 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.016 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:59.016 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.016 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:20:59.016 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.016 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:59.016 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.016 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:59.016 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.016 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:59.016 [2024-10-14 14:34:39.692113] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:59.016 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:20:59.016 14:34:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:59.278 [2024-10-14 14:34:39.775141] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:00.665 Initializing NVMe Controllers 00:21:00.665 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:00.665 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:21:00.665 Initialization complete. Launching workers. 00:21:00.665 ======================================================== 00:21:00.665 Latency(us) 00:21:00.665 Device Information : IOPS MiB/s Average min max 00:21:00.665 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 126.79 15.85 32678.14 7992.85 63851.29 00:21:00.665 ======================================================== 00:21:00.665 Total : 126.79 15.85 32678.14 7992.85 63851.29 00:21:00.665 00:21:00.665 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:21:00.665 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:21:00.665 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.665 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:00.665 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.665 14:34:41 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2006 00:21:00.665 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2006 -eq 0 ]] 00:21:00.665 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:21:00.665 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:21:00.665 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@514 -- # nvmfcleanup 00:21:00.665 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:21:00.665 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:00.665 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:21:00.665 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:00.665 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:00.665 rmmod nvme_tcp 00:21:00.665 rmmod nvme_fabrics 00:21:00.665 rmmod nvme_keyring 00:21:00.665 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:00.665 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:21:00.665 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:21:00.665 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@515 -- # '[' -n 3433202 ']' 00:21:00.665 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # killprocess 3433202 00:21:00.665 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@950 -- # '[' -z 3433202 ']' 00:21:00.665 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # kill -0 3433202 
00:21:00.665 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # uname 00:21:00.665 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:00.926 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3433202 00:21:00.927 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:00.927 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:00.927 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3433202' 00:21:00.927 killing process with pid 3433202 00:21:00.927 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@969 -- # kill 3433202 00:21:00.927 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@974 -- # wait 3433202 00:21:00.927 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:21:00.927 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:21:00.927 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:21:00.927 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:21:00.927 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # iptables-save 00:21:00.927 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:21:00.927 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # iptables-restore 00:21:00.927 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:00.927 14:34:41 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:00.927 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:00.927 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:00.927 14:34:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:03.475 14:34:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:03.475 00:21:03.475 real 0m12.737s 00:21:03.475 user 0m5.260s 00:21:03.475 sys 0m6.015s 00:21:03.475 14:34:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:03.475 14:34:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:03.475 ************************************ 00:21:03.475 END TEST nvmf_wait_for_buf 00:21:03.475 ************************************ 00:21:03.475 14:34:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:21:03.475 14:34:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:21:03.475 14:34:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:21:03.475 14:34:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:21:03.475 14:34:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:21:03.475 14:34:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:11.621 14:34:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:11.621 14:34:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:21:11.621 14:34:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:11.621 
14:34:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:11.621 14:34:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:11.621 14:34:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:11.621 14:34:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:11.621 14:34:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:21:11.621 14:34:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:11.621 14:34:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:21:11.621 14:34:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:21:11.621 14:34:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:21:11.621 14:34:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:21:11.621 14:34:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:21:11.621 14:34:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:21:11.621 14:34:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:11.621 14:34:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:11.621 14:34:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:11.621 14:34:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:11.621 14:34:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:11.621 14:34:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:11.621 14:34:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:11.621 14:34:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:11.621 14:34:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:11.621 14:34:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:11.621 14:34:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:11.621 14:34:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:11.621 14:34:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:11.621 14:34:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:11.621 14:34:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:11.621 14:34:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:11.621 14:34:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:11.621 14:34:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:11.621 14:34:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:11.621 14:34:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:11.621 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:11.621 14:34:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:11.621 14:34:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:11.621 14:34:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:11.621 14:34:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:11.621 14:34:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:11.621 14:34:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:11.621 14:34:50 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:11.621 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:11.621 14:34:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:11.621 14:34:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:11.621 14:34:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:11.621 14:34:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:11.621 14:34:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:11.621 14:34:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:11.621 14:34:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:11.621 14:34:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:11.621 14:34:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:11.621 14:34:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:11.621 14:34:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:11.621 14:34:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:11.621 14:34:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:11.621 14:34:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:11.621 14:34:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:11.621 14:34:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:11.621 Found net devices under 0000:31:00.0: cvl_0_0 00:21:11.621 14:34:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:11.621 14:34:50 nvmf_tcp.nvmf_target_extra 
-- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:11.621 14:34:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:11.621 14:34:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:11.621 14:34:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:11.621 14:34:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:11.621 14:34:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:11.621 14:34:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:11.621 14:34:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:11.621 Found net devices under 0000:31:00.1: cvl_0_1 00:21:11.621 14:34:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:11.621 14:34:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:11.621 14:34:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:11.621 14:34:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:21:11.621 14:34:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:11.621 14:34:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:11.621 14:34:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:11.621 14:34:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:11.621 ************************************ 00:21:11.621 START TEST nvmf_perf_adq 00:21:11.621 ************************************ 00:21:11.621 14:34:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:11.621 * Looking for test storage... 00:21:11.621 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:11.621 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:11.621 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lcov --version 00:21:11.622 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:11.622 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:11.622 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:11.622 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:11.622 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:11.622 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:21:11.622 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:21:11.622 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:21:11.622 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:21:11.622 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:21:11.622 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:21:11.622 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:21:11.622 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:11.622 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
scripts/common.sh@344 -- # case "$op" in 00:21:11.622 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:21:11.622 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:11.622 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:11.622 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:21:11.622 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:21:11.622 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:11.622 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:21:11.622 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:21:11.622 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:21:11.622 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:21:11.622 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:11.622 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:21:11.622 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:21:11.622 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:11.622 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:11.622 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:21:11.622 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:11.622 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:11.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:11.622 --rc genhtml_branch_coverage=1 00:21:11.622 --rc genhtml_function_coverage=1 00:21:11.622 --rc genhtml_legend=1 00:21:11.622 --rc geninfo_all_blocks=1 00:21:11.622 --rc geninfo_unexecuted_blocks=1 00:21:11.622 00:21:11.622 ' 00:21:11.622 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:11.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:11.622 --rc genhtml_branch_coverage=1 00:21:11.622 --rc genhtml_function_coverage=1 00:21:11.622 --rc genhtml_legend=1 00:21:11.622 --rc geninfo_all_blocks=1 00:21:11.622 --rc geninfo_unexecuted_blocks=1 00:21:11.622 00:21:11.622 ' 00:21:11.622 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:11.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:11.622 --rc genhtml_branch_coverage=1 00:21:11.622 --rc genhtml_function_coverage=1 00:21:11.622 --rc genhtml_legend=1 00:21:11.622 --rc geninfo_all_blocks=1 00:21:11.622 --rc geninfo_unexecuted_blocks=1 00:21:11.622 00:21:11.622 ' 00:21:11.622 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:11.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:11.622 --rc genhtml_branch_coverage=1 00:21:11.622 --rc genhtml_function_coverage=1 00:21:11.622 --rc genhtml_legend=1 00:21:11.622 --rc geninfo_all_blocks=1 00:21:11.622 --rc geninfo_unexecuted_blocks=1 00:21:11.622 00:21:11.622 ' 00:21:11.622 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:11.622 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:21:11.622 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:21:11.622 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:11.622 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:11.622 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:11.622 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:11.622 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:11.622 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:11.622 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:11.622 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:11.622 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:11.622 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:11.622 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:11.622 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:11.622 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:11.622 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:11.622 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:11.622 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
00:21:11.622 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:21:11.622 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:11.622 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:11.622 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:11.622 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.622 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.622 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.622 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:21:11.622 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.622 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:21:11.622 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:11.622 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:11.622 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:11.622 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:11.622 14:34:51 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:11.622 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:11.622 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:11.622 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:11.622 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:11.622 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:11.622 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:21:11.622 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:11.622 14:34:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:18.206 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:18.206 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:18.206 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:18.206 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:18.206 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:18.206 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:18.206 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:18.206 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:18.206 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:18.206 14:34:58 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:18.206 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:18.206 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:18.206 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:18.206 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:18.206 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:18.207 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:18.207 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:18.207 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:18.207 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:18.207 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:18.207 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:18.207 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:18.207 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:18.207 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:18.207 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:18.207 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:18.207 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:18.207 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:18.207 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:18.207 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:18.207 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:18.207 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:18.207 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:18.207 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:18.207 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:18.207 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:18.207 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:18.207 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:18.207 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:18.207 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:18.207 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:18.207 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:18.207 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:18.207 
Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:18.207 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:18.207 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:18.207 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:18.207 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:18.207 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:18.207 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:18.207 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:18.207 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:18.207 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:18.207 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:18.207 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:18.207 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:18.207 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:18.207 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:18.207 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:18.207 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:18.207 Found net devices under 0000:31:00.0: cvl_0_0 00:21:18.207 14:34:58 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:18.207 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:18.207 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:18.207 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:18.207 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:18.207 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:18.207 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:18.207 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:18.207 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:18.207 Found net devices under 0000:31:00.1: cvl_0_1 00:21:18.207 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:18.207 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:18.207 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:18.207 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:21:18.207 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:18.207 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:21:18.207 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 
00:21:18.207 14:34:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:19.589 14:34:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:21.502 14:35:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:21:26.781 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:21:26.781 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:21:26.781 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:26.781 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # prepare_net_devs 00:21:26.781 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # local -g is_hw=no 00:21:26.781 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # remove_spdk_ns 00:21:26.781 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:26.781 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:26.781 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:26.781 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:21:26.781 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:21:26.781 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:26.781 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:26.781 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:26.781 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@315 -- # pci_devs=() 00:21:26.781 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:26.781 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:26.781 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:26.781 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:26.781 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:26.781 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:26.781 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:26.781 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:26.781 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:26.781 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:26.781 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:26.781 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:26.781 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:26.781 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:26.781 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:26.781 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:26.781 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:26.781 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:26.781 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:26.781 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:26.781 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:26.781 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:26.781 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:26.781 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:26.781 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:26.781 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:26.781 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:26.781 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:26.781 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:26.781 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:26.781 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:26.781 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:26.781 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:26.781 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:26.781 14:35:06 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:26.781 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:26.781 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:26.781 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:26.781 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:26.781 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:26.781 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:26.781 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:26.781 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:26.781 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:26.781 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:26.781 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:26.781 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:26.781 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:26.781 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:26.781 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:26.781 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:26.781 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:21:26.781 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:26.781 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:26.781 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:26.781 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:26.782 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:26.782 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:26.782 Found net devices under 0000:31:00.0: cvl_0_0 00:21:26.782 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:26.782 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:26.782 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:26.782 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:26.782 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:26.782 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:26.782 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:26.782 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:26.782 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:26.782 Found net devices under 0000:31:00.1: cvl_0_1 00:21:26.782 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:26.782 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:26.782 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # is_hw=yes 00:21:26.782 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:21:26.782 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:21:26.782 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:21:26.782 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:26.782 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:26.782 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:26.782 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:26.782 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:26.782 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:26.782 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:26.782 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:26.782 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:26.782 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:26.782 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:26.782 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:26.782 14:35:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:26.782 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:26.782 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:26.782 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:26.782 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:26.782 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:26.782 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:26.782 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:26.782 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:26.782 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:26.782 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:26.782 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:26.782 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.534 ms 00:21:26.782 00:21:26.782 --- 10.0.0.2 ping statistics --- 00:21:26.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:26.782 rtt min/avg/max/mdev = 0.534/0.534/0.534/0.000 ms 00:21:26.782 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:26.782 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:26.782 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:21:26.782 00:21:26.782 --- 10.0.0.1 ping statistics --- 00:21:26.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:26.782 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:21:26.782 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:26.782 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # return 0 00:21:26.782 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:21:26.782 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:26.782 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:21:26.782 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:21:26.782 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:26.782 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:21:26.782 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:21:26.782 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:26.782 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # timing_enter 
start_nvmf_tgt 00:21:26.782 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:26.782 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:26.782 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # nvmfpid=3443569 00:21:26.782 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # waitforlisten 3443569 00:21:26.782 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:26.782 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 3443569 ']' 00:21:26.782 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:26.782 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:26.782 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:26.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:26.782 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:26.782 14:35:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:26.782 [2024-10-14 14:35:07.406704] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 
00:21:26.782 [2024-10-14 14:35:07.406767] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:26.782 [2024-10-14 14:35:07.482757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:27.042 [2024-10-14 14:35:07.527594] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:27.042 [2024-10-14 14:35:07.527628] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:27.042 [2024-10-14 14:35:07.527637] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:27.042 [2024-10-14 14:35:07.527645] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:27.042 [2024-10-14 14:35:07.527651] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:27.042 [2024-10-14 14:35:07.529233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:27.042 [2024-10-14 14:35:07.529371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:27.042 [2024-10-14 14:35:07.529534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:27.042 [2024-10-14 14:35:07.529535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:27.611 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:27.611 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:21:27.611 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:27.611 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:27.611 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:27.611 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:27.611 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:21:27.611 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:27.611 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:27.611 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.611 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:27.611 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.611 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:27.611 14:35:08 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:21:27.611 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.611 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:27.611 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.611 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:27.611 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.611 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:27.871 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.871 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:21:27.871 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.871 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:27.871 [2024-10-14 14:35:08.375093] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:27.871 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.871 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:27.871 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.871 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:27.871 Malloc1 00:21:27.871 14:35:08 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.871 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:27.871 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.871 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:27.871 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.871 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:27.871 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.871 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:27.871 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.871 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:27.871 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.871 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:27.871 [2024-10-14 14:35:08.456430] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:27.871 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.871 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=3443922 00:21:27.871 14:35:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:21:27.871 14:35:08 
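Editor's note: the `rpc_cmd` sequence traced above (create transport → malloc bdev → subsystem → namespace → listener) is the standard way to stand up a TCP target. A hedged sketch of the same calls via SPDK's `scripts/rpc.py`, assuming an `nvmf_tgt` is already running on the default `/var/tmp/spdk.sock` — a command fragment for illustration, not runnable standalone:

```shell
# Equivalent of perf_adq.sh@45-49, issued through SPDK's rpc.py
# (requires a live nvmf_tgt; arguments mirror the trace above).
rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
rpc.py bdev_malloc_create 64 512 -b Malloc1
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

The listener address `10.0.0.2` matches the namespaced target interface set up earlier in the trace (`ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0`).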
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:29.780 14:35:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:21:29.780 14:35:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.780 14:35:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:29.780 14:35:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.780 14:35:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:21:29.780 "tick_rate": 2400000000, 00:21:29.780 "poll_groups": [ 00:21:29.780 { 00:21:29.780 "name": "nvmf_tgt_poll_group_000", 00:21:29.780 "admin_qpairs": 1, 00:21:29.780 "io_qpairs": 1, 00:21:29.780 "current_admin_qpairs": 1, 00:21:29.780 "current_io_qpairs": 1, 00:21:29.780 "pending_bdev_io": 0, 00:21:29.780 "completed_nvme_io": 20228, 00:21:29.780 "transports": [ 00:21:29.780 { 00:21:29.780 "trtype": "TCP" 00:21:29.780 } 00:21:29.780 ] 00:21:29.780 }, 00:21:29.780 { 00:21:29.780 "name": "nvmf_tgt_poll_group_001", 00:21:29.780 "admin_qpairs": 0, 00:21:29.780 "io_qpairs": 1, 00:21:29.780 "current_admin_qpairs": 0, 00:21:29.780 "current_io_qpairs": 1, 00:21:29.780 "pending_bdev_io": 0, 00:21:29.780 "completed_nvme_io": 27137, 00:21:29.780 "transports": [ 00:21:29.780 { 00:21:29.780 "trtype": "TCP" 00:21:29.780 } 00:21:29.780 ] 00:21:29.780 }, 00:21:29.780 { 00:21:29.780 "name": "nvmf_tgt_poll_group_002", 00:21:29.780 "admin_qpairs": 0, 00:21:29.780 "io_qpairs": 1, 00:21:29.780 "current_admin_qpairs": 0, 00:21:29.780 "current_io_qpairs": 1, 00:21:29.780 "pending_bdev_io": 0, 00:21:29.780 "completed_nvme_io": 19826, 00:21:29.780 
"transports": [ 00:21:29.780 { 00:21:29.780 "trtype": "TCP" 00:21:29.780 } 00:21:29.780 ] 00:21:29.780 }, 00:21:29.780 { 00:21:29.780 "name": "nvmf_tgt_poll_group_003", 00:21:29.780 "admin_qpairs": 0, 00:21:29.780 "io_qpairs": 1, 00:21:29.780 "current_admin_qpairs": 0, 00:21:29.780 "current_io_qpairs": 1, 00:21:29.780 "pending_bdev_io": 0, 00:21:29.780 "completed_nvme_io": 20343, 00:21:29.780 "transports": [ 00:21:29.780 { 00:21:29.780 "trtype": "TCP" 00:21:29.780 } 00:21:29.780 ] 00:21:29.780 } 00:21:29.780 ] 00:21:29.780 }' 00:21:29.780 14:35:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:21:29.780 14:35:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:21:30.041 14:35:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:21:30.041 14:35:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:21:30.041 14:35:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 3443922 00:21:38.170 Initializing NVMe Controllers 00:21:38.170 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:38.170 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:38.170 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:38.170 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:38.170 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:38.170 Initialization complete. Launching workers. 
00:21:38.170 ======================================================== 00:21:38.170 Latency(us) 00:21:38.170 Device Information : IOPS MiB/s Average min max 00:21:38.170 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11244.20 43.92 5691.95 2153.56 9502.31 00:21:38.170 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 14165.70 55.33 4518.33 1298.10 10169.81 00:21:38.170 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13774.10 53.81 4646.15 1283.43 10581.38 00:21:38.170 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 14205.50 55.49 4504.91 1359.52 11010.53 00:21:38.170 ======================================================== 00:21:38.170 Total : 53389.49 208.55 4794.91 1283.43 11010.53 00:21:38.170 00:21:38.170 14:35:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:21:38.170 14:35:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # nvmfcleanup 00:21:38.170 14:35:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:21:38.170 14:35:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:38.170 14:35:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:21:38.170 14:35:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:38.170 14:35:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:38.170 rmmod nvme_tcp 00:21:38.170 rmmod nvme_fabrics 00:21:38.170 rmmod nvme_keyring 00:21:38.170 14:35:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:38.170 14:35:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:21:38.170 14:35:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:21:38.170 14:35:18 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@515 -- # '[' -n 3443569 ']' 00:21:38.170 14:35:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # killprocess 3443569 00:21:38.170 14:35:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 3443569 ']' 00:21:38.170 14:35:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 3443569 00:21:38.170 14:35:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:21:38.170 14:35:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:38.170 14:35:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3443569 00:21:38.170 14:35:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:38.170 14:35:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:38.170 14:35:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3443569' 00:21:38.170 killing process with pid 3443569 00:21:38.170 14:35:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 3443569 00:21:38.170 14:35:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 3443569 00:21:38.430 14:35:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:21:38.430 14:35:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:21:38.430 14:35:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:21:38.430 14:35:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:21:38.430 14:35:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-save 00:21:38.430 
14:35:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:21:38.430 14:35:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-restore 00:21:38.430 14:35:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:38.430 14:35:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:38.430 14:35:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:38.430 14:35:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:38.430 14:35:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:40.338 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:40.338 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:21:40.338 14:35:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:21:40.338 14:35:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:42.246 14:35:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:44.151 14:35:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:21:49.429 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:21:49.429 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:21:49.429 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:49.429 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # prepare_net_devs 00:21:49.429 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- nvmf/common.sh@436 -- # local -g is_hw=no 00:21:49.429 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # remove_spdk_ns 00:21:49.429 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:49.429 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:49.429 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:49.429 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:21:49.429 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:21:49.429 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:49.429 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:49.429 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:49.429 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:49.429 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:49.429 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:49.429 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:49.429 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:49.429 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:49.429 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:49.429 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:49.429 14:35:29 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:49.429 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:49.429 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:49.429 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:49.429 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:49.429 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:49.429 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:49.429 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:49.429 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:49.429 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:49.429 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:49.429 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:49.429 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:49.429 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:49.429 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:49.429 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:49.429 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:49.429 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:49.429 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:49.429 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:49.429 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:49.429 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:49.429 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:49.429 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:49.429 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:49.429 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:49.429 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:49.429 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:49.429 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:49.429 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:49.429 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:49.429 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:49.429 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:49.429 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:49.429 
Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:49.429 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:49.429 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:49.429 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:49.429 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:49.429 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:49.429 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:49.429 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:49.429 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:49.429 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:49.429 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:49.429 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:49.429 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:49.429 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:49.429 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:49.429 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:49.429 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:49.429 Found net devices under 0000:31:00.0: cvl_0_0 00:21:49.429 14:35:29 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:49.429 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:49.429 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:49.429 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:49.429 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:49.429 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:49.429 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:49.429 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:49.429 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:49.429 Found net devices under 0000:31:00.1: cvl_0_1 00:21:49.429 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:49.429 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:49.429 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # is_hw=yes 00:21:49.429 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:21:49.429 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:21:49.429 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:21:49.429 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:49.430 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:49.430 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:49.430 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:49.430 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:49.430 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:49.430 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:49.430 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:49.430 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:49.430 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:49.430 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:49.430 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:49.430 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:49.430 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:49.430 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:49.430 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:49.430 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:49.430 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 
up 00:21:49.430 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:49.430 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:49.430 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:49.430 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:49.430 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:49.430 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:49.430 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.700 ms 00:21:49.430 00:21:49.430 --- 10.0.0.2 ping statistics --- 00:21:49.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:49.430 rtt min/avg/max/mdev = 0.700/0.700/0.700/0.000 ms 00:21:49.430 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:49.430 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:49.430 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.329 ms 00:21:49.430 00:21:49.430 --- 10.0.0.1 ping statistics --- 00:21:49.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:49.430 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:21:49.430 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:49.430 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # return 0 00:21:49.430 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:21:49.430 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:49.430 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:21:49.430 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:21:49.430 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:49.430 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:21:49.430 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:21:49.430 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:21:49.430 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:21:49.430 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:21:49.430 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:21:49.430 net.core.busy_poll = 1 00:21:49.430 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:21:49.430 net.core.busy_read = 1 00:21:49.430 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:21:49.430 14:35:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:21:49.430 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:21:49.430 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:21:49.690 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:21:49.690 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:49.690 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:49.690 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:49.690 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:49.690 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # nvmfpid=3448441 00:21:49.690 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # waitforlisten 3448441 00:21:49.690 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
--wait-for-rpc 00:21:49.690 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 3448441 ']' 00:21:49.690 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:49.690 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:49.690 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:49.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:49.690 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:49.690 14:35:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:49.690 [2024-10-14 14:35:30.294972] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:21:49.690 [2024-10-14 14:35:30.295043] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:49.690 [2024-10-14 14:35:30.369558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:49.690 [2024-10-14 14:35:30.412792] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:49.690 [2024-10-14 14:35:30.412828] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:49.690 [2024-10-14 14:35:30.412840] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:49.690 [2024-10-14 14:35:30.412846] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:21:49.690 [2024-10-14 14:35:30.412852] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:49.690 [2024-10-14 14:35:30.414743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:49.690 [2024-10-14 14:35:30.414861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:49.690 [2024-10-14 14:35:30.415018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:49.690 [2024-10-14 14:35:30.415019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:50.631 14:35:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:50.631 14:35:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:21:50.631 14:35:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:50.631 14:35:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:50.631 14:35:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:50.631 14:35:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:50.631 14:35:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:21:50.631 14:35:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:50.631 14:35:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:50.631 14:35:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.631 14:35:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:50.631 14:35:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
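For reference, the host-side ADQ plumbing applied just before this target restart (perf_adq.sh@22-38, timestamps 14:35:29-30 above) boils down to a short command sequence. The sketch below is a dry run that only assembles and prints the commands: the real versions need root, an Intel E810 (ice) interface, and the cvl_0_0_ns_spdk namespace from the log. The interface name, address, and port mirror the log's values.

```shell
# Dry-run sketch of the ADQ configuration sequence from perf_adq.sh.
# Commands are only assembled and printed here so the order can be inspected;
# running them for real requires root and an ice-driven E810 NIC.
IFACE=cvl_0_0
TADDR=10.0.0.2
TPORT=4420

adq_cmds=(
    # Enable hardware TC offload and disable packet-inspect optimization.
    "ethtool --offload $IFACE hw-tc-offload on"
    "ethtool --set-priv-flags $IFACE channel-pkt-inspect-optimize off"
    # Busy polling keeps application threads spinning on their own queues.
    "sysctl -w net.core.busy_poll=1"
    "sysctl -w net.core.busy_read=1"
    # Two traffic classes: TC0 = queues 0-1 (default), TC1 = queues 2-3 (ADQ).
    "tc qdisc add dev $IFACE root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel"
    "tc qdisc add dev $IFACE ingress"
    # Steer NVMe/TCP traffic (dst 10.0.0.2:4420) into TC1 in hardware.
    "tc filter add dev $IFACE protocol ip parent ffff: prio 1 flower dst_ip $TADDR/32 ip_proto tcp dst_port $TPORT skip_sw hw_tc 1"
)

for cmd in "${adq_cmds[@]}"; do
    echo "$cmd"
done
```

After this, the target is restarted with `--sock-priority 1` on the transport so that SPDK's sockets map onto the ADQ traffic class, which is what the sock_impl_set_options / nvmf_create_transport RPCs below do.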
00:21:50.631 14:35:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:50.631 14:35:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:21:50.631 14:35:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.631 14:35:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:50.631 14:35:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.631 14:35:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:50.631 14:35:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.631 14:35:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:50.631 14:35:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.631 14:35:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:21:50.631 14:35:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.631 14:35:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:50.631 [2024-10-14 14:35:31.256425] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:50.631 14:35:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.631 14:35:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:50.631 14:35:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.631 14:35:31 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:50.631 Malloc1 00:21:50.631 14:35:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.631 14:35:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:50.631 14:35:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.631 14:35:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:50.631 14:35:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.631 14:35:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:50.631 14:35:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.631 14:35:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:50.631 14:35:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.631 14:35:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:50.631 14:35:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.631 14:35:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:50.631 [2024-10-14 14:35:31.322409] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:50.631 14:35:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.631 14:35:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=3448744 
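The pass/fail check that follows the perf run (perf_adq.sh@108-109 below) parses `nvmf_get_stats` output and counts idle poll groups: with ADQ steering active, all four IO qpairs should collapse onto one poll group, leaving at least two of the four idle. A standalone sketch of that check, using a hypothetical canned stats blob in place of the live RPC output and `grep -c` as a dependency-free stand-in for the script's `jq ... | wc -l`:

```shell
# Standalone sketch of the post-ADQ poll-group check. The stats JSON below is
# a hypothetical stand-in for `rpc_cmd nvmf_get_stats`; with ADQ steering all
# four IO qpairs land on poll group 000 and the other three groups stay idle.
nvmf_stats='{
  "poll_groups": [
    { "name": "nvmf_tgt_poll_group_000", "current_io_qpairs": 4 },
    { "name": "nvmf_tgt_poll_group_001", "current_io_qpairs": 0 },
    { "name": "nvmf_tgt_poll_group_002", "current_io_qpairs": 0 },
    { "name": "nvmf_tgt_poll_group_003", "current_io_qpairs": 0 }
  ]
}'

# Count idle poll groups (the script does this with jq select() piped to wc -l).
idle=$(printf '%s\n' "$nvmf_stats" | grep -c '"current_io_qpairs": 0')

# Fail if fewer than 2 groups went idle, i.e. the filter did not steer traffic.
if [[ $idle -lt 2 ]]; then
    echo "ADQ steering ineffective: only $idle idle poll groups" >&2
    exit 1
fi
echo "idle=$idle"
```

The non-ADQ baseline earlier in the log applies the inverse check: without steering, round-robin scheduling should give every poll group one IO qpair, so the count of busy groups must equal 4.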
00:21:50.631 14:35:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:21:50.631 14:35:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:53.176 14:35:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:21:53.176 14:35:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.176 14:35:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:53.176 14:35:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.176 14:35:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:21:53.176 "tick_rate": 2400000000, 00:21:53.176 "poll_groups": [ 00:21:53.176 { 00:21:53.176 "name": "nvmf_tgt_poll_group_000", 00:21:53.176 "admin_qpairs": 1, 00:21:53.176 "io_qpairs": 4, 00:21:53.176 "current_admin_qpairs": 1, 00:21:53.176 "current_io_qpairs": 4, 00:21:53.176 "pending_bdev_io": 0, 00:21:53.176 "completed_nvme_io": 35406, 00:21:53.176 "transports": [ 00:21:53.176 { 00:21:53.176 "trtype": "TCP" 00:21:53.176 } 00:21:53.176 ] 00:21:53.176 }, 00:21:53.176 { 00:21:53.176 "name": "nvmf_tgt_poll_group_001", 00:21:53.176 "admin_qpairs": 0, 00:21:53.176 "io_qpairs": 0, 00:21:53.176 "current_admin_qpairs": 0, 00:21:53.176 "current_io_qpairs": 0, 00:21:53.176 "pending_bdev_io": 0, 00:21:53.176 "completed_nvme_io": 0, 00:21:53.176 "transports": [ 00:21:53.176 { 00:21:53.176 "trtype": "TCP" 00:21:53.176 } 00:21:53.176 ] 00:21:53.176 }, 00:21:53.176 { 00:21:53.176 "name": "nvmf_tgt_poll_group_002", 00:21:53.176 "admin_qpairs": 0, 00:21:53.176 "io_qpairs": 0, 00:21:53.176 "current_admin_qpairs": 0, 00:21:53.176 
"current_io_qpairs": 0, 00:21:53.176 "pending_bdev_io": 0, 00:21:53.176 "completed_nvme_io": 0, 00:21:53.176 "transports": [ 00:21:53.176 { 00:21:53.176 "trtype": "TCP" 00:21:53.176 } 00:21:53.176 ] 00:21:53.176 }, 00:21:53.176 { 00:21:53.176 "name": "nvmf_tgt_poll_group_003", 00:21:53.176 "admin_qpairs": 0, 00:21:53.176 "io_qpairs": 0, 00:21:53.176 "current_admin_qpairs": 0, 00:21:53.176 "current_io_qpairs": 0, 00:21:53.176 "pending_bdev_io": 0, 00:21:53.176 "completed_nvme_io": 0, 00:21:53.176 "transports": [ 00:21:53.176 { 00:21:53.176 "trtype": "TCP" 00:21:53.176 } 00:21:53.176 ] 00:21:53.176 } 00:21:53.176 ] 00:21:53.176 }' 00:21:53.176 14:35:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:21:53.176 14:35:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:21:53.176 14:35:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=3 00:21:53.176 14:35:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 3 -lt 2 ]] 00:21:53.176 14:35:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 3448744 00:22:01.311 Initializing NVMe Controllers 00:22:01.311 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:01.311 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:01.311 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:01.311 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:01.311 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:01.311 Initialization complete. Launching workers. 
00:22:01.311 ======================================================== 00:22:01.311 Latency(us) 00:22:01.311 Device Information : IOPS MiB/s Average min max 00:22:01.311 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5854.00 22.87 10938.47 1394.66 59196.78 00:22:01.311 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5994.80 23.42 10679.30 1460.01 59074.77 00:22:01.311 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 8702.30 33.99 7356.98 1074.54 55282.47 00:22:01.311 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4991.40 19.50 12822.91 2011.56 62240.21 00:22:01.311 ======================================================== 00:22:01.311 Total : 25542.49 99.78 10025.68 1074.54 62240.21 00:22:01.311 00:22:01.311 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:22:01.311 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:01.311 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:22:01.311 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:01.311 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:22:01.311 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:01.311 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:01.311 rmmod nvme_tcp 00:22:01.311 rmmod nvme_fabrics 00:22:01.311 rmmod nvme_keyring 00:22:01.311 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:01.311 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:22:01.311 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:22:01.311 14:35:41 
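For reference, the `Total` row in the latency summary above is consistent with the per-core rows: total IOPS and MiB/s are the column sums, and the average latency is the IOPS-weighted mean. A quick arithmetic check (illustrative only, values copied from the table):

```python
# Per-core rows from the spdk_nvme_perf summary above (cores 4, 5, 6, 7).
per_core_iops = [5854.00, 5994.80, 8702.30, 4991.40]
per_core_mibs = [22.87, 23.42, 33.99, 19.50]
per_core_lat_us = [10938.47, 10679.30, 7356.98, 12822.91]

total_iops = sum(per_core_iops)                 # reported as 25542.49
total_mibs = sum(per_core_mibs)                 # reported as 99.78
weighted_lat = sum(i * l for i, l in zip(per_core_iops, per_core_lat_us)) / total_iops
print(round(total_iops, 2), round(total_mibs, 2), round(weighted_lat, 2))
```

Core 6 sustains the highest IOPS at the lowest latency, while each core ends up completing roughly the same total I/O over the 10-second run.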
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@515 -- # '[' -n 3448441 ']' 00:22:01.311 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # killprocess 3448441 00:22:01.311 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 3448441 ']' 00:22:01.311 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 3448441 00:22:01.311 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:22:01.311 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:01.311 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3448441 00:22:01.311 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:01.311 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:01.311 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3448441' 00:22:01.311 killing process with pid 3448441 00:22:01.311 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 3448441 00:22:01.311 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 3448441 00:22:01.311 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:01.311 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:01.311 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:01.311 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:22:01.311 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-save 00:22:01.311 
14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:01.311 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-restore 00:22:01.311 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:01.311 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:01.312 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:01.312 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:01.312 14:35:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:03.284 14:35:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:03.284 14:35:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:22:03.284 00:22:03.284 real 0m52.861s 00:22:03.284 user 2m49.337s 00:22:03.284 sys 0m11.410s 00:22:03.284 14:35:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:03.284 14:35:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:03.284 ************************************ 00:22:03.284 END TEST nvmf_perf_adq 00:22:03.284 ************************************ 00:22:03.284 14:35:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:03.284 14:35:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:03.284 14:35:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:03.284 14:35:43 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:22:03.284 ************************************ 00:22:03.284 START TEST nvmf_shutdown 00:22:03.284 ************************************ 00:22:03.284 14:35:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:03.580 * Looking for test storage... 00:22:03.580 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:03.580 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:03.580 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:22:03.580 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:03.580 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:03.580 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:03.580 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:03.580 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:03.580 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:22:03.580 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:22:03.580 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:22:03.580 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:22:03.580 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:22:03.580 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:22:03.580 14:35:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:22:03.581 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:03.581 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:22:03.581 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:22:03.581 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:03.581 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:03.581 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:22:03.581 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:22:03.581 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:03.581 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:22:03.581 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:22:03.581 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:22:03.581 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:22:03.581 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:03.581 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:22:03.581 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:22:03.581 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:03.581 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:03.581 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:22:03.581 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:03.581 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:03.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:03.581 --rc genhtml_branch_coverage=1 00:22:03.581 --rc genhtml_function_coverage=1 00:22:03.581 --rc genhtml_legend=1 00:22:03.581 --rc geninfo_all_blocks=1 00:22:03.581 --rc geninfo_unexecuted_blocks=1 00:22:03.581 00:22:03.581 ' 00:22:03.581 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:03.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:03.581 --rc genhtml_branch_coverage=1 00:22:03.581 --rc genhtml_function_coverage=1 00:22:03.581 --rc genhtml_legend=1 00:22:03.581 --rc geninfo_all_blocks=1 00:22:03.581 --rc geninfo_unexecuted_blocks=1 00:22:03.581 00:22:03.581 ' 00:22:03.581 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:03.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:03.581 --rc genhtml_branch_coverage=1 00:22:03.581 --rc genhtml_function_coverage=1 00:22:03.581 --rc genhtml_legend=1 00:22:03.581 --rc geninfo_all_blocks=1 00:22:03.581 --rc geninfo_unexecuted_blocks=1 00:22:03.581 00:22:03.581 ' 00:22:03.581 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:03.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:03.581 --rc genhtml_branch_coverage=1 00:22:03.581 --rc genhtml_function_coverage=1 00:22:03.581 --rc genhtml_legend=1 00:22:03.581 --rc geninfo_all_blocks=1 00:22:03.581 --rc geninfo_unexecuted_blocks=1 00:22:03.581 00:22:03.581 ' 00:22:03.581 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:03.581 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:22:03.581 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:03.581 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:03.581 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:03.581 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:03.581 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:03.581 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:03.581 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:03.581 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:03.581 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:03.581 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:03.581 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:03.581 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:03.581 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:03.581 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:03.581 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:22:03.581 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:03.581 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:03.581 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:22:03.581 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:03.581 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:03.581 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:03.581 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:03.581 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:03.581 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:03.581 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:22:03.581 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:03.581 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:22:03.581 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:03.581 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:03.581 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:03.581 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:03.581 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:03.581 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:03.581 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:03.581 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:03.581 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:03.581 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:03.581 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:03.581 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
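The `common.sh: line 33: [: : integer expression expected` message above is emitted when a numeric `[` test receives an empty operand, as in the traced `'[' '' -eq 1 ']'`. A minimal standalone reproduction of that error class (illustrative, not the SPDK script itself; `no_huge` is a hypothetical stand-in variable):

```shell
# POSIX test's -eq requires integer operands, so an empty/unset variable
# makes the test fail with "integer expression expected" (exit status 2),
# and the condition is then treated as false.
no_huge=""   # stand-in for an unset configuration value

if [ "$no_huge" -eq 1 ] 2>/dev/null; then
  echo "branch taken"
else
  echo "branch skipped: empty operand is not an integer"
fi
```

Guarding with a default, e.g. `[ "${no_huge:-0}" -eq 1 ]`, avoids the message; here the script simply falls through to the else path, which is why the run continues normally after the warning.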
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:03.581 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:22:03.581 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:03.581 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:03.581 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:03.581 ************************************ 00:22:03.581 START TEST nvmf_shutdown_tc1 00:22:03.581 ************************************ 00:22:03.581 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:22:03.581 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:22:03.581 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:03.581 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:03.581 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:03.581 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:03.581 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:03.581 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:03.581 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:03.581 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:22:03.581 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:03.581 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:03.581 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:03.581 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:03.582 14:35:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:11.770 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:11.770 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:11.770 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:11.770 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:11.770 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:11.770 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:11.770 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:11.770 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:22:11.770 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:11.770 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:22:11.770 14:35:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:22:11.770 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:22:11.770 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:22:11.770 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:22:11.770 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:11.770 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:11.770 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:11.770 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:11.770 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:11.770 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:11.770 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:11.770 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:11.770 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:11.770 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:11.770 14:35:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:11.770 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:11.770 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:11.771 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:11.771 14:35:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:11.771 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:11.771 Found net devices under 0000:31:00.0: cvl_0_0 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@426 -- 
# echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:11.771 Found net devices under 0000:31:00.1: cvl_0_1 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # is_hw=yes 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:11.771 14:35:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:11.771 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:11.771 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.662 ms 00:22:11.771 00:22:11.771 --- 10.0.0.2 ping statistics --- 00:22:11.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:11.771 rtt min/avg/max/mdev = 0.662/0.662/0.662/0.000 ms 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:11.771 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:11.771 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:22:11.771 00:22:11.771 --- 10.0.0.1 ping statistics --- 00:22:11.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:11.771 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # return 0 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # nvmfpid=3455265 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # waitforlisten 3455265 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 3455265 ']' 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:11.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:11.771 14:35:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:11.771 [2024-10-14 14:35:51.980461] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:22:11.772 [2024-10-14 14:35:51.980526] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:11.772 [2024-10-14 14:35:52.070788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:11.772 [2024-10-14 14:35:52.122339] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:11.772 [2024-10-14 14:35:52.122391] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:11.772 [2024-10-14 14:35:52.122400] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:11.772 [2024-10-14 14:35:52.122407] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:11.772 [2024-10-14 14:35:52.122413] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:11.772 [2024-10-14 14:35:52.124425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:11.772 [2024-10-14 14:35:52.124593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:11.772 [2024-10-14 14:35:52.124761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:11.772 [2024-10-14 14:35:52.124762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:12.344 14:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:12.344 14:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:22:12.344 14:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:12.344 14:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:12.344 14:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:12.344 14:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:12.344 14:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:12.344 14:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.344 14:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:12.344 [2024-10-14 14:35:52.832150] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:12.344 14:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.344 14:35:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:12.344 14:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:12.344 14:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:12.344 14:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:12.344 14:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:12.344 14:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:12.344 14:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:12.344 14:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:12.344 14:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:12.344 14:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:12.344 14:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:12.344 14:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:12.344 14:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:12.344 14:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:12.344 14:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:22:12.344 14:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:12.344 14:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:12.344 14:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:12.344 14:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:12.344 14:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:12.344 14:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:12.344 14:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:12.344 14:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:12.344 14:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:12.344 14:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:12.344 14:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:12.344 14:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.344 14:35:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:12.344 Malloc1 00:22:12.344 [2024-10-14 14:35:52.954577] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:12.344 Malloc2 00:22:12.344 Malloc3 00:22:12.344 Malloc4 00:22:12.604 Malloc5 00:22:12.604 Malloc6 00:22:12.604 Malloc7 00:22:12.604 Malloc8 00:22:12.604 Malloc9 
00:22:12.604 Malloc10 00:22:12.604 14:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.604 14:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:12.604 14:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:12.604 14:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:12.865 14:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=3455503 00:22:12.865 14:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 3455503 /var/tmp/bdevperf.sock 00:22:12.866 14:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 3455503 ']' 00:22:12.866 14:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:12.866 14:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:12.866 14:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:12.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:12.866 14:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:22:12.866 14:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:12.866 14:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:12.866 14:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:12.866 14:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config=() 00:22:12.866 14:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # local subsystem config 00:22:12.866 14:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:12.866 14:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:12.866 { 00:22:12.866 "params": { 00:22:12.866 "name": "Nvme$subsystem", 00:22:12.866 "trtype": "$TEST_TRANSPORT", 00:22:12.866 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.866 "adrfam": "ipv4", 00:22:12.866 "trsvcid": "$NVMF_PORT", 00:22:12.866 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.866 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.866 "hdgst": ${hdgst:-false}, 00:22:12.866 "ddgst": ${ddgst:-false} 00:22:12.866 }, 00:22:12.866 "method": "bdev_nvme_attach_controller" 00:22:12.866 } 00:22:12.866 EOF 00:22:12.866 )") 00:22:12.866 14:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:12.866 14:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:12.866 14:35:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:12.866 { 00:22:12.866 "params": { 00:22:12.866 "name": "Nvme$subsystem", 00:22:12.866 "trtype": "$TEST_TRANSPORT", 00:22:12.866 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.866 "adrfam": "ipv4", 00:22:12.866 "trsvcid": "$NVMF_PORT", 00:22:12.866 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.866 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.866 "hdgst": ${hdgst:-false}, 00:22:12.866 "ddgst": ${ddgst:-false} 00:22:12.866 }, 00:22:12.866 "method": "bdev_nvme_attach_controller" 00:22:12.866 } 00:22:12.866 EOF 00:22:12.866 )") 00:22:12.866 14:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:12.866 14:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:12.866 14:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:12.866 { 00:22:12.866 "params": { 00:22:12.866 "name": "Nvme$subsystem", 00:22:12.866 "trtype": "$TEST_TRANSPORT", 00:22:12.866 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.866 "adrfam": "ipv4", 00:22:12.866 "trsvcid": "$NVMF_PORT", 00:22:12.866 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.866 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.866 "hdgst": ${hdgst:-false}, 00:22:12.866 "ddgst": ${ddgst:-false} 00:22:12.866 }, 00:22:12.866 "method": "bdev_nvme_attach_controller" 00:22:12.866 } 00:22:12.866 EOF 00:22:12.866 )") 00:22:12.866 14:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:12.866 14:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:12.866 14:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:12.866 { 
00:22:12.866 "params": { 00:22:12.866 "name": "Nvme$subsystem", 00:22:12.866 "trtype": "$TEST_TRANSPORT", 00:22:12.866 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.866 "adrfam": "ipv4", 00:22:12.866 "trsvcid": "$NVMF_PORT", 00:22:12.866 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.866 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.866 "hdgst": ${hdgst:-false}, 00:22:12.866 "ddgst": ${ddgst:-false} 00:22:12.866 }, 00:22:12.866 "method": "bdev_nvme_attach_controller" 00:22:12.866 } 00:22:12.866 EOF 00:22:12.866 )") 00:22:12.866 14:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:12.866 14:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:12.866 14:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:12.866 { 00:22:12.866 "params": { 00:22:12.866 "name": "Nvme$subsystem", 00:22:12.866 "trtype": "$TEST_TRANSPORT", 00:22:12.866 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.866 "adrfam": "ipv4", 00:22:12.866 "trsvcid": "$NVMF_PORT", 00:22:12.866 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.866 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.866 "hdgst": ${hdgst:-false}, 00:22:12.866 "ddgst": ${ddgst:-false} 00:22:12.866 }, 00:22:12.866 "method": "bdev_nvme_attach_controller" 00:22:12.866 } 00:22:12.866 EOF 00:22:12.866 )") 00:22:12.866 14:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:12.866 14:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:12.866 14:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:12.866 { 00:22:12.866 "params": { 00:22:12.866 "name": "Nvme$subsystem", 00:22:12.866 "trtype": "$TEST_TRANSPORT", 00:22:12.866 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:22:12.866 "adrfam": "ipv4", 00:22:12.866 "trsvcid": "$NVMF_PORT", 00:22:12.866 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.866 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.866 "hdgst": ${hdgst:-false}, 00:22:12.866 "ddgst": ${ddgst:-false} 00:22:12.866 }, 00:22:12.866 "method": "bdev_nvme_attach_controller" 00:22:12.866 } 00:22:12.866 EOF 00:22:12.866 )") 00:22:12.866 14:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:12.866 [2024-10-14 14:35:53.408038] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:22:12.866 [2024-10-14 14:35:53.408099] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:22:12.866 14:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:12.866 14:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:12.866 { 00:22:12.866 "params": { 00:22:12.866 "name": "Nvme$subsystem", 00:22:12.866 "trtype": "$TEST_TRANSPORT", 00:22:12.866 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.866 "adrfam": "ipv4", 00:22:12.866 "trsvcid": "$NVMF_PORT", 00:22:12.866 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.866 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.866 "hdgst": ${hdgst:-false}, 00:22:12.866 "ddgst": ${ddgst:-false} 00:22:12.866 }, 00:22:12.866 "method": "bdev_nvme_attach_controller" 00:22:12.866 } 00:22:12.866 EOF 00:22:12.866 )") 00:22:12.866 14:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:12.866 14:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 
00:22:12.866 14:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:12.866 { 00:22:12.866 "params": { 00:22:12.866 "name": "Nvme$subsystem", 00:22:12.866 "trtype": "$TEST_TRANSPORT", 00:22:12.866 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.866 "adrfam": "ipv4", 00:22:12.866 "trsvcid": "$NVMF_PORT", 00:22:12.866 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.866 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.866 "hdgst": ${hdgst:-false}, 00:22:12.866 "ddgst": ${ddgst:-false} 00:22:12.866 }, 00:22:12.866 "method": "bdev_nvme_attach_controller" 00:22:12.866 } 00:22:12.866 EOF 00:22:12.866 )") 00:22:12.866 14:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:12.866 14:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:12.866 14:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:12.866 { 00:22:12.866 "params": { 00:22:12.866 "name": "Nvme$subsystem", 00:22:12.866 "trtype": "$TEST_TRANSPORT", 00:22:12.866 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.866 "adrfam": "ipv4", 00:22:12.866 "trsvcid": "$NVMF_PORT", 00:22:12.866 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.866 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.866 "hdgst": ${hdgst:-false}, 00:22:12.866 "ddgst": ${ddgst:-false} 00:22:12.866 }, 00:22:12.866 "method": "bdev_nvme_attach_controller" 00:22:12.866 } 00:22:12.866 EOF 00:22:12.866 )") 00:22:12.866 14:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:12.866 14:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:12.866 14:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat 
<<-EOF 00:22:12.866 { 00:22:12.866 "params": { 00:22:12.866 "name": "Nvme$subsystem", 00:22:12.866 "trtype": "$TEST_TRANSPORT", 00:22:12.866 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.866 "adrfam": "ipv4", 00:22:12.866 "trsvcid": "$NVMF_PORT", 00:22:12.866 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.866 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.866 "hdgst": ${hdgst:-false}, 00:22:12.866 "ddgst": ${ddgst:-false} 00:22:12.866 }, 00:22:12.866 "method": "bdev_nvme_attach_controller" 00:22:12.866 } 00:22:12.866 EOF 00:22:12.866 )") 00:22:12.866 14:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:12.866 14:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # jq . 00:22:12.867 14:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@583 -- # IFS=, 00:22:12.867 14:35:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:22:12.867 "params": { 00:22:12.867 "name": "Nvme1", 00:22:12.867 "trtype": "tcp", 00:22:12.867 "traddr": "10.0.0.2", 00:22:12.867 "adrfam": "ipv4", 00:22:12.867 "trsvcid": "4420", 00:22:12.867 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:12.867 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:12.867 "hdgst": false, 00:22:12.867 "ddgst": false 00:22:12.867 }, 00:22:12.867 "method": "bdev_nvme_attach_controller" 00:22:12.867 },{ 00:22:12.867 "params": { 00:22:12.867 "name": "Nvme2", 00:22:12.867 "trtype": "tcp", 00:22:12.867 "traddr": "10.0.0.2", 00:22:12.867 "adrfam": "ipv4", 00:22:12.867 "trsvcid": "4420", 00:22:12.867 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:12.867 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:12.867 "hdgst": false, 00:22:12.867 "ddgst": false 00:22:12.867 }, 00:22:12.867 "method": "bdev_nvme_attach_controller" 00:22:12.867 },{ 00:22:12.867 "params": { 00:22:12.867 "name": "Nvme3", 00:22:12.867 "trtype": "tcp", 00:22:12.867 "traddr": 
"10.0.0.2", 00:22:12.867 "adrfam": "ipv4", 00:22:12.867 "trsvcid": "4420", 00:22:12.867 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:12.867 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:12.867 "hdgst": false, 00:22:12.867 "ddgst": false 00:22:12.867 }, 00:22:12.867 "method": "bdev_nvme_attach_controller" 00:22:12.867 },{ 00:22:12.867 "params": { 00:22:12.867 "name": "Nvme4", 00:22:12.867 "trtype": "tcp", 00:22:12.867 "traddr": "10.0.0.2", 00:22:12.867 "adrfam": "ipv4", 00:22:12.867 "trsvcid": "4420", 00:22:12.867 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:12.867 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:12.867 "hdgst": false, 00:22:12.867 "ddgst": false 00:22:12.867 }, 00:22:12.867 "method": "bdev_nvme_attach_controller" 00:22:12.867 },{ 00:22:12.867 "params": { 00:22:12.867 "name": "Nvme5", 00:22:12.867 "trtype": "tcp", 00:22:12.867 "traddr": "10.0.0.2", 00:22:12.867 "adrfam": "ipv4", 00:22:12.867 "trsvcid": "4420", 00:22:12.867 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:12.867 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:12.867 "hdgst": false, 00:22:12.867 "ddgst": false 00:22:12.867 }, 00:22:12.867 "method": "bdev_nvme_attach_controller" 00:22:12.867 },{ 00:22:12.867 "params": { 00:22:12.867 "name": "Nvme6", 00:22:12.867 "trtype": "tcp", 00:22:12.867 "traddr": "10.0.0.2", 00:22:12.867 "adrfam": "ipv4", 00:22:12.867 "trsvcid": "4420", 00:22:12.867 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:12.867 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:12.867 "hdgst": false, 00:22:12.867 "ddgst": false 00:22:12.867 }, 00:22:12.867 "method": "bdev_nvme_attach_controller" 00:22:12.867 },{ 00:22:12.867 "params": { 00:22:12.867 "name": "Nvme7", 00:22:12.867 "trtype": "tcp", 00:22:12.867 "traddr": "10.0.0.2", 00:22:12.867 "adrfam": "ipv4", 00:22:12.867 "trsvcid": "4420", 00:22:12.867 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:12.867 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:12.867 "hdgst": false, 00:22:12.867 "ddgst": false 00:22:12.867 }, 00:22:12.867 
"method": "bdev_nvme_attach_controller" 00:22:12.867 },{ 00:22:12.867 "params": { 00:22:12.867 "name": "Nvme8", 00:22:12.867 "trtype": "tcp", 00:22:12.867 "traddr": "10.0.0.2", 00:22:12.867 "adrfam": "ipv4", 00:22:12.867 "trsvcid": "4420", 00:22:12.867 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:12.867 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:12.867 "hdgst": false, 00:22:12.867 "ddgst": false 00:22:12.867 }, 00:22:12.867 "method": "bdev_nvme_attach_controller" 00:22:12.867 },{ 00:22:12.867 "params": { 00:22:12.867 "name": "Nvme9", 00:22:12.867 "trtype": "tcp", 00:22:12.867 "traddr": "10.0.0.2", 00:22:12.867 "adrfam": "ipv4", 00:22:12.867 "trsvcid": "4420", 00:22:12.867 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:12.867 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:12.867 "hdgst": false, 00:22:12.867 "ddgst": false 00:22:12.867 }, 00:22:12.867 "method": "bdev_nvme_attach_controller" 00:22:12.867 },{ 00:22:12.867 "params": { 00:22:12.867 "name": "Nvme10", 00:22:12.867 "trtype": "tcp", 00:22:12.867 "traddr": "10.0.0.2", 00:22:12.867 "adrfam": "ipv4", 00:22:12.867 "trsvcid": "4420", 00:22:12.867 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:12.867 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:12.867 "hdgst": false, 00:22:12.867 "ddgst": false 00:22:12.867 }, 00:22:12.867 "method": "bdev_nvme_attach_controller" 00:22:12.867 }' 00:22:12.867 [2024-10-14 14:35:53.470744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:12.867 [2024-10-14 14:35:53.507205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:14.249 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:14.249 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:22:14.249 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 
00:22:14.249 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.249 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:14.249 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.249 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 3455503 00:22:14.249 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:22:14.249 14:35:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:22:15.189 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 3455503 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:22:15.189 14:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 3455265 00:22:15.189 14:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:22:15.189 14:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:15.189 14:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config=() 00:22:15.189 14:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # local subsystem config 00:22:15.189 14:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:15.189 14:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:15.189 { 00:22:15.189 "params": { 00:22:15.189 "name": "Nvme$subsystem", 00:22:15.189 "trtype": "$TEST_TRANSPORT", 00:22:15.189 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:15.189 "adrfam": "ipv4", 00:22:15.189 "trsvcid": "$NVMF_PORT", 00:22:15.189 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:15.189 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:15.189 "hdgst": ${hdgst:-false}, 00:22:15.189 "ddgst": ${ddgst:-false} 00:22:15.189 }, 00:22:15.189 "method": "bdev_nvme_attach_controller" 00:22:15.189 } 00:22:15.189 EOF 00:22:15.189 )") 00:22:15.189 14:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:15.189 14:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:15.189 14:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:15.189 { 00:22:15.189 "params": { 00:22:15.189 "name": "Nvme$subsystem", 00:22:15.189 "trtype": "$TEST_TRANSPORT", 00:22:15.189 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:15.189 "adrfam": "ipv4", 00:22:15.189 "trsvcid": "$NVMF_PORT", 00:22:15.189 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:15.189 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:15.189 "hdgst": ${hdgst:-false}, 00:22:15.189 "ddgst": ${ddgst:-false} 00:22:15.189 }, 00:22:15.189 "method": "bdev_nvme_attach_controller" 00:22:15.189 } 00:22:15.189 EOF 00:22:15.189 )") 00:22:15.189 14:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:15.189 14:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:15.189 14:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:15.189 { 00:22:15.189 "params": { 00:22:15.189 "name": "Nvme$subsystem", 
00:22:15.189 "trtype": "$TEST_TRANSPORT", 00:22:15.189 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:15.189 "adrfam": "ipv4", 00:22:15.189 "trsvcid": "$NVMF_PORT", 00:22:15.189 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:15.189 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:15.189 "hdgst": ${hdgst:-false}, 00:22:15.189 "ddgst": ${ddgst:-false} 00:22:15.189 }, 00:22:15.189 "method": "bdev_nvme_attach_controller" 00:22:15.189 } 00:22:15.189 EOF 00:22:15.189 )") 00:22:15.189 14:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:15.189 14:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:15.189 14:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:15.189 { 00:22:15.189 "params": { 00:22:15.189 "name": "Nvme$subsystem", 00:22:15.189 "trtype": "$TEST_TRANSPORT", 00:22:15.189 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:15.189 "adrfam": "ipv4", 00:22:15.189 "trsvcid": "$NVMF_PORT", 00:22:15.189 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:15.189 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:15.189 "hdgst": ${hdgst:-false}, 00:22:15.189 "ddgst": ${ddgst:-false} 00:22:15.189 }, 00:22:15.189 "method": "bdev_nvme_attach_controller" 00:22:15.189 } 00:22:15.189 EOF 00:22:15.189 )") 00:22:15.189 14:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:15.189 14:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:15.189 14:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:15.189 { 00:22:15.189 "params": { 00:22:15.189 "name": "Nvme$subsystem", 00:22:15.189 "trtype": "$TEST_TRANSPORT", 00:22:15.189 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:15.189 "adrfam": "ipv4", 
00:22:15.189 "trsvcid": "$NVMF_PORT", 00:22:15.189 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:15.189 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:15.189 "hdgst": ${hdgst:-false}, 00:22:15.189 "ddgst": ${ddgst:-false} 00:22:15.189 }, 00:22:15.189 "method": "bdev_nvme_attach_controller" 00:22:15.189 } 00:22:15.189 EOF 00:22:15.189 )") 00:22:15.189 14:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:15.189 14:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:15.189 14:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:15.189 { 00:22:15.189 "params": { 00:22:15.189 "name": "Nvme$subsystem", 00:22:15.189 "trtype": "$TEST_TRANSPORT", 00:22:15.189 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:15.189 "adrfam": "ipv4", 00:22:15.189 "trsvcid": "$NVMF_PORT", 00:22:15.189 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:15.189 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:15.189 "hdgst": ${hdgst:-false}, 00:22:15.189 "ddgst": ${ddgst:-false} 00:22:15.189 }, 00:22:15.189 "method": "bdev_nvme_attach_controller" 00:22:15.189 } 00:22:15.189 EOF 00:22:15.189 )") 00:22:15.189 14:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:15.189 14:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:15.189 14:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:15.189 { 00:22:15.189 "params": { 00:22:15.189 "name": "Nvme$subsystem", 00:22:15.189 "trtype": "$TEST_TRANSPORT", 00:22:15.189 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:15.189 "adrfam": "ipv4", 00:22:15.190 "trsvcid": "$NVMF_PORT", 00:22:15.190 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:15.190 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:22:15.190 "hdgst": ${hdgst:-false}, 00:22:15.190 "ddgst": ${ddgst:-false} 00:22:15.190 }, 00:22:15.190 "method": "bdev_nvme_attach_controller" 00:22:15.190 } 00:22:15.190 EOF 00:22:15.190 )") 00:22:15.190 [2024-10-14 14:35:55.915403] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:22:15.190 [2024-10-14 14:35:55.915461] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3456026 ] 00:22:15.190 14:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:15.450 14:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:15.450 14:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:15.450 { 00:22:15.450 "params": { 00:22:15.450 "name": "Nvme$subsystem", 00:22:15.450 "trtype": "$TEST_TRANSPORT", 00:22:15.450 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:15.450 "adrfam": "ipv4", 00:22:15.450 "trsvcid": "$NVMF_PORT", 00:22:15.450 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:15.450 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:15.450 "hdgst": ${hdgst:-false}, 00:22:15.450 "ddgst": ${ddgst:-false} 00:22:15.450 }, 00:22:15.450 "method": "bdev_nvme_attach_controller" 00:22:15.450 } 00:22:15.450 EOF 00:22:15.450 )") 00:22:15.450 14:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:15.450 14:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:15.450 14:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:15.450 { 00:22:15.450 
"params": { 00:22:15.450 "name": "Nvme$subsystem", 00:22:15.450 "trtype": "$TEST_TRANSPORT", 00:22:15.450 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:15.450 "adrfam": "ipv4", 00:22:15.450 "trsvcid": "$NVMF_PORT", 00:22:15.450 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:15.450 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:15.450 "hdgst": ${hdgst:-false}, 00:22:15.450 "ddgst": ${ddgst:-false} 00:22:15.450 }, 00:22:15.450 "method": "bdev_nvme_attach_controller" 00:22:15.450 } 00:22:15.450 EOF 00:22:15.450 )") 00:22:15.450 14:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:15.450 14:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:15.451 14:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:15.451 { 00:22:15.451 "params": { 00:22:15.451 "name": "Nvme$subsystem", 00:22:15.451 "trtype": "$TEST_TRANSPORT", 00:22:15.451 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:15.451 "adrfam": "ipv4", 00:22:15.451 "trsvcid": "$NVMF_PORT", 00:22:15.451 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:15.451 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:15.451 "hdgst": ${hdgst:-false}, 00:22:15.451 "ddgst": ${ddgst:-false} 00:22:15.451 }, 00:22:15.451 "method": "bdev_nvme_attach_controller" 00:22:15.451 } 00:22:15.451 EOF 00:22:15.451 )") 00:22:15.451 14:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:22:15.451 14:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # jq . 
00:22:15.451 14:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@583 -- # IFS=, 00:22:15.451 14:35:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:22:15.451 "params": { 00:22:15.451 "name": "Nvme1", 00:22:15.451 "trtype": "tcp", 00:22:15.451 "traddr": "10.0.0.2", 00:22:15.451 "adrfam": "ipv4", 00:22:15.451 "trsvcid": "4420", 00:22:15.451 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:15.451 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:15.451 "hdgst": false, 00:22:15.451 "ddgst": false 00:22:15.451 }, 00:22:15.451 "method": "bdev_nvme_attach_controller" 00:22:15.451 },{ 00:22:15.451 "params": { 00:22:15.451 "name": "Nvme2", 00:22:15.451 "trtype": "tcp", 00:22:15.451 "traddr": "10.0.0.2", 00:22:15.451 "adrfam": "ipv4", 00:22:15.451 "trsvcid": "4420", 00:22:15.451 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:15.451 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:15.451 "hdgst": false, 00:22:15.451 "ddgst": false 00:22:15.451 }, 00:22:15.451 "method": "bdev_nvme_attach_controller" 00:22:15.451 },{ 00:22:15.451 "params": { 00:22:15.451 "name": "Nvme3", 00:22:15.451 "trtype": "tcp", 00:22:15.451 "traddr": "10.0.0.2", 00:22:15.451 "adrfam": "ipv4", 00:22:15.451 "trsvcid": "4420", 00:22:15.451 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:15.451 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:15.451 "hdgst": false, 00:22:15.451 "ddgst": false 00:22:15.451 }, 00:22:15.451 "method": "bdev_nvme_attach_controller" 00:22:15.451 },{ 00:22:15.451 "params": { 00:22:15.451 "name": "Nvme4", 00:22:15.451 "trtype": "tcp", 00:22:15.451 "traddr": "10.0.0.2", 00:22:15.451 "adrfam": "ipv4", 00:22:15.451 "trsvcid": "4420", 00:22:15.451 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:15.451 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:15.451 "hdgst": false, 00:22:15.451 "ddgst": false 00:22:15.451 }, 00:22:15.451 "method": "bdev_nvme_attach_controller" 00:22:15.451 },{ 00:22:15.451 "params": { 
00:22:15.451 "name": "Nvme5", 00:22:15.451 "trtype": "tcp", 00:22:15.451 "traddr": "10.0.0.2", 00:22:15.451 "adrfam": "ipv4", 00:22:15.451 "trsvcid": "4420", 00:22:15.451 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:15.451 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:15.451 "hdgst": false, 00:22:15.451 "ddgst": false 00:22:15.451 }, 00:22:15.451 "method": "bdev_nvme_attach_controller" 00:22:15.451 },{ 00:22:15.451 "params": { 00:22:15.451 "name": "Nvme6", 00:22:15.451 "trtype": "tcp", 00:22:15.451 "traddr": "10.0.0.2", 00:22:15.451 "adrfam": "ipv4", 00:22:15.451 "trsvcid": "4420", 00:22:15.451 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:15.451 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:15.451 "hdgst": false, 00:22:15.451 "ddgst": false 00:22:15.451 }, 00:22:15.451 "method": "bdev_nvme_attach_controller" 00:22:15.451 },{ 00:22:15.451 "params": { 00:22:15.451 "name": "Nvme7", 00:22:15.451 "trtype": "tcp", 00:22:15.451 "traddr": "10.0.0.2", 00:22:15.451 "adrfam": "ipv4", 00:22:15.451 "trsvcid": "4420", 00:22:15.451 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:15.451 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:15.451 "hdgst": false, 00:22:15.451 "ddgst": false 00:22:15.451 }, 00:22:15.451 "method": "bdev_nvme_attach_controller" 00:22:15.451 },{ 00:22:15.451 "params": { 00:22:15.451 "name": "Nvme8", 00:22:15.451 "trtype": "tcp", 00:22:15.451 "traddr": "10.0.0.2", 00:22:15.451 "adrfam": "ipv4", 00:22:15.451 "trsvcid": "4420", 00:22:15.451 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:15.451 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:15.451 "hdgst": false, 00:22:15.451 "ddgst": false 00:22:15.451 }, 00:22:15.451 "method": "bdev_nvme_attach_controller" 00:22:15.451 },{ 00:22:15.451 "params": { 00:22:15.451 "name": "Nvme9", 00:22:15.451 "trtype": "tcp", 00:22:15.451 "traddr": "10.0.0.2", 00:22:15.451 "adrfam": "ipv4", 00:22:15.451 "trsvcid": "4420", 00:22:15.451 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:15.451 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:22:15.451 "hdgst": false, 00:22:15.451 "ddgst": false 00:22:15.451 }, 00:22:15.451 "method": "bdev_nvme_attach_controller" 00:22:15.451 },{ 00:22:15.451 "params": { 00:22:15.451 "name": "Nvme10", 00:22:15.451 "trtype": "tcp", 00:22:15.451 "traddr": "10.0.0.2", 00:22:15.451 "adrfam": "ipv4", 00:22:15.451 "trsvcid": "4420", 00:22:15.451 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:15.451 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:15.451 "hdgst": false, 00:22:15.451 "ddgst": false 00:22:15.451 }, 00:22:15.451 "method": "bdev_nvme_attach_controller" 00:22:15.451 }' 00:22:15.451 [2024-10-14 14:35:55.978585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:15.451 [2024-10-14 14:35:56.014563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:16.835 Running I/O for 1 seconds... 00:22:18.035 1870.00 IOPS, 116.88 MiB/s 00:22:18.035 Latency(us) 00:22:18.035 [2024-10-14T12:35:58.762Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:18.035 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:18.035 Verification LBA range: start 0x0 length 0x400 00:22:18.035 Nvme1n1 : 1.13 227.13 14.20 0.00 0.00 278552.11 21080.75 249910.61 00:22:18.035 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:18.035 Verification LBA range: start 0x0 length 0x400 00:22:18.035 Nvme2n1 : 1.05 242.85 15.18 0.00 0.00 256028.16 28835.84 235929.60 00:22:18.035 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:18.035 Verification LBA range: start 0x0 length 0x400 00:22:18.035 Nvme3n1 : 1.05 244.58 15.29 0.00 0.00 249391.15 19005.44 255153.49 00:22:18.035 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:18.035 Verification LBA range: start 0x0 length 0x400 00:22:18.035 Nvme4n1 : 1.12 228.46 14.28 0.00 0.00 263162.67 16165.55 253405.87 00:22:18.035 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:22:18.035 Verification LBA range: start 0x0 length 0x400 00:22:18.035 Nvme5n1 : 1.11 235.04 14.69 0.00 0.00 244881.27 7045.12 242920.11 00:22:18.035 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:18.035 Verification LBA range: start 0x0 length 0x400 00:22:18.035 Nvme6n1 : 1.13 230.31 14.39 0.00 0.00 250614.81 6471.68 253405.87 00:22:18.035 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:18.035 Verification LBA range: start 0x0 length 0x400 00:22:18.035 Nvme7n1 : 1.17 274.36 17.15 0.00 0.00 208304.38 13544.11 255153.49 00:22:18.035 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:18.035 Verification LBA range: start 0x0 length 0x400 00:22:18.035 Nvme8n1 : 1.16 279.58 17.47 0.00 0.00 199765.97 4041.39 223696.21 00:22:18.035 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:18.035 Verification LBA range: start 0x0 length 0x400 00:22:18.035 Nvme9n1 : 1.16 221.34 13.83 0.00 0.00 248448.64 20971.52 274377.39 00:22:18.035 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:18.035 Verification LBA range: start 0x0 length 0x400 00:22:18.035 Nvme10n1 : 1.18 275.45 17.22 0.00 0.00 196218.00 1215.15 242920.11 00:22:18.035 [2024-10-14T12:35:58.762Z] =================================================================================================================== 00:22:18.035 [2024-10-14T12:35:58.762Z] Total : 2459.11 153.69 0.00 0.00 236800.02 1215.15 274377.39 00:22:18.035 14:35:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:22:18.035 14:35:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:18.035 14:35:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 
00:22:18.035 14:35:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:18.035 14:35:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:18.035 14:35:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:18.035 14:35:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:22:18.035 14:35:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:18.035 14:35:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:22:18.035 14:35:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:18.035 14:35:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:18.035 rmmod nvme_tcp 00:22:18.035 rmmod nvme_fabrics 00:22:18.296 rmmod nvme_keyring 00:22:18.296 14:35:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:18.296 14:35:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:22:18.296 14:35:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:22:18.296 14:35:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@515 -- # '[' -n 3455265 ']' 00:22:18.296 14:35:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # killprocess 3455265 00:22:18.296 14:35:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 3455265 ']' 00:22:18.296 14:35:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@954 -- # kill -0 3455265 00:22:18.296 14:35:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:22:18.296 14:35:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:18.296 14:35:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3455265 00:22:18.296 14:35:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:18.296 14:35:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:18.296 14:35:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3455265' 00:22:18.296 killing process with pid 3455265 00:22:18.296 14:35:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 3455265 00:22:18.296 14:35:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 3455265 00:22:18.557 14:35:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:18.557 14:35:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:18.557 14:35:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:18.557 14:35:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:22:18.557 14:35:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # iptables-restore 00:22:18.557 14:35:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # iptables-save 00:22:18.557 14:35:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:18.557 14:35:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:18.557 14:35:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:18.557 14:35:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:18.557 14:35:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:18.557 14:35:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:20.471 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:20.471 00:22:20.471 real 0m16.999s 00:22:20.471 user 0m34.016s 00:22:20.471 sys 0m6.940s 00:22:20.471 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:20.471 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:20.471 ************************************ 00:22:20.471 END TEST nvmf_shutdown_tc1 00:22:20.471 ************************************ 00:22:20.733 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:22:20.733 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:20.733 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:20.733 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:20.733 ************************************ 
00:22:20.733 START TEST nvmf_shutdown_tc2 00:22:20.733 ************************************ 00:22:20.733 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:22:20.733 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:22:20.733 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:20.733 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:20.733 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:20.733 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:20.733 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:20.733 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:20.733 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:20.733 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:20.733 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:20.733 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:20.733 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:20.733 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:20.733 14:36:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:20.733 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:20.733 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:20.733 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:20.733 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:20.733 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:20.733 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:20.733 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:20.733 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:22:20.733 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:20.733 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:22:20.733 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:22:20.733 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:22:20.733 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:22:20.733 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:22:20.733 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:20.733 14:36:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:20.733 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:20.733 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:20.733 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:20.733 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:20.733 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:20.733 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:20.733 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:20.733 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:20.733 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:20.733 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:20.733 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:20.733 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:20.733 14:36:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:20.733 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:20.733 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:20.733 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:20.733 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:20.733 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:20.733 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:20.733 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:20.733 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:20.733 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:20.733 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:20.733 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:20.733 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:20.733 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:20.733 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:20.733 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:20.733 14:36:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:20.733 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:20.733 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:20.733 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:20.733 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:20.733 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:20.733 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:20.733 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:20.733 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:20.733 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:20.733 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:20.733 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:20.733 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:20.733 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:20.733 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:20.733 14:36:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:20.733 Found net devices under 0000:31:00.0: cvl_0_0 00:22:20.733 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:20.734 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:20.734 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:20.734 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:20.734 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:20.734 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:20.734 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:20.734 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:20.734 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:20.734 Found net devices under 0000:31:00.1: cvl_0_1 00:22:20.734 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:20.734 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:20.734 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # is_hw=yes 00:22:20.734 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- 
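The `pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)` step traced above resolves each PCI function to its kernel network interface by globbing sysfs, then strips the directory prefix with `${pci_net_devs[@]##*/}`. A self-contained sketch of that discovery, using a throwaway directory in place of the real sysfs tree (interface names mirror the log):

```shell
# Mock sysfs layout: /sys/bus/pci/devices/<addr>/net/<ifname>
sysfs=$(mktemp -d)
mkdir -p "$sysfs/0000:31:00.0/net/cvl_0_0" "$sysfs/0000:31:00.1/net/cvl_0_1"

net_devs=()
for pci in 0000:31:00.0 0000:31:00.1; do
    pci_net_devs=("$sysfs/$pci/net/"*)        # full paths under net/
    pci_net_devs=("${pci_net_devs[@]##*/}")   # keep only the interface names
    net_devs+=("${pci_net_devs[@]}")
done

echo "Found net devices: ${net_devs[*]}"
rm -rf "$sysfs"
```

`##*/` is the longest-prefix strip, so each `/.../net/cvl_0_0` path collapses to just `cvl_0_0`, matching the "Found net devices under 0000:31:00.0: cvl_0_0" lines in the log.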
# [[ yes == yes ]] 00:22:20.734 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:20.734 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:20.734 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:20.734 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:20.734 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:20.734 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:20.734 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:20.734 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:20.734 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:20.734 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:20.734 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:20.734 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:20.734 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:20.734 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:20.734 14:36:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:20.734 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:20.734 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:20.734 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:20.734 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:20.996 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:20.996 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:20.996 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:20.996 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:20.996 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:20.996 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:20.996 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:20.996 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.647 ms 00:22:20.996 00:22:20.996 --- 10.0.0.2 ping statistics --- 00:22:20.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:20.996 rtt min/avg/max/mdev = 0.647/0.647/0.647/0.000 ms 00:22:20.996 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:20.996 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:20.996 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:22:20.996 00:22:20.996 --- 10.0.0.1 ping statistics --- 00:22:20.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:20.996 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:22:20.996 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:20.996 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # return 0 00:22:20.996 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:20.996 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:20.996 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:20.996 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:20.996 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:20.996 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:20.996 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:20.996 14:36:01 
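The `nvmf_tcp_init` sequence above builds a two-interface loopback topology: the target NIC is moved into a fresh network namespace, each side gets a 10.0.0.x/24 address, an iptables rule opens TCP port 4420, and pings in both directions verify reachability. Since replaying those commands needs root and the real NICs, this sketch only prints the equivalent command sequence rather than executing it (interface and namespace names taken from the log):

```shell
# Emit (do not run) the namespace topology commands traced above.
print_netns_setup() {
    local tgt_if=$1 ini_if=$2 ns=$3
    echo "ip netns add $ns"
    echo "ip link set $tgt_if netns $ns"
    echo "ip addr add 10.0.0.1/24 dev $ini_if"
    echo "ip netns exec $ns ip addr add 10.0.0.2/24 dev $tgt_if"
    echo "ip link set $ini_if up"
    echo "ip netns exec $ns ip link set $tgt_if up"
    echo "ip netns exec $ns ip link set lo up"
    echo "iptables -I INPUT 1 -i $ini_if -p tcp --dport 4420 -j ACCEPT"
}

print_netns_setup cvl_0_0 cvl_0_1 cvl_0_0_ns_spdk
```

Isolating the target interface in its own namespace is what allows a single host to exercise a real TCP path between initiator (10.0.0.1) and target (10.0.0.2) instead of short-circuiting over `lo`.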
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:20.996 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:20.996 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:20.996 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:20.996 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # nvmfpid=3457238 00:22:20.996 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # waitforlisten 3457238 00:22:20.996 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:20.996 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 3457238 ']' 00:22:20.996 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:20.996 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:20.996 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:20.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:20.996 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:20.996 14:36:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:20.996 [2024-10-14 14:36:01.720345] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:22:20.996 [2024-10-14 14:36:01.720412] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:21.257 [2024-10-14 14:36:01.808796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:21.257 [2024-10-14 14:36:01.845251] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:21.257 [2024-10-14 14:36:01.845284] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:21.257 [2024-10-14 14:36:01.845290] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:21.257 [2024-10-14 14:36:01.845294] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:21.257 [2024-10-14 14:36:01.845299] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:21.257 [2024-10-14 14:36:01.846646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:21.257 [2024-10-14 14:36:01.846806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:21.257 [2024-10-14 14:36:01.846965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:21.257 [2024-10-14 14:36:01.846967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:21.829 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:21.829 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:22:21.829 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:21.829 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:21.829 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:21.829 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:21.829 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:21.829 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.829 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:21.829 [2024-10-14 14:36:02.558095] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:22.089 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.089 14:36:02 
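`waitforlisten` above blocks until the freshly started `nvmf_tgt` answers on `/var/tmp/spdk.sock`, retrying up to `max_retries` times. A generic sketch of that poll-with-budget pattern, where a backgrounded `touch` stands in for the daemon creating its socket (names hypothetical, not the SPDK helper itself):

```shell
# Poll for a path to appear, giving up after a retry budget.
wait_for_path() {
    local path=$1 max_retries=${2:-100} i=0
    while [ ! -e "$path" ]; do
        i=$((i + 1))
        [ "$i" -ge "$max_retries" ] && return 1
        sleep 0.05
    done
    return 0
}

probe=$(mktemp -u)              # a path that does not exist yet
( sleep 0.2; touch "$probe" ) & # stand-in for the daemon coming up
wait_for_path "$probe" && echo "listening"
rm -f "$probe"
```

The retry budget matters in CI: a crashed target makes the test fail quickly with a clear "never listened" signal instead of hanging the whole pipeline.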
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:22.089 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:22.089 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:22.089 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:22.089 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:22.089 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:22.089 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:22.089 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:22.089 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:22.089 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:22.089 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:22.089 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:22.089 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:22.089 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:22.089 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:22:22.089 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:22.089 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:22.089 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:22.089 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:22.089 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:22.089 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:22.089 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:22.089 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:22.089 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:22.089 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:22.089 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:22.089 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.089 14:36:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:22.089 Malloc1 00:22:22.089 [2024-10-14 14:36:02.676351] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:22.089 Malloc2 00:22:22.089 Malloc3 00:22:22.089 Malloc4 00:22:22.089 Malloc5 00:22:22.349 Malloc6 00:22:22.349 Malloc7 00:22:22.349 Malloc8 00:22:22.349 Malloc9 
00:22:22.349 Malloc10 00:22:22.349 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.349 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:22.349 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:22.349 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:22.349 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=3457632 00:22:22.349 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 3457632 /var/tmp/bdevperf.sock 00:22:22.349 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 3457632 ']' 00:22:22.349 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:22.349 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:22.349 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:22.349 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:22.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
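The `gen_nvmf_target_json` output that follows is produced by appending one here-document fragment per subsystem to a `config` array (with shell variables expanded into the JSON), joining the fragments with commas, and handing the result to bdevperf over a file descriptor (`--json /dev/fd/63`). A simplified, self-contained sketch of that pattern, with plain `cat` standing in for bdevperf and illustrative values:

```shell
TEST_TRANSPORT=tcp NVMF_FIRST_TARGET_IP=10.0.0.2 NVMF_PORT=4420

config=()
for subsystem in 1 2; do
    # Unquoted EOF delimiter, so $subsystem etc. expand inside the fragment.
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem"
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done

# Join the fragments with commas into one JSON array.
json="[$(IFS=,; echo "${config[*]}")]"

# Hand the document to a consumer over a file descriptor via process
# substitution, as bdevperf receives it through --json /dev/fd/63.
cat <(printf '%s\n' "$json") | grep -c '"method"'   # prints 2
```

Generating the config this way means the bdevperf side never touches a temp file: the JSON exists only on the pipe between the generator and the consumer.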
00:22:22.349 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:22.349 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:22.349 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:22.350 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # config=() 00:22:22.611 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # local subsystem config 00:22:22.611 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:22.611 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:22.611 { 00:22:22.611 "params": { 00:22:22.611 "name": "Nvme$subsystem", 00:22:22.611 "trtype": "$TEST_TRANSPORT", 00:22:22.611 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:22.611 "adrfam": "ipv4", 00:22:22.611 "trsvcid": "$NVMF_PORT", 00:22:22.611 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:22.611 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:22.611 "hdgst": ${hdgst:-false}, 00:22:22.611 "ddgst": ${ddgst:-false} 00:22:22.611 }, 00:22:22.611 "method": "bdev_nvme_attach_controller" 00:22:22.611 } 00:22:22.611 EOF 00:22:22.611 )") 00:22:22.611 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:22.611 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:22.611 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:22.611 { 00:22:22.611 "params": { 00:22:22.611 "name": "Nvme$subsystem", 00:22:22.611 "trtype": "$TEST_TRANSPORT", 00:22:22.611 
"traddr": "$NVMF_FIRST_TARGET_IP", 00:22:22.611 "adrfam": "ipv4", 00:22:22.611 "trsvcid": "$NVMF_PORT", 00:22:22.611 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:22.611 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:22.611 "hdgst": ${hdgst:-false}, 00:22:22.611 "ddgst": ${ddgst:-false} 00:22:22.612 }, 00:22:22.612 "method": "bdev_nvme_attach_controller" 00:22:22.612 } 00:22:22.612 EOF 00:22:22.612 )") 00:22:22.612 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:22.612 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:22.612 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:22.612 { 00:22:22.612 "params": { 00:22:22.612 "name": "Nvme$subsystem", 00:22:22.612 "trtype": "$TEST_TRANSPORT", 00:22:22.612 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:22.612 "adrfam": "ipv4", 00:22:22.612 "trsvcid": "$NVMF_PORT", 00:22:22.612 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:22.612 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:22.612 "hdgst": ${hdgst:-false}, 00:22:22.612 "ddgst": ${ddgst:-false} 00:22:22.612 }, 00:22:22.612 "method": "bdev_nvme_attach_controller" 00:22:22.612 } 00:22:22.612 EOF 00:22:22.612 )") 00:22:22.612 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:22.612 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:22.612 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:22.612 { 00:22:22.612 "params": { 00:22:22.612 "name": "Nvme$subsystem", 00:22:22.612 "trtype": "$TEST_TRANSPORT", 00:22:22.612 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:22.612 "adrfam": "ipv4", 00:22:22.612 "trsvcid": "$NVMF_PORT", 00:22:22.612 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:22:22.612 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:22.612 "hdgst": ${hdgst:-false}, 00:22:22.612 "ddgst": ${ddgst:-false} 00:22:22.612 }, 00:22:22.612 "method": "bdev_nvme_attach_controller" 00:22:22.612 } 00:22:22.612 EOF 00:22:22.612 )") 00:22:22.612 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:22.612 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:22.612 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:22.612 { 00:22:22.612 "params": { 00:22:22.612 "name": "Nvme$subsystem", 00:22:22.612 "trtype": "$TEST_TRANSPORT", 00:22:22.612 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:22.612 "adrfam": "ipv4", 00:22:22.612 "trsvcid": "$NVMF_PORT", 00:22:22.612 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:22.612 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:22.612 "hdgst": ${hdgst:-false}, 00:22:22.612 "ddgst": ${ddgst:-false} 00:22:22.612 }, 00:22:22.612 "method": "bdev_nvme_attach_controller" 00:22:22.612 } 00:22:22.612 EOF 00:22:22.612 )") 00:22:22.612 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:22.612 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:22.612 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:22.612 { 00:22:22.612 "params": { 00:22:22.612 "name": "Nvme$subsystem", 00:22:22.612 "trtype": "$TEST_TRANSPORT", 00:22:22.612 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:22.612 "adrfam": "ipv4", 00:22:22.612 "trsvcid": "$NVMF_PORT", 00:22:22.612 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:22.612 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:22.612 "hdgst": 
${hdgst:-false}, 00:22:22.612 "ddgst": ${ddgst:-false} 00:22:22.612 }, 00:22:22.612 "method": "bdev_nvme_attach_controller" 00:22:22.612 } 00:22:22.612 EOF 00:22:22.612 )") 00:22:22.612 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:22.612 [2024-10-14 14:36:03.124315] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:22:22.612 [2024-10-14 14:36:03.124370] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3457632 ] 00:22:22.612 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:22.612 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:22.612 { 00:22:22.612 "params": { 00:22:22.612 "name": "Nvme$subsystem", 00:22:22.612 "trtype": "$TEST_TRANSPORT", 00:22:22.612 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:22.612 "adrfam": "ipv4", 00:22:22.612 "trsvcid": "$NVMF_PORT", 00:22:22.612 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:22.612 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:22.612 "hdgst": ${hdgst:-false}, 00:22:22.612 "ddgst": ${ddgst:-false} 00:22:22.612 }, 00:22:22.612 "method": "bdev_nvme_attach_controller" 00:22:22.612 } 00:22:22.612 EOF 00:22:22.612 )") 00:22:22.612 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:22.612 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:22.612 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:22.612 { 00:22:22.612 "params": { 00:22:22.612 "name": "Nvme$subsystem", 00:22:22.612 
"trtype": "$TEST_TRANSPORT", 00:22:22.612 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:22.612 "adrfam": "ipv4", 00:22:22.612 "trsvcid": "$NVMF_PORT", 00:22:22.612 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:22.612 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:22.612 "hdgst": ${hdgst:-false}, 00:22:22.612 "ddgst": ${ddgst:-false} 00:22:22.612 }, 00:22:22.612 "method": "bdev_nvme_attach_controller" 00:22:22.612 } 00:22:22.612 EOF 00:22:22.612 )") 00:22:22.612 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:22.612 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:22.612 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:22.612 { 00:22:22.612 "params": { 00:22:22.612 "name": "Nvme$subsystem", 00:22:22.612 "trtype": "$TEST_TRANSPORT", 00:22:22.612 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:22.612 "adrfam": "ipv4", 00:22:22.612 "trsvcid": "$NVMF_PORT", 00:22:22.612 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:22.612 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:22.612 "hdgst": ${hdgst:-false}, 00:22:22.612 "ddgst": ${ddgst:-false} 00:22:22.612 }, 00:22:22.612 "method": "bdev_nvme_attach_controller" 00:22:22.612 } 00:22:22.612 EOF 00:22:22.612 )") 00:22:22.612 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:22.612 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:22.612 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:22.612 { 00:22:22.612 "params": { 00:22:22.612 "name": "Nvme$subsystem", 00:22:22.612 "trtype": "$TEST_TRANSPORT", 00:22:22.612 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:22.612 "adrfam": "ipv4", 00:22:22.612 
"trsvcid": "$NVMF_PORT", 00:22:22.612 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:22.612 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:22.612 "hdgst": ${hdgst:-false}, 00:22:22.612 "ddgst": ${ddgst:-false} 00:22:22.612 }, 00:22:22.612 "method": "bdev_nvme_attach_controller" 00:22:22.612 } 00:22:22.612 EOF 00:22:22.612 )") 00:22:22.612 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:22:22.612 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # jq . 00:22:22.612 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@583 -- # IFS=, 00:22:22.612 14:36:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:22:22.612 "params": { 00:22:22.612 "name": "Nvme1", 00:22:22.612 "trtype": "tcp", 00:22:22.612 "traddr": "10.0.0.2", 00:22:22.612 "adrfam": "ipv4", 00:22:22.612 "trsvcid": "4420", 00:22:22.612 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:22.612 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:22.612 "hdgst": false, 00:22:22.612 "ddgst": false 00:22:22.612 }, 00:22:22.612 "method": "bdev_nvme_attach_controller" 00:22:22.612 },{ 00:22:22.612 "params": { 00:22:22.612 "name": "Nvme2", 00:22:22.612 "trtype": "tcp", 00:22:22.612 "traddr": "10.0.0.2", 00:22:22.612 "adrfam": "ipv4", 00:22:22.612 "trsvcid": "4420", 00:22:22.612 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:22.612 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:22.612 "hdgst": false, 00:22:22.612 "ddgst": false 00:22:22.612 }, 00:22:22.612 "method": "bdev_nvme_attach_controller" 00:22:22.612 },{ 00:22:22.612 "params": { 00:22:22.612 "name": "Nvme3", 00:22:22.612 "trtype": "tcp", 00:22:22.612 "traddr": "10.0.0.2", 00:22:22.612 "adrfam": "ipv4", 00:22:22.612 "trsvcid": "4420", 00:22:22.612 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:22.612 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:22.612 "hdgst": false, 
00:22:22.612 "ddgst": false 00:22:22.612 }, 00:22:22.612 "method": "bdev_nvme_attach_controller" 00:22:22.612 },{ 00:22:22.612 "params": { 00:22:22.612 "name": "Nvme4", 00:22:22.612 "trtype": "tcp", 00:22:22.612 "traddr": "10.0.0.2", 00:22:22.612 "adrfam": "ipv4", 00:22:22.612 "trsvcid": "4420", 00:22:22.612 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:22.612 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:22.612 "hdgst": false, 00:22:22.612 "ddgst": false 00:22:22.612 }, 00:22:22.612 "method": "bdev_nvme_attach_controller" 00:22:22.612 },{ 00:22:22.612 "params": { 00:22:22.612 "name": "Nvme5", 00:22:22.612 "trtype": "tcp", 00:22:22.612 "traddr": "10.0.0.2", 00:22:22.612 "adrfam": "ipv4", 00:22:22.612 "trsvcid": "4420", 00:22:22.612 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:22.612 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:22.612 "hdgst": false, 00:22:22.612 "ddgst": false 00:22:22.613 }, 00:22:22.613 "method": "bdev_nvme_attach_controller" 00:22:22.613 },{ 00:22:22.613 "params": { 00:22:22.613 "name": "Nvme6", 00:22:22.613 "trtype": "tcp", 00:22:22.613 "traddr": "10.0.0.2", 00:22:22.613 "adrfam": "ipv4", 00:22:22.613 "trsvcid": "4420", 00:22:22.613 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:22.613 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:22.613 "hdgst": false, 00:22:22.613 "ddgst": false 00:22:22.613 }, 00:22:22.613 "method": "bdev_nvme_attach_controller" 00:22:22.613 },{ 00:22:22.613 "params": { 00:22:22.613 "name": "Nvme7", 00:22:22.613 "trtype": "tcp", 00:22:22.613 "traddr": "10.0.0.2", 00:22:22.613 "adrfam": "ipv4", 00:22:22.613 "trsvcid": "4420", 00:22:22.613 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:22.613 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:22.613 "hdgst": false, 00:22:22.613 "ddgst": false 00:22:22.613 }, 00:22:22.613 "method": "bdev_nvme_attach_controller" 00:22:22.613 },{ 00:22:22.613 "params": { 00:22:22.613 "name": "Nvme8", 00:22:22.613 "trtype": "tcp", 00:22:22.613 "traddr": "10.0.0.2", 00:22:22.613 "adrfam": "ipv4", 
00:22:22.613 "trsvcid": "4420", 00:22:22.613 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:22.613 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:22.613 "hdgst": false, 00:22:22.613 "ddgst": false 00:22:22.613 }, 00:22:22.613 "method": "bdev_nvme_attach_controller" 00:22:22.613 },{ 00:22:22.613 "params": { 00:22:22.613 "name": "Nvme9", 00:22:22.613 "trtype": "tcp", 00:22:22.613 "traddr": "10.0.0.2", 00:22:22.613 "adrfam": "ipv4", 00:22:22.613 "trsvcid": "4420", 00:22:22.613 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:22.613 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:22.613 "hdgst": false, 00:22:22.613 "ddgst": false 00:22:22.613 }, 00:22:22.613 "method": "bdev_nvme_attach_controller" 00:22:22.613 },{ 00:22:22.613 "params": { 00:22:22.613 "name": "Nvme10", 00:22:22.613 "trtype": "tcp", 00:22:22.613 "traddr": "10.0.0.2", 00:22:22.613 "adrfam": "ipv4", 00:22:22.613 "trsvcid": "4420", 00:22:22.613 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:22.613 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:22.613 "hdgst": false, 00:22:22.613 "ddgst": false 00:22:22.613 }, 00:22:22.613 "method": "bdev_nvme_attach_controller" 00:22:22.613 }' 00:22:22.613 [2024-10-14 14:36:03.186612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:22.613 [2024-10-14 14:36:03.222872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:24.528 Running I/O for 10 seconds... 
00:22:24.528 14:36:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:24.528 14:36:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:22:24.528 14:36:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:24.528 14:36:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.528 14:36:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:24.528 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.528 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:24.528 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:24.528 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:24.528 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:22:24.528 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:22:24.528 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:24.528 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:24.528 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:24.528 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:24.528 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.528 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:24.528 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.528 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:22:24.528 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:22:24.528 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:24.789 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:24.789 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:24.789 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:24.789 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:24.789 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.789 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:24.789 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.789 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:22:24.789 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:22:24.789 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:25.050 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:25.050 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:25.050 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:25.050 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:25.050 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.050 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:25.050 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.050 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:22:25.050 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:22:25.050 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:22:25.050 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:22:25.050 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:22:25.050 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 3457632 00:22:25.050 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 3457632 
']' 00:22:25.050 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 3457632 00:22:25.050 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:22:25.050 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:25.050 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3457632 00:22:25.050 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:25.050 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:25.050 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3457632' 00:22:25.050 killing process with pid 3457632 00:22:25.050 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 3457632 00:22:25.050 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 3457632 00:22:25.311 Received shutdown signal, test time was about 0.985981 seconds 00:22:25.311 00:22:25.311 Latency(us) 00:22:25.311 [2024-10-14T12:36:06.038Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:25.311 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:25.311 Verification LBA range: start 0x0 length 0x400 00:22:25.311 Nvme1n1 : 0.96 200.96 12.56 0.00 0.00 314843.59 18131.63 251658.24 00:22:25.311 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:25.311 Verification LBA range: start 0x0 length 0x400 00:22:25.311 Nvme2n1 : 0.98 261.74 16.36 0.00 0.00 236546.56 17039.36 249910.61 
00:22:25.311 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:25.311 Verification LBA range: start 0x0 length 0x400 00:22:25.311 Nvme3n1 : 0.97 265.02 16.56 0.00 0.00 228901.33 11086.51 255153.49 00:22:25.311 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:25.311 Verification LBA range: start 0x0 length 0x400 00:22:25.311 Nvme4n1 : 0.99 258.86 16.18 0.00 0.00 229628.93 12943.36 249910.61 00:22:25.311 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:25.311 Verification LBA range: start 0x0 length 0x400 00:22:25.311 Nvme5n1 : 0.96 199.64 12.48 0.00 0.00 290907.31 21189.97 258648.75 00:22:25.311 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:25.311 Verification LBA range: start 0x0 length 0x400 00:22:25.311 Nvme6n1 : 0.98 260.70 16.29 0.00 0.00 218379.31 19333.12 249910.61 00:22:25.311 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:25.311 Verification LBA range: start 0x0 length 0x400 00:22:25.311 Nvme7n1 : 0.97 262.92 16.43 0.00 0.00 211452.37 17367.04 255153.49 00:22:25.311 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:25.311 Verification LBA range: start 0x0 length 0x400 00:22:25.311 Nvme8n1 : 0.95 201.92 12.62 0.00 0.00 267935.57 21189.97 253405.87 00:22:25.311 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:25.311 Verification LBA range: start 0x0 length 0x400 00:22:25.311 Nvme9n1 : 0.98 262.00 16.38 0.00 0.00 202325.55 24466.77 217579.52 00:22:25.311 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:25.311 Verification LBA range: start 0x0 length 0x400 00:22:25.311 Nvme10n1 : 0.97 198.16 12.39 0.00 0.00 261213.30 20097.71 272629.76 00:22:25.311 [2024-10-14T12:36:06.038Z] =================================================================================================================== 00:22:25.311 
[2024-10-14T12:36:06.038Z] Total : 2371.93 148.25 0.00 0.00 242050.82 11086.51 272629.76 00:22:25.311 14:36:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:22:26.254 14:36:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 3457238 00:22:26.254 14:36:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:22:26.254 14:36:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:26.254 14:36:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:26.254 14:36:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:26.254 14:36:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:26.254 14:36:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:26.254 14:36:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:22:26.254 14:36:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:26.254 14:36:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:22:26.254 14:36:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:26.254 14:36:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:26.254 rmmod nvme_tcp 00:22:26.515 rmmod nvme_fabrics 00:22:26.515 rmmod nvme_keyring 00:22:26.515 14:36:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:26.515 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:22:26.515 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:22:26.515 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@515 -- # '[' -n 3457238 ']' 00:22:26.515 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # killprocess 3457238 00:22:26.515 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 3457238 ']' 00:22:26.515 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 3457238 00:22:26.515 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:22:26.515 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:26.515 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3457238 00:22:26.515 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:26.515 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:26.515 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3457238' 00:22:26.515 killing process with pid 3457238 00:22:26.515 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 3457238 00:22:26.515 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@974 -- # wait 3457238 00:22:26.777 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:26.777 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:26.777 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:26.777 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:22:26.777 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # iptables-save 00:22:26.777 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:26.777 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # iptables-restore 00:22:26.777 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:26.777 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:26.777 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:26.777 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:26.777 14:36:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:28.692 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:28.692 00:22:28.692 real 0m8.154s 00:22:28.692 user 0m25.009s 00:22:28.692 sys 0m1.284s 00:22:28.692 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:22:28.692 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:28.692 ************************************ 00:22:28.692 END TEST nvmf_shutdown_tc2 00:22:28.692 ************************************ 00:22:28.954 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:22:28.954 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:28.954 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:28.954 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:28.954 ************************************ 00:22:28.954 START TEST nvmf_shutdown_tc3 00:22:28.954 ************************************ 00:22:28.954 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:22:28.954 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:22:28.954 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:28.954 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:28.954 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:28.954 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:28.954 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:28.954 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:28.954 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 
-- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:28.954 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:28.954 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:28.954 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:28.954 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:28.954 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:28.954 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:28.954 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:28.954 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:28.954 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:28.954 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:28.954 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:28.954 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:28.954 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:28.954 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:22:28.954 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 
-- # local -ga net_devs 00:22:28.954 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:22:28.954 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:22:28.954 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:22:28.954 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:22:28.954 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:22:28.954 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:28.954 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:28.954 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:28.954 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:28.954 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:28.954 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:28.954 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:28.954 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:28.954 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:28.954 14:36:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:28.954 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:28.954 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:28.954 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:28.954 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:28.954 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:28.954 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:28.954 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:28.954 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:28.954 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:28.954 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:28.954 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:28.954 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:28.954 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:28.954 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:28.954 14:36:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:28.954 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:28.954 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:28.954 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:28.954 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:28.954 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:28.954 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:28.955 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:28.955 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:28.955 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:28.955 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:28.955 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:28.955 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:28.955 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:28.955 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:28.955 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:28.955 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:28.955 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:28.955 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:28.955 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:28.955 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:28.955 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:28.955 Found net devices under 0000:31:00.0: cvl_0_0 00:22:28.955 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:28.955 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:28.955 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:28.955 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:28.955 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:28.955 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:28.955 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:28.955 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:28.955 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:28.955 Found net devices under 0000:31:00.1: cvl_0_1 00:22:28.955 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:28.955 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:28.955 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # is_hw=yes 00:22:28.955 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:28.955 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:28.955 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:28.955 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:28.955 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:28.955 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:28.955 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:28.955 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:28.955 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:28.955 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:28.955 14:36:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:28.955 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:28.955 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:28.955 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:28.955 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:28.955 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:28.955 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:28.955 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:29.216 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:29.216 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:29.216 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:29.216 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:29.216 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:29.216 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 
-i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:29.216 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:29.216 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:29.216 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:29.216 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.630 ms 00:22:29.216 00:22:29.216 --- 10.0.0.2 ping statistics --- 00:22:29.216 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:29.216 rtt min/avg/max/mdev = 0.630/0.630/0.630/0.000 ms 00:22:29.216 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:29.216 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:29.216 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.299 ms 00:22:29.216 00:22:29.216 --- 10.0.0.1 ping statistics --- 00:22:29.216 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:29.216 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:22:29.216 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:29.216 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # return 0 00:22:29.216 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:29.216 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:29.216 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:29.216 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:29.216 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:29.216 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:29.217 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:29.217 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:29.217 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:29.217 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:29.217 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:29.217 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # nvmfpid=3459422 00:22:29.217 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # waitforlisten 3459422 00:22:29.217 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:29.217 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 3459422 ']' 00:22:29.217 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:29.217 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:29.217 14:36:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:29.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:29.217 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:29.217 14:36:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:29.478 [2024-10-14 14:36:09.956366] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:22:29.478 [2024-10-14 14:36:09.956433] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:29.478 [2024-10-14 14:36:10.044950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:29.478 [2024-10-14 14:36:10.084331] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:29.478 [2024-10-14 14:36:10.084363] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:29.478 [2024-10-14 14:36:10.084369] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:29.478 [2024-10-14 14:36:10.084374] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:29.478 [2024-10-14 14:36:10.084378] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
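The namespace plumbing traced a few records above (nvmf/common.sh, `nvmf_tcp_init`) can be sketched as a standalone script. The interface names `cvl_0_0`/`cvl_0_1`, the 10.0.0.0/24 addresses, and port 4420 are taken directly from the log; `setup_nvmf_tcp` is a hypothetical wrapper name, and the `run` helper only prints each command so the sequence can be previewed without root (swap it for one that executes to apply for real).

```shell
# Dry-run sketch of the TCP test topology: the target interface is moved
# into its own network namespace so initiator (10.0.0.1) and target
# (10.0.0.2) traffic crosses a real kernel network path on one host.
run() { echo "+ $*"; }  # preview only; use run() { "$@"; } as root to apply

setup_nvmf_tcp() {
    local tgt_if=$1 ini_if=$2 ns="${1}_ns_spdk"
    run ip -4 addr flush "$tgt_if"
    run ip -4 addr flush "$ini_if"
    run ip netns add "$ns"                       # isolated target side
    run ip link set "$tgt_if" netns "$ns"
    run ip addr add 10.0.0.1/24 dev "$ini_if"    # initiator address
    run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
    run ip link set "$ini_if" up
    run ip netns exec "$ns" ip link set "$tgt_if" up
    run ip netns exec "$ns" ip link set lo up
    # open the NVMe/TCP listener port toward the initiator side
    run iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT
}

setup_nvmf_tcp cvl_0_0 cvl_0_1
```

The subsequent `ping -c 1 10.0.0.2` / `ip netns exec ... ping -c 1 10.0.0.1` pair in the log is the sanity check that both directions of this topology forward traffic before the target is started.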
00:22:29.478 [2024-10-14 14:36:10.085718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:29.478 [2024-10-14 14:36:10.085878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:29.478 [2024-10-14 14:36:10.086033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:29.478 [2024-10-14 14:36:10.086036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:30.049 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:30.049 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:22:30.049 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:30.049 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:30.049 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:30.310 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:30.310 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:30.310 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.310 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:30.310 [2024-10-14 14:36:10.785790] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:30.310 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.310 14:36:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:30.310 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:30.310 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:30.310 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:30.310 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:30.310 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:30.310 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:30.310 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:30.310 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:30.310 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:30.310 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:30.310 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:30.310 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:30.310 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:30.310 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:22:30.310 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:30.310 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:30.310 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:30.310 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:30.310 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:30.310 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:30.310 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:30.310 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:30.310 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:30.310 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:30.310 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:30.311 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.311 14:36:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:30.311 Malloc1 00:22:30.311 [2024-10-14 14:36:10.901772] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:30.311 Malloc2 00:22:30.311 Malloc3 00:22:30.311 Malloc4 00:22:30.311 Malloc5 00:22:30.571 Malloc6 00:22:30.571 Malloc7 00:22:30.571 Malloc8 00:22:30.571 Malloc9 
00:22:30.571 Malloc10 00:22:30.571 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.571 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:30.571 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:30.571 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:30.832 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=3459917 00:22:30.832 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 3459917 /var/tmp/bdevperf.sock 00:22:30.832 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 3459917 ']' 00:22:30.832 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:30.832 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:30.832 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:30.832 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:30.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:30.832 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:30.832 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:30.832 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:30.832 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # config=() 00:22:30.832 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # local subsystem config 00:22:30.832 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:30.832 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:30.832 { 00:22:30.832 "params": { 00:22:30.832 "name": "Nvme$subsystem", 00:22:30.832 "trtype": "$TEST_TRANSPORT", 00:22:30.832 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:30.832 "adrfam": "ipv4", 00:22:30.832 "trsvcid": "$NVMF_PORT", 00:22:30.832 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:30.832 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:30.832 "hdgst": ${hdgst:-false}, 00:22:30.832 "ddgst": ${ddgst:-false} 00:22:30.832 }, 00:22:30.832 "method": "bdev_nvme_attach_controller" 00:22:30.832 } 00:22:30.832 EOF 00:22:30.832 )") 00:22:30.832 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:30.832 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:30.832 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:30.832 { 00:22:30.832 "params": { 00:22:30.832 "name": "Nvme$subsystem", 00:22:30.832 "trtype": "$TEST_TRANSPORT", 00:22:30.832 
"traddr": "$NVMF_FIRST_TARGET_IP", 00:22:30.832 "adrfam": "ipv4", 00:22:30.832 "trsvcid": "$NVMF_PORT", 00:22:30.832 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:30.832 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:30.832 "hdgst": ${hdgst:-false}, 00:22:30.832 "ddgst": ${ddgst:-false} 00:22:30.832 }, 00:22:30.832 "method": "bdev_nvme_attach_controller" 00:22:30.832 } 00:22:30.832 EOF 00:22:30.832 )") 00:22:30.832 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:30.832 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:30.833 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:30.833 { 00:22:30.833 "params": { 00:22:30.833 "name": "Nvme$subsystem", 00:22:30.833 "trtype": "$TEST_TRANSPORT", 00:22:30.833 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:30.833 "adrfam": "ipv4", 00:22:30.833 "trsvcid": "$NVMF_PORT", 00:22:30.833 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:30.833 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:30.833 "hdgst": ${hdgst:-false}, 00:22:30.833 "ddgst": ${ddgst:-false} 00:22:30.833 }, 00:22:30.833 "method": "bdev_nvme_attach_controller" 00:22:30.833 } 00:22:30.833 EOF 00:22:30.833 )") 00:22:30.833 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:30.833 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:30.833 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:30.833 { 00:22:30.833 "params": { 00:22:30.833 "name": "Nvme$subsystem", 00:22:30.833 "trtype": "$TEST_TRANSPORT", 00:22:30.833 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:30.833 "adrfam": "ipv4", 00:22:30.833 "trsvcid": "$NVMF_PORT", 00:22:30.833 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:22:30.833 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:30.833 "hdgst": ${hdgst:-false}, 00:22:30.833 "ddgst": ${ddgst:-false} 00:22:30.833 }, 00:22:30.833 "method": "bdev_nvme_attach_controller" 00:22:30.833 } 00:22:30.833 EOF 00:22:30.833 )") 00:22:30.833 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:30.833 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:30.833 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:30.833 { 00:22:30.833 "params": { 00:22:30.833 "name": "Nvme$subsystem", 00:22:30.833 "trtype": "$TEST_TRANSPORT", 00:22:30.833 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:30.833 "adrfam": "ipv4", 00:22:30.833 "trsvcid": "$NVMF_PORT", 00:22:30.833 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:30.833 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:30.833 "hdgst": ${hdgst:-false}, 00:22:30.833 "ddgst": ${ddgst:-false} 00:22:30.833 }, 00:22:30.833 "method": "bdev_nvme_attach_controller" 00:22:30.833 } 00:22:30.833 EOF 00:22:30.833 )") 00:22:30.833 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:30.833 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:30.833 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:30.833 { 00:22:30.833 "params": { 00:22:30.833 "name": "Nvme$subsystem", 00:22:30.833 "trtype": "$TEST_TRANSPORT", 00:22:30.833 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:30.833 "adrfam": "ipv4", 00:22:30.833 "trsvcid": "$NVMF_PORT", 00:22:30.833 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:30.833 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:30.833 "hdgst": 
${hdgst:-false}, 00:22:30.833 "ddgst": ${ddgst:-false} 00:22:30.833 }, 00:22:30.833 "method": "bdev_nvme_attach_controller" 00:22:30.833 } 00:22:30.833 EOF 00:22:30.833 )") 00:22:30.833 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:30.833 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:30.833 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:30.833 { 00:22:30.833 "params": { 00:22:30.833 "name": "Nvme$subsystem", 00:22:30.833 "trtype": "$TEST_TRANSPORT", 00:22:30.833 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:30.833 "adrfam": "ipv4", 00:22:30.833 "trsvcid": "$NVMF_PORT", 00:22:30.833 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:30.833 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:30.833 "hdgst": ${hdgst:-false}, 00:22:30.833 "ddgst": ${ddgst:-false} 00:22:30.833 }, 00:22:30.833 "method": "bdev_nvme_attach_controller" 00:22:30.833 } 00:22:30.833 EOF 00:22:30.833 )") 00:22:30.833 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:30.833 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:30.833 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:30.833 { 00:22:30.833 "params": { 00:22:30.833 "name": "Nvme$subsystem", 00:22:30.833 "trtype": "$TEST_TRANSPORT", 00:22:30.833 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:30.833 "adrfam": "ipv4", 00:22:30.833 "trsvcid": "$NVMF_PORT", 00:22:30.833 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:30.833 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:30.833 "hdgst": ${hdgst:-false}, 00:22:30.833 "ddgst": ${ddgst:-false} 00:22:30.833 }, 00:22:30.833 "method": "bdev_nvme_attach_controller" 
00:22:30.833 } 00:22:30.833 EOF 00:22:30.833 )") 00:22:30.833 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:30.833 [2024-10-14 14:36:11.363281] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:22:30.833 [2024-10-14 14:36:11.363337] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3459917 ] 00:22:30.833 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:30.833 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:30.833 { 00:22:30.833 "params": { 00:22:30.833 "name": "Nvme$subsystem", 00:22:30.833 "trtype": "$TEST_TRANSPORT", 00:22:30.833 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:30.833 "adrfam": "ipv4", 00:22:30.833 "trsvcid": "$NVMF_PORT", 00:22:30.833 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:30.833 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:30.833 "hdgst": ${hdgst:-false}, 00:22:30.833 "ddgst": ${ddgst:-false} 00:22:30.833 }, 00:22:30.833 "method": "bdev_nvme_attach_controller" 00:22:30.833 } 00:22:30.833 EOF 00:22:30.833 )") 00:22:30.833 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:30.833 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:30.833 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:30.833 { 00:22:30.833 "params": { 00:22:30.833 "name": "Nvme$subsystem", 00:22:30.833 "trtype": "$TEST_TRANSPORT", 00:22:30.833 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:30.833 "adrfam": "ipv4", 00:22:30.833 
"trsvcid": "$NVMF_PORT", 00:22:30.833 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:30.833 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:30.833 "hdgst": ${hdgst:-false}, 00:22:30.833 "ddgst": ${ddgst:-false} 00:22:30.833 }, 00:22:30.833 "method": "bdev_nvme_attach_controller" 00:22:30.833 } 00:22:30.833 EOF 00:22:30.833 )") 00:22:30.833 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:30.833 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # jq . 00:22:30.833 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@583 -- # IFS=, 00:22:30.833 14:36:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:22:30.833 "params": { 00:22:30.833 "name": "Nvme1", 00:22:30.833 "trtype": "tcp", 00:22:30.833 "traddr": "10.0.0.2", 00:22:30.833 "adrfam": "ipv4", 00:22:30.833 "trsvcid": "4420", 00:22:30.833 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:30.833 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:30.833 "hdgst": false, 00:22:30.833 "ddgst": false 00:22:30.833 }, 00:22:30.833 "method": "bdev_nvme_attach_controller" 00:22:30.833 },{ 00:22:30.833 "params": { 00:22:30.833 "name": "Nvme2", 00:22:30.833 "trtype": "tcp", 00:22:30.833 "traddr": "10.0.0.2", 00:22:30.833 "adrfam": "ipv4", 00:22:30.833 "trsvcid": "4420", 00:22:30.833 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:30.833 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:30.833 "hdgst": false, 00:22:30.833 "ddgst": false 00:22:30.833 }, 00:22:30.833 "method": "bdev_nvme_attach_controller" 00:22:30.833 },{ 00:22:30.833 "params": { 00:22:30.833 "name": "Nvme3", 00:22:30.833 "trtype": "tcp", 00:22:30.833 "traddr": "10.0.0.2", 00:22:30.833 "adrfam": "ipv4", 00:22:30.833 "trsvcid": "4420", 00:22:30.833 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:30.833 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:30.833 "hdgst": false, 
00:22:30.833 "ddgst": false 00:22:30.833 }, 00:22:30.833 "method": "bdev_nvme_attach_controller" 00:22:30.833 },{ 00:22:30.833 "params": { 00:22:30.833 "name": "Nvme4", 00:22:30.833 "trtype": "tcp", 00:22:30.833 "traddr": "10.0.0.2", 00:22:30.833 "adrfam": "ipv4", 00:22:30.833 "trsvcid": "4420", 00:22:30.833 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:30.833 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:30.833 "hdgst": false, 00:22:30.833 "ddgst": false 00:22:30.833 }, 00:22:30.833 "method": "bdev_nvme_attach_controller" 00:22:30.833 },{ 00:22:30.833 "params": { 00:22:30.833 "name": "Nvme5", 00:22:30.833 "trtype": "tcp", 00:22:30.833 "traddr": "10.0.0.2", 00:22:30.833 "adrfam": "ipv4", 00:22:30.833 "trsvcid": "4420", 00:22:30.833 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:30.833 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:30.833 "hdgst": false, 00:22:30.833 "ddgst": false 00:22:30.833 }, 00:22:30.833 "method": "bdev_nvme_attach_controller" 00:22:30.833 },{ 00:22:30.834 "params": { 00:22:30.834 "name": "Nvme6", 00:22:30.834 "trtype": "tcp", 00:22:30.834 "traddr": "10.0.0.2", 00:22:30.834 "adrfam": "ipv4", 00:22:30.834 "trsvcid": "4420", 00:22:30.834 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:30.834 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:30.834 "hdgst": false, 00:22:30.834 "ddgst": false 00:22:30.834 }, 00:22:30.834 "method": "bdev_nvme_attach_controller" 00:22:30.834 },{ 00:22:30.834 "params": { 00:22:30.834 "name": "Nvme7", 00:22:30.834 "trtype": "tcp", 00:22:30.834 "traddr": "10.0.0.2", 00:22:30.834 "adrfam": "ipv4", 00:22:30.834 "trsvcid": "4420", 00:22:30.834 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:30.834 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:30.834 "hdgst": false, 00:22:30.834 "ddgst": false 00:22:30.834 }, 00:22:30.834 "method": "bdev_nvme_attach_controller" 00:22:30.834 },{ 00:22:30.834 "params": { 00:22:30.834 "name": "Nvme8", 00:22:30.834 "trtype": "tcp", 00:22:30.834 "traddr": "10.0.0.2", 00:22:30.834 "adrfam": "ipv4", 
00:22:30.834 "trsvcid": "4420", 00:22:30.834 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:30.834 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:30.834 "hdgst": false, 00:22:30.834 "ddgst": false 00:22:30.834 }, 00:22:30.834 "method": "bdev_nvme_attach_controller" 00:22:30.834 },{ 00:22:30.834 "params": { 00:22:30.834 "name": "Nvme9", 00:22:30.834 "trtype": "tcp", 00:22:30.834 "traddr": "10.0.0.2", 00:22:30.834 "adrfam": "ipv4", 00:22:30.834 "trsvcid": "4420", 00:22:30.834 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:30.834 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:30.834 "hdgst": false, 00:22:30.834 "ddgst": false 00:22:30.834 }, 00:22:30.834 "method": "bdev_nvme_attach_controller" 00:22:30.834 },{ 00:22:30.834 "params": { 00:22:30.834 "name": "Nvme10", 00:22:30.834 "trtype": "tcp", 00:22:30.834 "traddr": "10.0.0.2", 00:22:30.834 "adrfam": "ipv4", 00:22:30.834 "trsvcid": "4420", 00:22:30.834 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:30.834 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:30.834 "hdgst": false, 00:22:30.834 "ddgst": false 00:22:30.834 }, 00:22:30.834 "method": "bdev_nvme_attach_controller" 00:22:30.834 }' 00:22:30.834 [2024-10-14 14:36:11.426469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:30.834 [2024-10-14 14:36:11.462633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:32.745 Running I/O for 10 seconds... 
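The bdevperf JSON just printed is produced by the per-subsystem template loop visible in the trace (nvmf/common.sh, `gen_nvmf_target_json`): one heredoc fragment per subsystem id, with variables expanded at generation time and the fragments joined with `,`. A simplified stand-in (`gen_target_json` is a sketch name, and the real script additionally normalizes each fragment through `jq .`), using the values the log resolves to:

```shell
# One JSON fragment per subsystem id; Nvme$subsystem, cnode$subsystem and
# host$subsystem index the ten controllers seen in the log output.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

gen_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,                  # join fragments with "," as in the log
    printf '%s\n' "${config[*]}"
}

gen_target_json 1 2 3
```

The comma-joined result is what `target/shutdown.sh@125` feeds to bdevperf via `--json /dev/fd/63`.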
00:22:32.745 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:32.745 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:22:32.745 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:32.745 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.745 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:32.745 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.745 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:32.745 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:32.745 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:32.745 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:32.745 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:22:32.745 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:22:32.745 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:32.745 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:32.745 14:36:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:32.745 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:32.745 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.745 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:32.745 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.745 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:22:32.745 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:22:32.745 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:33.006 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:33.006 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:33.006 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:33.006 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:33.006 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.006 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:33.006 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:22:33.006 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:22:33.006 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:22:33.006 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:33.275 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:33.275 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:33.275 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:33.275 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:33.275 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.275 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:33.275 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.275 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=136 00:22:33.275 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 136 -ge 100 ']' 00:22:33.275 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:22:33.275 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:22:33.275 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:22:33.275 14:36:13 
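The trace above is the `waitforio` poll loop from target/shutdown.sh: up to 10 iterations, each reading `num_read_ops` for Nvme1n1 via `bdev_get_iostat` piped through `jq -r '.bdevs[0].num_read_ops'`, sleeping 0.25 s between polls, and succeeding once the count reaches 100 (here 3, then 67, then 136). A runnable sketch of the same control flow — `read_ops_stub` and `waitforio_sketch` are stand-ins invented for illustration, since no bdevperf RPC socket is assumed here:

```shell
# Sketch of the waitforio loop seen in the trace: poll an I/O counter
# until it reaches 100 or 10 attempts are exhausted.
poll_count=0

# Stub standing in for the real RPC call:
#   rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 \
#     | jq -r '.bdevs[0].num_read_ops'
read_ops_stub() {
  # Pretend reads accumulate across polls.
  echo $((3 + 64 * poll_count))
}

waitforio_sketch() {
  local ret=1 i=10 count
  while (( i != 0 )); do
    count=$(read_ops_stub)
    if [ "$count" -ge 100 ]; then
      ret=0   # enough I/O observed; the real script then breaks out
      break
    fi
    poll_count=$((poll_count + 1))
    # real loop: sleep 0.25 between polls
    (( i-- ))
  done
  return $ret
}
```

With a counter that crosses 100 on the third poll, the sketch returns success after two failed polls, mirroring the 3 → 67 → ≥100 progression logged above.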
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 3459422 00:22:33.275 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 3459422 ']' 00:22:33.275 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 3459422 00:22:33.275 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname 00:22:33.275 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:33.275 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3459422 00:22:33.275 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:33.275 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:33.275 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3459422' 00:22:33.275 killing process with pid 3459422 00:22:33.275 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 3459422 00:22:33.275 14:36:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 3459422 00:22:33.275 [2024-10-14 14:36:13.949074] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d40950 is same with the state(6) to be set 00:22:33.275 [2024-10-14 14:36:13.949122] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d40950 is same with the state(6) to be set 00:22:33.275 [2024-10-14 14:36:13.949128] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1d40950 is same with the state(6) to be set 00:22:33.276 [2024-10-14 14:36:13.950737] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d43390 is same with the state(6) to be set 00:22:33.277 [2024-10-14 14:36:13.952122] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d40e20 is same with the state(6) to be set 00:22:33.277 [2024-10-14 14:36:13.952176] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d40e20 is same with the state(6) to be set 00:22:33.277 [2024-10-14 14:36:13.952406] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d40e20 is same with the state(6) to be set 00:22:33.277 [2024-10-14 14:36:13.952411] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d40e20 is same with the state(6) to be set 00:22:33.277 [2024-10-14 14:36:13.952415] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d40e20 is same with the state(6) to be set 00:22:33.277 [2024-10-14 14:36:13.952420] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d40e20 is same with the state(6) to be set 00:22:33.277 [2024-10-14 14:36:13.952424] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d40e20 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.952428] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d40e20 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.954051] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d417e0 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.954076] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d417e0 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.954083] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d417e0 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.954088] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d417e0 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.954093] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d417e0 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.954098] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d417e0 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.954110] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d417e0 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.954115] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d417e0 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.954120] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d417e0 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.954125] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d417e0 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.954129] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d417e0 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.954134] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d417e0 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.954139] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d417e0 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.954143] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d417e0 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.954148] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d417e0 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.954154] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d417e0 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.954162] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d417e0 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.954167] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d417e0 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.954171] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d417e0 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.954177] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d417e0 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.954181] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d417e0 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.954186] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d417e0 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.954191] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d417e0 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.954195] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d417e0 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.954200] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d417e0 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.954207] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d417e0 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.954213] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d417e0 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.954218] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d417e0 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.954223] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d417e0 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.954227] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d417e0 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.954232] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d417e0 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.954237] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d417e0 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.954242] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d417e0 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.954248] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d417e0 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.954253] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d417e0 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.954261] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d417e0 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.954266] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d417e0 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.954271] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d417e0 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.954275] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d417e0 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.954280] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d417e0 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.954285] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d417e0 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.954290] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d417e0 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.954294] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d417e0 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.954299] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d417e0 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.954303] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d417e0 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.954312] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d417e0 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.954316] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d417e0 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.954321] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d417e0 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.954325] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d417e0 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.954330] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d417e0 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.954335] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d417e0 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.954340] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d417e0 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.954344] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d417e0 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.954349] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d417e0 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.954354] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d417e0 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.954361] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d417e0 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.954367] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d417e0 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.954371] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d417e0 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.954376] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d417e0 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.954381] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d417e0 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.954387] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d417e0 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.954391] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d417e0 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.954397] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d417e0 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.954943] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d41cb0 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.954960] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d41cb0 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.955392] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d42030 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.955402] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d42030 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.955407] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d42030 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.956259] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d42500 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.956281] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d42500 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.956288] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d42500 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.956293] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d42500 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.956298] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d42500 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.956302] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d42500 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.956307] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d42500 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.956312] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d42500 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.956317] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d42500 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.956322] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d42500 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.956326] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d42500 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.956331] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d42500 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.956336] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d42500 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.956341] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d42500 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.956345] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d42500 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.956350] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d42500 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.956355] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d42500 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.956360] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d42500 is same with the state(6) to be set 00:22:33.278 [2024-10-14 14:36:13.956365] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d42500 is same with the state(6) to be set 00:22:33.279 [2024-10-14 14:36:13.956373] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d42500 is same with the state(6) to be set 00:22:33.279 [2024-10-14 14:36:13.956378] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d42500 is same with the state(6) to be set 00:22:33.279 [2024-10-14 14:36:13.956382] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d42500 is same with the state(6) to be set 00:22:33.279 [2024-10-14 14:36:13.956387] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d42500 is same with the state(6) to be set 00:22:33.279 [2024-10-14 14:36:13.956392] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d42500 is same with the state(6) to be set 00:22:33.279 [2024-10-14 14:36:13.956397] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d42500 is same with the state(6) to be set 00:22:33.279 [2024-10-14 14:36:13.956402] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d42500 is same with the state(6) to be set 00:22:33.279 [2024-10-14 14:36:13.956406] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d42500 is same with the state(6) to be set 00:22:33.279 [2024-10-14 14:36:13.956411] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d42500 is same with the state(6) to be set 00:22:33.279 [2024-10-14 14:36:13.956416] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d42500 is same with the state(6) to be set 00:22:33.279 [2024-10-14 14:36:13.956420] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d42500 is same with the state(6) to be set 00:22:33.279 [2024-10-14 14:36:13.956425] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d42500 is same with the state(6) to be set 00:22:33.279 [2024-10-14 14:36:13.956430] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d42500 is same with the state(6) to be set 00:22:33.279 [2024-10-14 14:36:13.956434] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d42500 is same with the state(6) to be set 00:22:33.279 [2024-10-14 14:36:13.956439] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d42500 is same with the state(6) to be set 00:22:33.279 [2024-10-14 14:36:13.956444] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d42500 is same with the state(6) to be set 00:22:33.279 [2024-10-14 14:36:13.956449] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d42500 is same with the state(6) to be set 00:22:33.279 [2024-10-14 14:36:13.956453] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d42500 is same with the state(6) to be set 00:22:33.279 [2024-10-14 14:36:13.956458] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d42500 is same with the state(6) to be set 00:22:33.279 [2024-10-14 14:36:13.956462] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d42500 is same with the state(6) to be set 00:22:33.279 [2024-10-14 14:36:13.956467] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d42500 is same with the state(6) to be set 00:22:33.279 [2024-10-14 14:36:13.956472] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d42500 is same with the state(6) to be set 00:22:33.279 [2024-10-14 14:36:13.956477] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d42500 is same with the state(6) to be set 00:22:33.279 [2024-10-14 14:36:13.956481] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d42500 is same with the state(6) to be set 00:22:33.279 [2024-10-14 14:36:13.956486] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d42500 is same with the state(6) to be set 00:22:33.279 [2024-10-14 14:36:13.956491] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d42500 is same with the state(6) to be set 00:22:33.279 [2024-10-14 14:36:13.956495] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d42500 is same with the state(6) to be set 00:22:33.279 [2024-10-14 14:36:13.956500] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d42500 is same with the state(6) to be set 00:22:33.279 [2024-10-14 14:36:13.956506] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d42500 is same with the state(6) to be set 00:22:33.279 [2024-10-14 14:36:13.956511] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d42500 is same with the state(6) to be set 00:22:33.279 [2024-10-14 14:36:13.956515] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d42500 is same with the state(6) to be set 00:22:33.279 [2024-10-14 14:36:13.956520] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d42500 is same with the state(6) to be set 00:22:33.279 [2024-10-14 14:36:13.956524] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d42500 is same with the state(6) to be set 00:22:33.279 [2024-10-14 14:36:13.956529] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d42500 is same with the state(6) to be set 00:22:33.279 [2024-10-14 14:36:13.956534] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d42500 is same with the state(6) to be set 00:22:33.279 [2024-10-14 14:36:13.956539] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d42500 is same with the state(6) to be set 00:22:33.279 [2024-10-14 14:36:13.956544] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d42500 is same with the state(6) to be set 00:22:33.279 [2024-10-14 14:36:13.956549] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d42500 is same with the state(6) to be set 00:22:33.279 [2024-10-14 14:36:13.956554] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d42500 is same with the state(6) to be set 00:22:33.279 [2024-10-14 14:36:13.956558] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d42500 is same with the state(6) to be set 00:22:33.279 [2024-10-14 14:36:13.956563] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d42500 is same with the state(6) to be set 00:22:33.279 [2024-10-14 14:36:13.956567] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d42500 is same with the state(6) to be set 00:22:33.279 [2024-10-14 14:36:13.956572] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d42500 is same with the state(6) to be set 00:22:33.279 [2024-10-14 14:36:13.956577] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d42500 is same with the state(6) to be set 00:22:33.279 [2024-10-14 14:36:13.957237] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d429f0 is same with the state(6) to be set 00:22:33.279 [2024-10-14 14:36:13.957251] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d429f0 is same with the state(6) to be set 00:22:33.279 [2024-10-14 14:36:13.957256] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d429f0 is same with the state(6) to be set 00:22:33.279 [2024-10-14 14:36:13.957261] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d429f0 is same with the state(6) to be set 00:22:33.279 [2024-10-14 14:36:13.957266] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d429f0 is same with the state(6) to be set 00:22:33.279 [2024-10-14 14:36:13.957271] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d429f0 is same with the state(6) to be set 00:22:33.279 [2024-10-14 14:36:13.957275] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d429f0 is same with the state(6) to be set 00:22:33.279 [2024-10-14 14:36:13.957280] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d429f0 is same with the state(6) to be set 00:22:33.279 [2024-10-14 14:36:13.957285] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d429f0 is same with the state(6) to be set 00:22:33.279 [2024-10-14 14:36:13.957290] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d429f0 is same with the state(6) to be set 00:22:33.279 [2024-10-14 14:36:13.957295] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d429f0 is same with the state(6) to be set 00:22:33.279 [2024-10-14 14:36:13.957302] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d429f0 is same with the state(6) to be set 00:22:33.279 [2024-10-14 14:36:13.957307] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d429f0 is same with the state(6) to be set 00:22:33.279 [2024-10-14 14:36:13.957312] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d429f0 is same with the state(6) to be set 00:22:33.279 [2024-10-14 14:36:13.957316] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d429f0 is same with the state(6) to be set 00:22:33.279 [2024-10-14 14:36:13.957321] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d429f0 is same with the state(6) to be set 00:22:33.279 [2024-10-14 14:36:13.957326] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d429f0 is same with the state(6) to be set 00:22:33.279 [2024-10-14 14:36:13.957330] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d429f0 is same with the state(6) to be set 00:22:33.279 [2024-10-14 14:36:13.957335] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d429f0 is same with the state(6) to be set 00:22:33.279 [2024-10-14 14:36:13.957340] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d429f0 is same with the state(6) to be set 00:22:33.280 [2024-10-14 14:36:13.957994] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d42ec0 is same with the state(6) to be set 00:22:33.281 [2024-10-14 14:36:13.971377] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:33.281 [2024-10-14 14:36:13.971415] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.281 [2024-10-14 14:36:13.971426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:33.281 [2024-10-14 14:36:13.971434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.281 [2024-10-14 14:36:13.971442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:33.281 [2024-10-14 14:36:13.971450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.281 [2024-10-14 14:36:13.971459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:33.281 [2024-10-14 14:36:13.971466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.281 [2024-10-14 14:36:13.971474] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb8fb0 is same with the state(6) to be set 00:22:33.281 [2024-10-14 14:36:13.971567] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8f030 is same with the state(6) to be set 00:22:33.281 [2024-10-14 14:36:13.971656] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfe76e0 is same with the state(6) to be set 00:22:33.281 [2024-10-14 14:36:13.971750] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa5610 is same with the state(6) to be set 00:22:33.281 [2024-10-14 14:36:13.971835] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfeca50 is same with the state(6) to be set 00:22:33.281 [2024-10-14 14:36:13.971927] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb30e0 is same with the state(6) to be set 00:22:33.281 [2024-10-14 14:36:13.972017] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb85ae0 is same with the state(6) to be set 00:22:33.281 [2024-10-14 14:36:13.972114] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb8c90 is same with the state(6) to be set 00:22:33.282 [2024-10-14 14:36:13.972204] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfe78c0 is same with the state(6) to be set 00:22:33.282 [2024-10-14 14:36:13.972298] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb842c0 is same with the state(6) to be set 00:22:33.282 [2024-10-14 14:36:13.972427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.282 [2024-10-14 14:36:13.972440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.282 [2024-10-14 14:36:13.972454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.282 [2024-10-14 14:36:13.972462] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.282 [2024-10-14 14:36:13.972472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.282 [2024-10-14 14:36:13.972479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical WRITE / ABORTED - SQ DELETION pairs repeat for cid:3 through cid:63 (lba 24960 through 32640, len:128), timestamps 14:36:13.972492 through 14:36:13.973525 ...]
00:22:33.283 [2024-10-14 14:36:13.973533] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1033c90 is same with the state(6) to be set
00:22:33.283 [2024-10-14 14:36:13.973579] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1033c90 was disconnected and freed. reset controller.
00:22:33.283 [2024-10-14 14:36:13.973662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.283 [2024-10-14 14:36:13.973674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical WRITE / ABORTED - SQ DELETION pairs repeat for cid:61 through cid:63 (lba 32384 through 32640), then READ / ABORTED - SQ DELETION pairs for cid:0 through cid:50 (lba 24576 through 30976, len:128), timestamps 14:36:13.973686 through 14:36:13.974602 ...]
00:22:33.285 [2024-10-14 14:36:13.974612] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.285 [2024-10-14 14:36:13.974620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.285 [2024-10-14 14:36:13.974629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.285 [2024-10-14 14:36:13.974636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.285 [2024-10-14 14:36:13.974646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.285 [2024-10-14 14:36:13.974653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.285 [2024-10-14 14:36:13.974662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.285 [2024-10-14 14:36:13.974670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.285 [2024-10-14 14:36:13.974679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.285 [2024-10-14 14:36:13.974686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.285 [2024-10-14 14:36:13.974696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.285 [2024-10-14 14:36:13.974704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.285 [2024-10-14 14:36:13.974713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.285 [2024-10-14 14:36:13.974720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.285 [2024-10-14 14:36:13.974729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.285 [2024-10-14 14:36:13.974736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.285 [2024-10-14 14:36:13.974747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.285 [2024-10-14 14:36:13.974755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.285 [2024-10-14 14:36:13.974809] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x10366a0 was disconnected and freed. reset controller. 
00:22:33.285 [2024-10-14 14:36:13.974870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.285 [2024-10-14 14:36:13.974878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.285 [2024-10-14 14:36:13.974889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.285 [2024-10-14 14:36:13.974897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.285 [2024-10-14 14:36:13.974906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.285 [2024-10-14 14:36:13.974914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.285 [2024-10-14 14:36:13.974923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.285 [2024-10-14 14:36:13.974930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.285 [2024-10-14 14:36:13.974940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.285 [2024-10-14 14:36:13.974947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.285 [2024-10-14 14:36:13.974957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.285 [2024-10-14 14:36:13.974964] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.285 [2024-10-14 14:36:13.974974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.285 [2024-10-14 14:36:13.974981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.285 [2024-10-14 14:36:13.974990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.285 [2024-10-14 14:36:13.974998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.285 [2024-10-14 14:36:13.975013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.285 [2024-10-14 14:36:13.975020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.285 [2024-10-14 14:36:13.981091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.285 [2024-10-14 14:36:13.981127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.285 [2024-10-14 14:36:13.981140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.285 [2024-10-14 14:36:13.981148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.285 [2024-10-14 14:36:13.981164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.285 [2024-10-14 14:36:13.981173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.285 [2024-10-14 14:36:13.981183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.285 [2024-10-14 14:36:13.981190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.285 [2024-10-14 14:36:13.981200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.285 [2024-10-14 14:36:13.981208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.285 [2024-10-14 14:36:13.981217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.285 [2024-10-14 14:36:13.981224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.285 [2024-10-14 14:36:13.981234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.285 [2024-10-14 14:36:13.981241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.285 [2024-10-14 14:36:13.981251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.285 [2024-10-14 14:36:13.981258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.285 [2024-10-14 14:36:13.981268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.285 [2024-10-14 14:36:13.981276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.285 [2024-10-14 14:36:13.981285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.285 [2024-10-14 14:36:13.981292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.285 [2024-10-14 14:36:13.981303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.285 [2024-10-14 14:36:13.981310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.285 [2024-10-14 14:36:13.981320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.285 [2024-10-14 14:36:13.981327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.285 [2024-10-14 14:36:13.981336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.285 [2024-10-14 14:36:13.981344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.285 [2024-10-14 14:36:13.981353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.285 
[2024-10-14 14:36:13.981360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.285 [2024-10-14 14:36:13.981370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.285 [2024-10-14 14:36:13.981379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.285 [2024-10-14 14:36:13.981388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.285 [2024-10-14 14:36:13.981396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.285 [2024-10-14 14:36:13.981405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.285 [2024-10-14 14:36:13.981413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.285 [2024-10-14 14:36:13.981422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.286 [2024-10-14 14:36:13.981429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.286 [2024-10-14 14:36:13.981439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.286 [2024-10-14 14:36:13.981446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.286 [2024-10-14 14:36:13.981456] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.286 [2024-10-14 14:36:13.981463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.286 [2024-10-14 14:36:13.981472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.286 [2024-10-14 14:36:13.981479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.286 [2024-10-14 14:36:13.981489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.286 [2024-10-14 14:36:13.981496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.286 [2024-10-14 14:36:13.981505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.286 [2024-10-14 14:36:13.981513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.286 [2024-10-14 14:36:13.981522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.286 [2024-10-14 14:36:13.981530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.286 [2024-10-14 14:36:13.981539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.286 [2024-10-14 14:36:13.981546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.286 [2024-10-14 14:36:13.981556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.286 [2024-10-14 14:36:13.981563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.286 [2024-10-14 14:36:13.981573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.286 [2024-10-14 14:36:13.981580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.286 [2024-10-14 14:36:13.981592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.286 [2024-10-14 14:36:13.981599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.286 [2024-10-14 14:36:13.981608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.286 [2024-10-14 14:36:13.981616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.286 [2024-10-14 14:36:13.981625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.286 [2024-10-14 14:36:13.981632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.286 [2024-10-14 14:36:13.981642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.286 [2024-10-14 14:36:13.981649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.286 [2024-10-14 14:36:13.981659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.286 [2024-10-14 14:36:13.981666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.286 [2024-10-14 14:36:13.981675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.286 [2024-10-14 14:36:13.981683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.286 [2024-10-14 14:36:13.981692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.286 [2024-10-14 14:36:13.981700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.286 [2024-10-14 14:36:13.981709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.286 [2024-10-14 14:36:13.981716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.286 [2024-10-14 14:36:13.981725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.286 [2024-10-14 14:36:13.981733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.286 
[2024-10-14 14:36:13.981742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.286 [2024-10-14 14:36:13.981749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.286 [2024-10-14 14:36:13.981760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.286 [2024-10-14 14:36:13.981767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.286 [2024-10-14 14:36:13.981776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.286 [2024-10-14 14:36:13.981783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.286 [2024-10-14 14:36:13.981793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.286 [2024-10-14 14:36:13.981801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.286 [2024-10-14 14:36:13.981811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.286 [2024-10-14 14:36:13.981819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.286 [2024-10-14 14:36:13.981828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.286 [2024-10-14 14:36:13.981835] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.286 [2024-10-14 14:36:13.981844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.286 [2024-10-14 14:36:13.981852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.286 [2024-10-14 14:36:13.981861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.286 [2024-10-14 14:36:13.981868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.286 [2024-10-14 14:36:13.981878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.286 [2024-10-14 14:36:13.981885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.286 [2024-10-14 14:36:13.981895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.286 [2024-10-14 14:36:13.981902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.286 [2024-10-14 14:36:13.981912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.286 [2024-10-14 14:36:13.981919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.286 [2024-10-14 14:36:13.981930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 
nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.286 [2024-10-14 14:36:13.981937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.286 [2024-10-14 14:36:13.981947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.286 [2024-10-14 14:36:13.981954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.286 [2024-10-14 14:36:13.981963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.286 [2024-10-14 14:36:13.981971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.286 [2024-10-14 14:36:13.981980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.286 [2024-10-14 14:36:13.981987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.286 [2024-10-14 14:36:13.981996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.286 [2024-10-14 14:36:13.982004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.286 [2024-10-14 14:36:13.982015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.286 [2024-10-14 14:36:13.982022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:22:33.286 [2024-10-14 14:36:13.982031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.286 [2024-10-14 14:36:13.982039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.286 [2024-10-14 14:36:13.982048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.286 [2024-10-14 14:36:13.982055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.286 [2024-10-14 14:36:13.982138] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1037c20 was disconnected and freed. reset controller. 00:22:33.286 [2024-10-14 14:36:13.982411] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb8fb0 (9): Bad file descriptor 00:22:33.286 [2024-10-14 14:36:13.982440] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8f030 (9): Bad file descriptor 00:22:33.286 [2024-10-14 14:36:13.982455] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfe76e0 (9): Bad file descriptor 00:22:33.286 [2024-10-14 14:36:13.982470] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa5610 (9): Bad file descriptor 00:22:33.286 [2024-10-14 14:36:13.982483] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfeca50 (9): Bad file descriptor 00:22:33.286 [2024-10-14 14:36:13.982496] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb30e0 (9): Bad file descriptor 00:22:33.286 [2024-10-14 14:36:13.982509] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb85ae0 (9): Bad file descriptor 
00:22:33.286 [2024-10-14 14:36:13.982528] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb8c90 (9): Bad file descriptor
00:22:33.286 [2024-10-14 14:36:13.982543] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfe78c0 (9): Bad file descriptor
00:22:33.287 [2024-10-14 14:36:13.982557] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb842c0 (9): Bad file descriptor
00:22:33.287 [2024-10-14 14:36:13.986420] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:22:33.287 [2024-10-14 14:36:13.986798] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:22:33.287 [2024-10-14 14:36:13.986823] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:22:33.287 [2024-10-14 14:36:13.987321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:33.287 [2024-10-14 14:36:13.987361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb842c0 with addr=10.0.0.2, port=4420
00:22:33.287 [2024-10-14 14:36:13.987376] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb842c0 is same with the state(6) to be set
00:22:33.287 [2024-10-14 14:36:13.987435] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:33.287 [2024-10-14 14:36:13.987791] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:33.287 [2024-10-14 14:36:13.988359] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:33.287 [2024-10-14 14:36:13.988398] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:33.287 [2024-10-14 14:36:13.988434] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:33.287 [2024-10-14 14:36:13.988474] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:33.287 [2024-10-14 14:36:13.988824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:33.287 [2024-10-14 14:36:13.988845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb8fb0 with addr=10.0.0.2, port=4420
00:22:33.287 [2024-10-14 14:36:13.988853] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb8fb0 is same with the state(6) to be set
00:22:33.287 [2024-10-14 14:36:13.989293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:33.287 [2024-10-14 14:36:13.989331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaa5610 with addr=10.0.0.2, port=4420
00:22:33.287 [2024-10-14 14:36:13.989343] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa5610 is same with the state(6) to be set
00:22:33.287 [2024-10-14 14:36:13.989359] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb842c0 (9): Bad file descriptor
00:22:33.287 [2024-10-14 14:36:13.989428] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:33.287 [2024-10-14 14:36:13.989512] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb8fb0 (9): Bad file descriptor
00:22:33.287 [2024-10-14 14:36:13.989524] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa5610 (9): Bad file descriptor
00:22:33.287 [2024-10-14 14:36:13.989533] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:22:33.287 [2024-10-14 14:36:13.989540] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:22:33.287 [2024-10-14 14:36:13.989550] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:22:33.287 [2024-10-14 14:36:13.989616] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:33.287 [2024-10-14 14:36:13.989626] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:22:33.287 [2024-10-14 14:36:13.989632] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:22:33.287 [2024-10-14 14:36:13.989640] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:22:33.287 [2024-10-14 14:36:13.989651] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:22:33.287 [2024-10-14 14:36:13.989658] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:22:33.287 [2024-10-14 14:36:13.989665] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:22:33.287 [2024-10-14 14:36:13.989707] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:33.287 [2024-10-14 14:36:13.989714] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
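The reset-failure sequence above is dominated by a handful of repeating SPDK record shapes. As a hypothetical post-processing aid (not part of the test itself), a small script can tally `*ERROR*` records per originating function from console output like this; the record layout assumed here (`<jenkins-ts> [<wall-clock>] <file.c>:<line>:<func>: *LEVEL*: <message>`) is taken from the log lines above, and the helper name `tally_errors` is illustrative only.

```python
import re
from collections import Counter

# Assumed record layout, derived from the log above:
#   <jenkins-ts> [<wall-clock>] <file.c>:<line>:<func>: *LEVEL*: <message>
# Note some records have a space after the filename colon ("nvme_qpair.c: 474:").
ERROR_RE = re.compile(
    r"\[(?P<ts>[0-9 :.\-]+)\]\s+"
    r"(?P<src>\S+\.c):\s*(?P<line>\d+):(?P<func>\w+):\s+"
    r"\*ERROR\*:\s+(?P<msg>.*)"
)

def tally_errors(lines):
    """Count *ERROR* records per originating function (hypothetical helper)."""
    counts = Counter()
    for line in lines:
        m = ERROR_RE.search(line)
        if m:
            counts[m.group("func")] += 1
    return counts

sample = [
    "00:22:33.286 [2024-10-14 14:36:13.982411] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb8fb0 (9): Bad file descriptor",
    "00:22:33.287 [2024-10-14 14:36:13.987321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111",
]
print(tally_errors(sample))
```

A tally like this makes it easy to see at a glance whether a run failed mostly on socket connects (`posix_sock_create`) or on qpair flushes after disconnect.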
00:22:33.287 [2024-10-14 14:36:13.992547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.287 [2024-10-14 14:36:13.992563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.287 [2024-10-14 14:36:13.992581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.287 [2024-10-14 14:36:13.992589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.287 [2024-10-14 14:36:13.992599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.287 [2024-10-14 14:36:13.992606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.287 [2024-10-14 14:36:13.992616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.287 [2024-10-14 14:36:13.992624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.287 [2024-10-14 14:36:13.992639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.287 [2024-10-14 14:36:13.992647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.287 [2024-10-14 14:36:13.992657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.287 [2024-10-14 14:36:13.992664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.287 [2024-10-14 14:36:13.992674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.287 [2024-10-14 14:36:13.992681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.287 [2024-10-14 14:36:13.992691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.287 [2024-10-14 14:36:13.992699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.287 [2024-10-14 14:36:13.992708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.287 [2024-10-14 14:36:13.992716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.287 [2024-10-14 14:36:13.992725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.287 [2024-10-14 14:36:13.992734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.287 [2024-10-14 14:36:13.992743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.287 [2024-10-14 14:36:13.992751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.287 [2024-10-14 14:36:13.992760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.287 [2024-10-14 14:36:13.992768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.287 [2024-10-14 14:36:13.992778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.287 [2024-10-14 14:36:13.992785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.287 [2024-10-14 14:36:13.992795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.287 [2024-10-14 14:36:13.992804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.287 [2024-10-14 14:36:13.992814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.287 [2024-10-14 14:36:13.992821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.287 [2024-10-14 14:36:13.992831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.287 [2024-10-14 14:36:13.992839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.287 [2024-10-14 14:36:13.992849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.287 [2024-10-14 14:36:13.992858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.287 [2024-10-14 14:36:13.992867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.287 [2024-10-14 14:36:13.992875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.287 [2024-10-14 14:36:13.992884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.287 [2024-10-14 14:36:13.992892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.287 [2024-10-14 14:36:13.992901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.287 [2024-10-14 14:36:13.992909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.287 [2024-10-14 14:36:13.992919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.287 [2024-10-14 14:36:13.992927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.287 [2024-10-14 14:36:13.992937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.287 [2024-10-14 14:36:13.992944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.287 [2024-10-14 14:36:13.992953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.287 [2024-10-14 14:36:13.992961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.287 [2024-10-14 14:36:13.992970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.287 [2024-10-14 14:36:13.992977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.287 [2024-10-14 14:36:13.992987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.287 [2024-10-14 14:36:13.992994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.287 [2024-10-14 14:36:13.993004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.287 [2024-10-14 14:36:13.993011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.287 [2024-10-14 14:36:13.993021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.287 [2024-10-14 14:36:13.993028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.288 [2024-10-14 14:36:13.993038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.288 [2024-10-14 14:36:13.993045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.288 [2024-10-14 14:36:13.993055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.288 [2024-10-14 14:36:13.993070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.288 [2024-10-14 14:36:13.993081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.288 [2024-10-14 14:36:13.993089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.288 [2024-10-14 14:36:13.993098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.288 [2024-10-14 14:36:13.993106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.288 [2024-10-14 14:36:13.993115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.288 [2024-10-14 14:36:13.993123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.288 [2024-10-14 14:36:13.993132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.288 [2024-10-14 14:36:13.993140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.288 [2024-10-14 14:36:13.993150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.288 [2024-10-14 14:36:13.993157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.288 [2024-10-14 14:36:13.993167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.288 [2024-10-14 14:36:13.993174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.288 [2024-10-14 14:36:13.993184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.288 [2024-10-14 14:36:13.993191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.288 [2024-10-14 14:36:13.993201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.288 [2024-10-14 14:36:13.993208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.288 [2024-10-14 14:36:13.993218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.288 [2024-10-14 14:36:13.993226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.288 [2024-10-14 14:36:13.993235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.288 [2024-10-14 14:36:13.993243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.288 [2024-10-14 14:36:13.993252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.288 [2024-10-14 14:36:13.993259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.288 [2024-10-14 14:36:13.993269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.288 [2024-10-14 14:36:13.993276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.288 [2024-10-14 14:36:13.993286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.288 [2024-10-14 14:36:13.993295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.288 [2024-10-14 14:36:13.993305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.288 [2024-10-14 14:36:13.993312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.288 [2024-10-14 14:36:13.993322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.288 [2024-10-14 14:36:13.993329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.288 [2024-10-14 14:36:13.993338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.288 [2024-10-14 14:36:13.993346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.288 [2024-10-14 14:36:13.993356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.288 [2024-10-14 14:36:13.993363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.288 [2024-10-14 14:36:13.993372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.288 [2024-10-14 14:36:13.993380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.288 [2024-10-14 14:36:13.993390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.288 [2024-10-14 14:36:13.993397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.288 [2024-10-14 14:36:13.993407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.288 [2024-10-14 14:36:13.993414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.288 [2024-10-14 14:36:13.993424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.288 [2024-10-14 14:36:13.993432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.288 [2024-10-14 14:36:13.993441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.288 [2024-10-14 14:36:13.993449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.288 [2024-10-14 14:36:13.993459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.288 [2024-10-14 14:36:13.993466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.288 [2024-10-14 14:36:13.993476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.288 [2024-10-14 14:36:13.993483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.288 [2024-10-14 14:36:13.993493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.288 [2024-10-14 14:36:13.993500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.288 [2024-10-14 14:36:13.993516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.288 [2024-10-14 14:36:13.993523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.288 [2024-10-14 14:36:13.993533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.288 [2024-10-14 14:36:13.993540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.288 [2024-10-14 14:36:13.993550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.288 [2024-10-14 14:36:13.993558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.288 [2024-10-14 14:36:13.993567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.288 [2024-10-14 14:36:13.993575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.288 [2024-10-14 14:36:13.993584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.288 [2024-10-14 14:36:13.993591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.288 [2024-10-14 14:36:13.993601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.288 [2024-10-14 14:36:13.993609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.288 [2024-10-14 14:36:13.993618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.288 [2024-10-14 14:36:13.993626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.288 [2024-10-14 14:36:13.993635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.288 [2024-10-14 14:36:13.993643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.288 [2024-10-14 14:36:13.993652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.288 [2024-10-14 14:36:13.993659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.288 [2024-10-14 14:36:13.993669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.289 [2024-10-14 14:36:13.993676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.289 [2024-10-14 14:36:13.993685] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd93040 is same with the state(6) to be set
00:22:33.289 [2024-10-14 14:36:13.994982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.289 [2024-10-14 14:36:13.994997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.289 [2024-10-14 14:36:13.995009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.289 [2024-10-14 14:36:13.995019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.289 [2024-10-14 14:36:13.995033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.289 [2024-10-14 14:36:13.995042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.289 [2024-10-14 14:36:13.995053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.289 [2024-10-14 14:36:13.995067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.289 [2024-10-14 14:36:13.995078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.289 [2024-10-14 14:36:13.995087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.289 [2024-10-14 14:36:13.995098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.289 [2024-10-14 14:36:13.995106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.289 [2024-10-14 14:36:13.995115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.289 [2024-10-14 14:36:13.995123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.289 [2024-10-14 14:36:13.995132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.289 [2024-10-14 14:36:13.995140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.289 [2024-10-14 14:36:13.995149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.289 [2024-10-14 14:36:13.995157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.289 [2024-10-14 14:36:13.995166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.289 [2024-10-14 14:36:13.995173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.289 [2024-10-14 14:36:13.995183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.289 [2024-10-14 14:36:13.995190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.289 [2024-10-14 14:36:13.995200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.289 [2024-10-14 14:36:13.995207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.289 [2024-10-14 14:36:13.995217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.289 [2024-10-14 14:36:13.995224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.289 [2024-10-14 14:36:13.995233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.289 [2024-10-14 14:36:13.995241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.289 [2024-10-14 14:36:13.995250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.289 [2024-10-14 14:36:13.995259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.289 [2024-10-14 14:36:13.995269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.289 [2024-10-14 14:36:13.995276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.289 [2024-10-14 14:36:13.995286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.289 [2024-10-14 14:36:13.995294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.289 [2024-10-14 14:36:13.995303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.289 [2024-10-14 14:36:13.995310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.289 [2024-10-14 14:36:13.995320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.289 [2024-10-14 14:36:13.995327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.289 [2024-10-14 14:36:13.995337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.289 [2024-10-14 14:36:13.995344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.289 [2024-10-14 14:36:13.995353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.289 [2024-10-14 14:36:13.995361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.289 [2024-10-14 14:36:13.995370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.289 [2024-10-14 14:36:13.995378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.289 [2024-10-14 14:36:13.995387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.289 [2024-10-14 14:36:13.995394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.289 [2024-10-14 14:36:13.995404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.289 [2024-10-14 14:36:13.995411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.289 [2024-10-14 14:36:13.995421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.289 [2024-10-14 14:36:13.995428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.289 [2024-10-14 14:36:13.995437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.289 [2024-10-14 14:36:13.995445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000
p:0 m:0 dnr:0 00:22:33.289 [2024-10-14 14:36:13.995454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.289 [2024-10-14 14:36:13.995462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.289 [2024-10-14 14:36:13.995473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.289 [2024-10-14 14:36:13.995481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.289 [2024-10-14 14:36:13.995491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.289 [2024-10-14 14:36:13.995499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.289 [2024-10-14 14:36:13.995508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.289 [2024-10-14 14:36:13.995516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.289 [2024-10-14 14:36:13.995525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.289 [2024-10-14 14:36:13.995532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.289 [2024-10-14 14:36:13.995542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.289 [2024-10-14 
14:36:13.995549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.289 [2024-10-14 14:36:13.995559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.289 [2024-10-14 14:36:13.995566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.289 [2024-10-14 14:36:13.995576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.289 [2024-10-14 14:36:13.995583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.289 [2024-10-14 14:36:13.995592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.289 [2024-10-14 14:36:13.995600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.289 [2024-10-14 14:36:13.995609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.289 [2024-10-14 14:36:13.995616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.289 [2024-10-14 14:36:13.995626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.289 [2024-10-14 14:36:13.995634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.289 [2024-10-14 14:36:13.995644] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.289 [2024-10-14 14:36:13.995651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.289 [2024-10-14 14:36:13.995661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.289 [2024-10-14 14:36:13.995668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.289 [2024-10-14 14:36:13.995678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.289 [2024-10-14 14:36:13.995687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.289 [2024-10-14 14:36:13.995697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.289 [2024-10-14 14:36:13.995704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.290 [2024-10-14 14:36:13.995714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.290 [2024-10-14 14:36:13.995721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.290 [2024-10-14 14:36:13.995730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.290 [2024-10-14 14:36:13.995738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.290 [2024-10-14 14:36:13.995747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.290 [2024-10-14 14:36:13.995755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.290 [2024-10-14 14:36:13.995764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.290 [2024-10-14 14:36:13.995771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.290 [2024-10-14 14:36:13.995781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.290 [2024-10-14 14:36:13.995788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.290 [2024-10-14 14:36:13.995797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.290 [2024-10-14 14:36:13.995805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.290 [2024-10-14 14:36:13.995814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.290 [2024-10-14 14:36:13.995821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.290 [2024-10-14 14:36:13.995831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.290 
[2024-10-14 14:36:13.995838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.290 [2024-10-14 14:36:13.995849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.290 [2024-10-14 14:36:13.995856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.290 [2024-10-14 14:36:13.995866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.290 [2024-10-14 14:36:13.995873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.290 [2024-10-14 14:36:13.995883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.290 [2024-10-14 14:36:13.995891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.290 [2024-10-14 14:36:13.995902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.290 [2024-10-14 14:36:13.995909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.290 [2024-10-14 14:36:13.995919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.290 [2024-10-14 14:36:13.995926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.290 [2024-10-14 14:36:13.995935] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.290 [2024-10-14 14:36:13.995943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.290 [2024-10-14 14:36:13.995952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.290 [2024-10-14 14:36:13.995959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.290 [2024-10-14 14:36:13.995969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.290 [2024-10-14 14:36:13.995976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.290 [2024-10-14 14:36:13.995985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.290 [2024-10-14 14:36:13.995993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.290 [2024-10-14 14:36:13.996004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.290 [2024-10-14 14:36:13.996011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.290 [2024-10-14 14:36:13.996020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.290 [2024-10-14 14:36:13.996028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.290 [2024-10-14 14:36:13.996037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.290 [2024-10-14 14:36:13.996045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.290 [2024-10-14 14:36:13.996054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.290 [2024-10-14 14:36:13.996065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.290 [2024-10-14 14:36:13.996075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.290 [2024-10-14 14:36:13.996082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.290 [2024-10-14 14:36:13.996092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.290 [2024-10-14 14:36:13.996099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.290 [2024-10-14 14:36:13.996107] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd942a0 is same with the state(6) to be set 00:22:33.556 [2024-10-14 14:36:13.997386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.556 [2024-10-14 14:36:13.997400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:33.556 [2024-10-14 14:36:13.997415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.556 [2024-10-14 14:36:13.997426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.556 [2024-10-14 14:36:13.997440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.556 [2024-10-14 14:36:13.997450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.556 [2024-10-14 14:36:13.997460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.556 [2024-10-14 14:36:13.997467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.556 [2024-10-14 14:36:13.997477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.556 [2024-10-14 14:36:13.997484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.556 [2024-10-14 14:36:13.997494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.556 [2024-10-14 14:36:13.997502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.556 [2024-10-14 14:36:13.997511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.556 [2024-10-14 14:36:13.997519] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.556 [2024-10-14 14:36:13.997528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.556 [2024-10-14 14:36:13.997536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.556 [2024-10-14 14:36:13.997546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.556 [2024-10-14 14:36:13.997553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.556 [2024-10-14 14:36:13.997563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.556 [2024-10-14 14:36:13.997570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.556 [2024-10-14 14:36:13.997580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.557 [2024-10-14 14:36:13.997587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.557 [2024-10-14 14:36:13.997597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.557 [2024-10-14 14:36:13.997604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.557 [2024-10-14 14:36:13.997614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 
nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.557 [2024-10-14 14:36:13.997623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.557 [2024-10-14 14:36:13.997633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.557 [2024-10-14 14:36:13.997640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.557 [2024-10-14 14:36:13.997649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.557 [2024-10-14 14:36:13.997657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.557 [2024-10-14 14:36:13.997666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.557 [2024-10-14 14:36:13.997674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.557 [2024-10-14 14:36:13.997683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.557 [2024-10-14 14:36:13.997691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.557 [2024-10-14 14:36:13.997700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.557 [2024-10-14 14:36:13.997707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:33.557 [2024-10-14 14:36:13.997717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.557 [2024-10-14 14:36:13.997724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.557 [2024-10-14 14:36:13.997733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.557 [2024-10-14 14:36:13.997741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.557 [2024-10-14 14:36:13.997750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.557 [2024-10-14 14:36:13.997758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.557 [2024-10-14 14:36:13.997767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.557 [2024-10-14 14:36:13.997774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.557 [2024-10-14 14:36:13.997784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.557 [2024-10-14 14:36:13.997791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.557 [2024-10-14 14:36:13.997801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.557 [2024-10-14 14:36:13.997808] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.557 [2024-10-14 14:36:13.997817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.557 [2024-10-14 14:36:13.997824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.557 [2024-10-14 14:36:13.997834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.557 [2024-10-14 14:36:13.997843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.557 [2024-10-14 14:36:13.997852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.557 [2024-10-14 14:36:13.997860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.557 [2024-10-14 14:36:13.997869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.557 [2024-10-14 14:36:13.997876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.557 [2024-10-14 14:36:13.997886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.557 [2024-10-14 14:36:13.997893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.557 [2024-10-14 14:36:13.997903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.557 [2024-10-14 14:36:13.997910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.557 [2024-10-14 14:36:13.997919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.557 [2024-10-14 14:36:13.997927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.557 [2024-10-14 14:36:13.997936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.557 [2024-10-14 14:36:13.997943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.557 [2024-10-14 14:36:13.997953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.557 [2024-10-14 14:36:13.997960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.557 [2024-10-14 14:36:13.997970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.557 [2024-10-14 14:36:13.997978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.557 [2024-10-14 14:36:13.997987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.557 [2024-10-14 14:36:13.997995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:33.557 [2024-10-14 14:36:13.998004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.557 [2024-10-14 14:36:13.998012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.557 [2024-10-14 14:36:13.998021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.557 [2024-10-14 14:36:13.998028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.557 [2024-10-14 14:36:13.998038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.557 [2024-10-14 14:36:13.998046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.557 [2024-10-14 14:36:13.998057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.557 [2024-10-14 14:36:13.998072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.557 [2024-10-14 14:36:13.998082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.557 [2024-10-14 14:36:13.998089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.557 [2024-10-14 14:36:13.998099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.557 [2024-10-14 
14:36:13.998106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.557 [2024-10-14 14:36:13.998116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.557 [2024-10-14 14:36:13.998123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.557 [2024-10-14 14:36:13.998133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.557 [2024-10-14 14:36:13.998140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.557 [2024-10-14 14:36:13.998150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.557 [2024-10-14 14:36:13.998158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.557 [2024-10-14 14:36:13.998168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.557 [2024-10-14 14:36:13.998176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.557 [2024-10-14 14:36:13.998186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.557 [2024-10-14 14:36:13.998193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.557 [2024-10-14 14:36:13.998203] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.557 [2024-10-14 14:36:13.998210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.557 [2024-10-14 14:36:13.998219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.557 [2024-10-14 14:36:13.998227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.557 [2024-10-14 14:36:13.998236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.557 [2024-10-14 14:36:13.998244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.557 [2024-10-14 14:36:13.998253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.557 [2024-10-14 14:36:13.998260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.557 [2024-10-14 14:36:13.998270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.557 [2024-10-14 14:36:13.998278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.557 [2024-10-14 14:36:13.998288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.557 [2024-10-14 14:36:13.998295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.557 [2024-10-14 14:36:13.998305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.558 [2024-10-14 14:36:13.998312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.558 [2024-10-14 14:36:13.998321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.558 [2024-10-14 14:36:13.998329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.558 [2024-10-14 14:36:13.998338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.558 [2024-10-14 14:36:13.998345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.558 [2024-10-14 14:36:13.998355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.558 [2024-10-14 14:36:13.998362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.558 [2024-10-14 14:36:13.998371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.558 [2024-10-14 14:36:13.998379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.558 [2024-10-14 14:36:13.998388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.558 
[2024-10-14 14:36:13.998395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.558 [2024-10-14 14:36:13.998405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.558 [2024-10-14 14:36:13.998412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.558 [2024-10-14 14:36:13.998422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.558 [2024-10-14 14:36:13.998429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.558 [2024-10-14 14:36:13.998440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.558 [2024-10-14 14:36:13.998447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.558 [2024-10-14 14:36:13.998456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.558 [2024-10-14 14:36:13.998464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.558 [2024-10-14 14:36:13.998473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.558 [2024-10-14 14:36:13.998481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.558 [2024-10-14 14:36:13.998492] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.558 [2024-10-14 14:36:13.998499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.558 [2024-10-14 14:36:13.998507] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1035160 is same with the state(6) to be set 00:22:33.558 [2024-10-14 14:36:13.999778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.558 [2024-10-14 14:36:13.999792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.558 [2024-10-14 14:36:13.999806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.558 [2024-10-14 14:36:13.999815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.558 [2024-10-14 14:36:13.999827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.558 [2024-10-14 14:36:13.999837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.558 [2024-10-14 14:36:13.999849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.558 [2024-10-14 14:36:13.999860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.558 [2024-10-14 14:36:13.999873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 
lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.558 [2024-10-14 14:36:13.999881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.558 [2024-10-14 14:36:13.999891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.558 [2024-10-14 14:36:13.999898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.558 [2024-10-14 14:36:13.999909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.558 [2024-10-14 14:36:13.999917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.558 [2024-10-14 14:36:13.999926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.558 [2024-10-14 14:36:13.999934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.558 [2024-10-14 14:36:13.999944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.558 [2024-10-14 14:36:13.999952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.558 [2024-10-14 14:36:13.999961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.558 [2024-10-14 14:36:13.999969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:33.558 [2024-10-14 14:36:13.999978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.558 [2024-10-14 14:36:13.999986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.558 [2024-10-14 14:36:13.999999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.558 [2024-10-14 14:36:14.000006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.558 [2024-10-14 14:36:14.000016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.558 [2024-10-14 14:36:14.000023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.558 [2024-10-14 14:36:14.000034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.558 [2024-10-14 14:36:14.000042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.558 [2024-10-14 14:36:14.000051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.558 [2024-10-14 14:36:14.000059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.558 [2024-10-14 14:36:14.000073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.558 [2024-10-14 14:36:14.000082] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.558 [2024-10-14 14:36:14.000092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.558 [2024-10-14 14:36:14.000100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.558 [2024-10-14 14:36:14.000110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.558 [2024-10-14 14:36:14.000117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.558 [2024-10-14 14:36:14.000127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.558 [2024-10-14 14:36:14.000134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.558 [2024-10-14 14:36:14.000144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.558 [2024-10-14 14:36:14.000151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.558 [2024-10-14 14:36:14.000161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.558 [2024-10-14 14:36:14.000171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.558 [2024-10-14 14:36:14.000181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.558 [2024-10-14 14:36:14.000189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.558 [2024-10-14 14:36:14.000199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.558 [2024-10-14 14:36:14.000207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.558 [2024-10-14 14:36:14.000216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.558 [2024-10-14 14:36:14.000225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.558 [2024-10-14 14:36:14.000234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.558 [2024-10-14 14:36:14.000242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.558 [2024-10-14 14:36:14.000252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.558 [2024-10-14 14:36:14.000259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.558 [2024-10-14 14:36:14.000269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.558 [2024-10-14 14:36:14.000276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:33.558 [2024-10-14 14:36:14.000286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.558 [2024-10-14 14:36:14.000293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.558 [2024-10-14 14:36:14.000302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.558 [2024-10-14 14:36:14.000310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.558 [2024-10-14 14:36:14.000319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.558 [2024-10-14 14:36:14.000327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.559 [2024-10-14 14:36:14.000336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.559 [2024-10-14 14:36:14.000344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.559 [2024-10-14 14:36:14.000353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.559 [2024-10-14 14:36:14.000361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.559 [2024-10-14 14:36:14.000370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.559 [2024-10-14 
14:36:14.000378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.559 [2024-10-14 14:36:14.000388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.559 [2024-10-14 14:36:14.000395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.559 [2024-10-14 14:36:14.000406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.559 [2024-10-14 14:36:14.000413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.559 [2024-10-14 14:36:14.000423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.559 [2024-10-14 14:36:14.000430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.559 [2024-10-14 14:36:14.000441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.559 [2024-10-14 14:36:14.000449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.559 [2024-10-14 14:36:14.000458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.559 [2024-10-14 14:36:14.000466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.559 [2024-10-14 14:36:14.000475] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.559 [2024-10-14 14:36:14.000483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.559 [2024-10-14 14:36:14.000493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.559 [2024-10-14 14:36:14.000500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.559 [2024-10-14 14:36:14.000510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.559 [2024-10-14 14:36:14.000517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.559 [2024-10-14 14:36:14.000526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.559 [2024-10-14 14:36:14.000534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.559 [2024-10-14 14:36:14.000543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.559 [2024-10-14 14:36:14.000551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.559 [2024-10-14 14:36:14.000560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.559 [2024-10-14 14:36:14.000567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.559 [2024-10-14 14:36:14.000577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.559 [2024-10-14 14:36:14.000585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.559 [2024-10-14 14:36:14.000594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.559 [2024-10-14 14:36:14.000601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.559 [2024-10-14 14:36:14.000611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.559 [2024-10-14 14:36:14.000618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.559 [2024-10-14 14:36:14.000629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.559 [2024-10-14 14:36:14.000636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.559 [2024-10-14 14:36:14.000646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.559 [2024-10-14 14:36:14.000658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.559 [2024-10-14 14:36:14.000667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.559 
[2024-10-14 14:36:14.000675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.559 [2024-10-14 14:36:14.000684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.559 [2024-10-14 14:36:14.000691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.559 [2024-10-14 14:36:14.000701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.559 [2024-10-14 14:36:14.000708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.559 [2024-10-14 14:36:14.000717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.559 [2024-10-14 14:36:14.000725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.559 [2024-10-14 14:36:14.000734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.559 [2024-10-14 14:36:14.000741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.559 [2024-10-14 14:36:14.000751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.559 [2024-10-14 14:36:14.000758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.559 [2024-10-14 14:36:14.000767] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.559 [2024-10-14 14:36:14.000775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.559 [2024-10-14 14:36:14.000784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.559 [2024-10-14 14:36:14.000791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.559 [2024-10-14 14:36:14.000801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.559 [2024-10-14 14:36:14.000808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.559 [2024-10-14 14:36:14.000817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.559 [2024-10-14 14:36:14.000825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.559 [2024-10-14 14:36:14.000834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.559 [2024-10-14 14:36:14.000842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.559 [2024-10-14 14:36:14.000852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.559 [2024-10-14 14:36:14.000859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.559 [2024-10-14 14:36:14.000870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.559 [2024-10-14 14:36:14.000877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.559 [2024-10-14 14:36:14.000888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.559 [2024-10-14 14:36:14.000895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.559 [2024-10-14 14:36:14.000905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.559 [2024-10-14 14:36:14.000912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.559 [2024-10-14 14:36:14.000920] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10391a0 is same with the state(6) to be set 00:22:33.559 [2024-10-14 14:36:14.002199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.559 [2024-10-14 14:36:14.002213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.559 [2024-10-14 14:36:14.002226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.559 [2024-10-14 14:36:14.002236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:33.559 [2024-10-14 14:36:14.002248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.559 [2024-10-14 14:36:14.002257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.559 [2024-10-14 14:36:14.002268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.559 [2024-10-14 14:36:14.002277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.559 [2024-10-14 14:36:14.002287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.559 [2024-10-14 14:36:14.002295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.559 [2024-10-14 14:36:14.002304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.559 [2024-10-14 14:36:14.002312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.559 [2024-10-14 14:36:14.002321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.559 [2024-10-14 14:36:14.002329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.559 [2024-10-14 14:36:14.002338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.560 [2024-10-14 14:36:14.002345] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.560 [2024-10-14 14:36:14.002355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.560 [2024-10-14 14:36:14.002362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.560 [2024-10-14 14:36:14.002376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.560 [2024-10-14 14:36:14.002383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.560 [2024-10-14 14:36:14.002393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.560 [2024-10-14 14:36:14.002400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.560 [2024-10-14 14:36:14.002409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.560 [2024-10-14 14:36:14.002417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.560 [2024-10-14 14:36:14.002427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.560 [2024-10-14 14:36:14.002434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.560 [2024-10-14 14:36:14.002443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 
nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.560 [2024-10-14 14:36:14.002451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.560 [2024-10-14 14:36:14.002460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.560 [2024-10-14 14:36:14.002467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical command/completion pairs repeat for cid:15 through cid:62 (lba 26496-32512, step 128), each aborted with SQ DELETION (00/08) ...]
00:22:33.561 [2024-10-14 14:36:14.003302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.561 [2024-10-14 14:36:14.003310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.561 [2024-10-14 14:36:14.003319] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x103a720 is same with the state(6) to be set
00:22:33.561 [2024-10-14 14:36:14.004589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.561 [2024-10-14 14:36:14.004605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical command/completion pairs repeat for cid:1 through cid:60 (lba 16512-24064, step 128), each aborted with SQ DELETION (00/08) ...]
00:22:33.562 [2024-10-14 14:36:14.005697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.562 [2024-10-14 14:36:14.005705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.563 [2024-10-14 14:36:14.005714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.563 [2024-10-14 14:36:14.005721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.563 [2024-10-14 14:36:14.005731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.563 [2024-10-14 14:36:14.005738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.563 [2024-10-14 14:36:14.005746] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf90720 is same with the state(6) to be set 00:22:33.563 [2024-10-14 14:36:14.007022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.563 [2024-10-14 14:36:14.007035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.563 [2024-10-14 14:36:14.007046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.563 [2024-10-14 14:36:14.007054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.563 [2024-10-14 14:36:14.007067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.563 [2024-10-14 14:36:14.007075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:33.563 [2024-10-14 14:36:14.007085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.563 [2024-10-14 14:36:14.007093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.563 [2024-10-14 14:36:14.007102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.563 [2024-10-14 14:36:14.007110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.563 [2024-10-14 14:36:14.007119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.563 [2024-10-14 14:36:14.007131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.563 [2024-10-14 14:36:14.007141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.563 [2024-10-14 14:36:14.007148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.563 [2024-10-14 14:36:14.007158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.563 [2024-10-14 14:36:14.007165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.563 [2024-10-14 14:36:14.007175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.563 [2024-10-14 14:36:14.007182] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.563 [2024-10-14 14:36:14.007191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.563 [2024-10-14 14:36:14.007199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.563 [2024-10-14 14:36:14.007208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.563 [2024-10-14 14:36:14.007216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.563 [2024-10-14 14:36:14.007225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.563 [2024-10-14 14:36:14.007232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.563 [2024-10-14 14:36:14.007241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.563 [2024-10-14 14:36:14.007249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.563 [2024-10-14 14:36:14.007258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.563 [2024-10-14 14:36:14.007266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.563 [2024-10-14 14:36:14.007275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 
nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.563 [2024-10-14 14:36:14.007282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.563 [2024-10-14 14:36:14.007292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.563 [2024-10-14 14:36:14.007299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.563 [2024-10-14 14:36:14.007309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.563 [2024-10-14 14:36:14.007316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.563 [2024-10-14 14:36:14.007326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.563 [2024-10-14 14:36:14.007333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.563 [2024-10-14 14:36:14.007344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.563 [2024-10-14 14:36:14.007351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.563 [2024-10-14 14:36:14.007360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.563 [2024-10-14 14:36:14.007367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:33.563 [2024-10-14 14:36:14.007378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.563 [2024-10-14 14:36:14.007385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.563 [2024-10-14 14:36:14.007395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.563 [2024-10-14 14:36:14.007402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.563 [2024-10-14 14:36:14.007411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.563 [2024-10-14 14:36:14.007418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.563 [2024-10-14 14:36:14.007428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.563 [2024-10-14 14:36:14.007435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.563 [2024-10-14 14:36:14.007444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.563 [2024-10-14 14:36:14.007452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.563 [2024-10-14 14:36:14.007461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.563 [2024-10-14 14:36:14.007469] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.563 [2024-10-14 14:36:14.007478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.563 [2024-10-14 14:36:14.007486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.563 [2024-10-14 14:36:14.007495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.563 [2024-10-14 14:36:14.007502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.563 [2024-10-14 14:36:14.007512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.563 [2024-10-14 14:36:14.007520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.563 [2024-10-14 14:36:14.007529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.563 [2024-10-14 14:36:14.007536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.563 [2024-10-14 14:36:14.007546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.563 [2024-10-14 14:36:14.007555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.563 [2024-10-14 14:36:14.007564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.563 [2024-10-14 14:36:14.007571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.563 [2024-10-14 14:36:14.007581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.563 [2024-10-14 14:36:14.007588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.563 [2024-10-14 14:36:14.007598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.563 [2024-10-14 14:36:14.007605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.563 [2024-10-14 14:36:14.007615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.563 [2024-10-14 14:36:14.007622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.563 [2024-10-14 14:36:14.007632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.563 [2024-10-14 14:36:14.007639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.563 [2024-10-14 14:36:14.007649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.563 [2024-10-14 14:36:14.007656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:33.563 [2024-10-14 14:36:14.007666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.563 [2024-10-14 14:36:14.007673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.563 [2024-10-14 14:36:14.007683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.563 [2024-10-14 14:36:14.007690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.563 [2024-10-14 14:36:14.007700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.564 [2024-10-14 14:36:14.007707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.564 [2024-10-14 14:36:14.007716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.564 [2024-10-14 14:36:14.007724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.564 [2024-10-14 14:36:14.007733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.564 [2024-10-14 14:36:14.007740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.564 [2024-10-14 14:36:14.007749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.564 [2024-10-14 
14:36:14.007757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.564 [2024-10-14 14:36:14.007771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.564 [2024-10-14 14:36:14.007779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.564 [2024-10-14 14:36:14.007788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.564 [2024-10-14 14:36:14.007795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.564 [2024-10-14 14:36:14.007805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.564 [2024-10-14 14:36:14.007813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.564 [2024-10-14 14:36:14.007822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.564 [2024-10-14 14:36:14.007830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.564 [2024-10-14 14:36:14.007839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.564 [2024-10-14 14:36:14.007847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.564 [2024-10-14 14:36:14.007856] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.564 [2024-10-14 14:36:14.007864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.564 [2024-10-14 14:36:14.007873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.564 [2024-10-14 14:36:14.007881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.564 [2024-10-14 14:36:14.007890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.564 [2024-10-14 14:36:14.007897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.564 [2024-10-14 14:36:14.007906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.564 [2024-10-14 14:36:14.007914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.564 [2024-10-14 14:36:14.007923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.564 [2024-10-14 14:36:14.007930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.564 [2024-10-14 14:36:14.007939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.564 [2024-10-14 14:36:14.007947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.564 [2024-10-14 14:36:14.007956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.564 [2024-10-14 14:36:14.007963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.564 [2024-10-14 14:36:14.007973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.564 [2024-10-14 14:36:14.007982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.564 [2024-10-14 14:36:14.007991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.564 [2024-10-14 14:36:14.007999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.564 [2024-10-14 14:36:14.008008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.564 [2024-10-14 14:36:14.008016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.564 [2024-10-14 14:36:14.008025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.564 [2024-10-14 14:36:14.008032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.564 [2024-10-14 14:36:14.008042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.564 
[2024-10-14 14:36:14.008049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.564 [2024-10-14 14:36:14.008059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.564 [2024-10-14 14:36:14.008070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.564 [2024-10-14 14:36:14.008079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.564 [2024-10-14 14:36:14.008087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.564 [2024-10-14 14:36:14.008096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.564 [2024-10-14 14:36:14.008104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.564 [2024-10-14 14:36:14.008113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.564 [2024-10-14 14:36:14.008120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.564 [2024-10-14 14:36:14.008128] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf91ca0 is same with the state(6) to be set 00:22:33.564 [2024-10-14 14:36:14.010048] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:33.564 [2024-10-14 14:36:14.010080] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode2] resetting controller
00:22:33.564 [2024-10-14 14:36:14.010090] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:22:33.564 [2024-10-14 14:36:14.010100] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:22:33.564 [2024-10-14 14:36:14.010180] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:22:33.564 [2024-10-14 14:36:14.010197] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:22:33.564 [2024-10-14 14:36:14.010210] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:22:33.564 [2024-10-14 14:36:14.010296] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:22:33.564 [2024-10-14 14:36:14.010308] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:22:33.564 task offset: 24576 on job bdev=Nvme3n1 fails
00:22:33.564
00:22:33.564 Latency(us)
00:22:33.564 [2024-10-14T12:36:14.291Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:33.564 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:33.564 Job: Nvme1n1 ended in about 0.98 seconds with error
00:22:33.564 Verification LBA range: start 0x0 length 0x400
00:22:33.564 Nvme1n1 : 0.98 195.13 12.20 65.04 0.00 243301.33 17913.17 237677.23
00:22:33.564 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:33.564 Job: Nvme2n1 ended in about 0.99 seconds with error
00:22:33.564 Verification LBA range: start 0x0 length 0x400
00:22:33.564 Nvme2n1 : 0.99 129.77 8.11 64.89 0.00 318755.56 25886.72 249910.61
00:22:33.564 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:33.564 Job: Nvme3n1 ended in about 0.97 seconds with error
00:22:33.564 Verification LBA range: start 0x0 length 0x400
00:22:33.564 Nvme3n1 : 0.97 197.35 12.33 65.78 0.00 230786.99 12342.61 255153.49
00:22:33.564 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:33.564 Job: Nvme4n1 ended in about 0.99 seconds with error
00:22:33.564 Verification LBA range: start 0x0 length 0x400
00:22:33.564 Nvme4n1 : 0.99 194.19 12.14 64.73 0.00 229888.43 19114.67 251658.24
00:22:33.564 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:33.564 Job: Nvme5n1 ended in about 0.97 seconds with error
00:22:33.564 Verification LBA range: start 0x0 length 0x400
00:22:33.564 Nvme5n1 : 0.97 197.10 12.32 65.70 0.00 221391.89 13107.20 251658.24
00:22:33.564 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:33.564 Job: Nvme6n1 ended in about 0.98 seconds with error
00:22:33.564 Verification LBA range: start 0x0 length 0x400
00:22:33.564 Nvme6n1 : 0.98 196.86 12.30 65.62 0.00 216836.27 14964.05 249910.61
00:22:33.564 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:33.564 Job: Nvme7n1 ended in about 0.99 seconds with error
00:22:33.564 Verification LBA range: start 0x0 length 0x400
00:22:33.564 Nvme7n1 : 0.99 193.72 12.11 64.57 0.00 216004.27 41943.04 205346.13
00:22:33.564 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:33.564 Job: Nvme8n1 ended in about 0.99 seconds with error
00:22:33.564 Verification LBA range: start 0x0 length 0x400
00:22:33.564 Nvme8n1 : 0.99 193.25 12.08 64.42 0.00 211764.69 19879.25 230686.72
00:22:33.564 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:33.564 Job: Nvme9n1 ended in about 1.00 seconds with error
00:22:33.564 Verification LBA range: start 0x0 length 0x400
00:22:33.564 Nvme9n1 : 1.00 128.52 8.03 64.26 0.00 276754.20 17476.27 251658.24
00:22:33.564 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:33.564 Job: Nvme10n1 ended in about 1.00 seconds with error
00:22:33.564 Verification LBA range: start 0x0 length 0x400
00:22:33.564 Nvme10n1 : 1.00 128.21 8.01 64.11 0.00 271249.64 17913.17 272629.76
00:22:33.564 [2024-10-14T12:36:14.291Z] ===================================================================================================================
00:22:33.564 [2024-10-14T12:36:14.291Z] Total : 1754.11 109.63 649.12 0.00 240004.69 12342.61 272629.76
00:22:33.564 [2024-10-14 14:36:14.035152] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:22:33.565 [2024-10-14 14:36:14.035183] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:22:33.565 [2024-10-14 14:36:14.035620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:33.565 [2024-10-14 14:36:14.035639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb8f030 with addr=10.0.0.2, port=4420
00:22:33.565 [2024-10-14 14:36:14.035648] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb8f030 is same with the state(6) to be set
00:22:33.565 [2024-10-14 14:36:14.035978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:33.565 [2024-10-14 14:36:14.035993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb85ae0 with addr=10.0.0.2, port=4420
00:22:33.565 [2024-10-14 14:36:14.036001] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb85ae0 is same with the state(6) to be set
00:22:33.565 [2024-10-14 14:36:14.036365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:33.565 [2024-10-14 14:36:14.036377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb8c90 with addr=10.0.0.2, port=4420
00:22:33.565 [2024-10-14 14:36:14.036385] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*:
The recv state of tqpair=0xfb8c90 is same with the state(6) to be set 00:22:33.565 [2024-10-14 14:36:14.036714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:33.565 [2024-10-14 14:36:14.036723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe78c0 with addr=10.0.0.2, port=4420 00:22:33.565 [2024-10-14 14:36:14.036730] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfe78c0 is same with the state(6) to be set 00:22:33.565 [2024-10-14 14:36:14.038590] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:22:33.565 [2024-10-14 14:36:14.038605] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:22:33.565 [2024-10-14 14:36:14.038839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:33.565 [2024-10-14 14:36:14.038853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb30e0 with addr=10.0.0.2, port=4420 00:22:33.565 [2024-10-14 14:36:14.038861] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb30e0 is same with the state(6) to be set 00:22:33.565 [2024-10-14 14:36:14.039183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:33.565 [2024-10-14 14:36:14.039194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfeca50 with addr=10.0.0.2, port=4420 00:22:33.565 [2024-10-14 14:36:14.039201] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfeca50 is same with the state(6) to be set 00:22:33.565 [2024-10-14 14:36:14.039528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:33.565 [2024-10-14 14:36:14.039538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe76e0 with addr=10.0.0.2, port=4420 00:22:33.565 
[2024-10-14 14:36:14.039546] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfe76e0 is same with the state(6) to be set 00:22:33.565 [2024-10-14 14:36:14.039558] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb8f030 (9): Bad file descriptor 00:22:33.565 [2024-10-14 14:36:14.039569] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb85ae0 (9): Bad file descriptor 00:22:33.565 [2024-10-14 14:36:14.039578] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb8c90 (9): Bad file descriptor 00:22:33.565 [2024-10-14 14:36:14.039587] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfe78c0 (9): Bad file descriptor 00:22:33.565 [2024-10-14 14:36:14.039613] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:33.565 [2024-10-14 14:36:14.039629] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:33.565 [2024-10-14 14:36:14.039641] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:33.565 [2024-10-14 14:36:14.039653] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:33.565 [2024-10-14 14:36:14.039664] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:22:33.565 [2024-10-14 14:36:14.039970] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:22:33.565 [2024-10-14 14:36:14.040411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:33.565 [2024-10-14 14:36:14.040424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb842c0 with addr=10.0.0.2, port=4420 00:22:33.565 [2024-10-14 14:36:14.040432] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb842c0 is same with the state(6) to be set 00:22:33.565 [2024-10-14 14:36:14.040750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:33.565 [2024-10-14 14:36:14.040761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaa5610 with addr=10.0.0.2, port=4420 00:22:33.565 [2024-10-14 14:36:14.040768] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa5610 is same with the state(6) to be set 00:22:33.565 [2024-10-14 14:36:14.040778] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb30e0 (9): Bad file descriptor 00:22:33.565 [2024-10-14 14:36:14.040788] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfeca50 (9): Bad file descriptor 00:22:33.565 [2024-10-14 14:36:14.040797] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfe76e0 (9): Bad file descriptor 00:22:33.565 [2024-10-14 14:36:14.040805] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:33.565 [2024-10-14 14:36:14.040812] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:33.565 [2024-10-14 14:36:14.040821] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:22:33.565 [2024-10-14 14:36:14.040833] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:33.565 [2024-10-14 14:36:14.040839] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:22:33.565 [2024-10-14 14:36:14.040846] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:22:33.565 [2024-10-14 14:36:14.040856] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:22:33.565 [2024-10-14 14:36:14.040863] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:22:33.565 [2024-10-14 14:36:14.040870] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:22:33.565 [2024-10-14 14:36:14.040880] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:22:33.565 [2024-10-14 14:36:14.040887] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:22:33.565 [2024-10-14 14:36:14.040893] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:22:33.565 [2024-10-14 14:36:14.040972] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:33.565 [2024-10-14 14:36:14.040980] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:33.565 [2024-10-14 14:36:14.040986] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:33.565 [2024-10-14 14:36:14.040993] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:33.565 [2024-10-14 14:36:14.041181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:33.565 [2024-10-14 14:36:14.041192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfb8fb0 with addr=10.0.0.2, port=4420 00:22:33.565 [2024-10-14 14:36:14.041201] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb8fb0 is same with the state(6) to be set 00:22:33.565 [2024-10-14 14:36:14.041210] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb842c0 (9): Bad file descriptor 00:22:33.565 [2024-10-14 14:36:14.041220] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa5610 (9): Bad file descriptor 00:22:33.565 [2024-10-14 14:36:14.041228] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:22:33.565 [2024-10-14 14:36:14.041237] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:22:33.565 [2024-10-14 14:36:14.041244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:22:33.565 [2024-10-14 14:36:14.041255] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:22:33.565 [2024-10-14 14:36:14.041261] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:22:33.565 [2024-10-14 14:36:14.041268] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 
00:22:33.565 [2024-10-14 14:36:14.041278] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:22:33.565 [2024-10-14 14:36:14.041284] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:22:33.565 [2024-10-14 14:36:14.041290] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:22:33.565 [2024-10-14 14:36:14.041318] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:33.565 [2024-10-14 14:36:14.041327] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:33.565 [2024-10-14 14:36:14.041333] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:33.565 [2024-10-14 14:36:14.041341] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb8fb0 (9): Bad file descriptor 00:22:33.565 [2024-10-14 14:36:14.041349] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:22:33.565 [2024-10-14 14:36:14.041355] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:22:33.565 [2024-10-14 14:36:14.041362] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:22:33.565 [2024-10-14 14:36:14.041372] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:22:33.565 [2024-10-14 14:36:14.041378] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:22:33.565 [2024-10-14 14:36:14.041385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 
00:22:33.565 [2024-10-14 14:36:14.041413] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:33.565 [2024-10-14 14:36:14.041421] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:33.565 [2024-10-14 14:36:14.041427] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:22:33.566 [2024-10-14 14:36:14.041434] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:22:33.566 [2024-10-14 14:36:14.041441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:22:33.566 [2024-10-14 14:36:14.041467] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:33.566 14:36:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:22:34.507 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 3459917 00:22:34.507 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@650 -- # local es=0 00:22:34.507 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3459917 00:22:34.507 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@638 -- # local arg=wait 00:22:34.507 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:34.507 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # type -t wait 00:22:34.507 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:34.507 14:36:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # wait 3459917 00:22:34.507 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # es=255 00:22:34.507 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:34.507 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@662 -- # es=127 00:22:34.507 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # case "$es" in 00:22:34.507 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@670 -- # es=1 00:22:34.507 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:34.507 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:22:34.507 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:34.507 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:34.507 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:34.507 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:34.507 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:34.507 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:22:34.507 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:34.507 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:22:34.507 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:34.507 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:34.507 rmmod nvme_tcp 00:22:34.767 rmmod nvme_fabrics 00:22:34.767 rmmod nvme_keyring 00:22:34.767 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:34.767 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:22:34.767 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:22:34.767 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@515 -- # '[' -n 3459422 ']' 00:22:34.767 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # killprocess 3459422 00:22:34.767 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 3459422 ']' 00:22:34.767 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 3459422 00:22:34.767 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3459422) - No such process 00:22:34.767 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@977 -- # echo 'Process with pid 3459422 is not found' 00:22:34.767 Process with pid 3459422 is not found 00:22:34.767 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:34.767 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@521 -- # [[ tcp 
== \t\c\p ]] 00:22:34.767 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:34.767 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:22:34.767 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # iptables-save 00:22:34.767 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:34.767 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # iptables-restore 00:22:34.767 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:34.767 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:34.767 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:34.767 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:34.767 14:36:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:36.676 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:36.676 00:22:36.676 real 0m7.861s 00:22:36.676 user 0m19.434s 00:22:36.676 sys 0m1.278s 00:22:36.676 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:36.676 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:36.676 ************************************ 00:22:36.676 END TEST nvmf_shutdown_tc3 00:22:36.676 ************************************ 00:22:36.938 14:36:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:22:36.938 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:22:36.938 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:22:36.938 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:36.938 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:36.938 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:36.938 ************************************ 00:22:36.938 START TEST nvmf_shutdown_tc4 00:22:36.938 ************************************ 00:22:36.938 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc4 00:22:36.938 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:22:36.938 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:36.938 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:36.938 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:36.938 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:36.938 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:36.938 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:36.938 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:22:36.938 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:36.938 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:36.938 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:36.938 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:36.938 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:36.938 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:36.938 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:36.938 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:36.938 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:36.938 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:36.938 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:36.938 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:36.938 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:36.938 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:22:36.938 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:36.938 14:36:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:22:36.938 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:22:36.938 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:22:36.938 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:22:36.938 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:22:36.938 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:36.938 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:36.938 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:36.938 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:36.938 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:36.938 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:36.938 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:36.938 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:36.938 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:36.938 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:36.938 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:36.938 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:36.938 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:36.938 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:36.938 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:36.938 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:36.938 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:36.938 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:36.938 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:36.938 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:36.938 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:36.938 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:36.938 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:36.938 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:36.938 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:22:36.938 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:36.938 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:36.938 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:36.938 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:36.938 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:36.938 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:36.938 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:36.938 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:36.938 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:36.938 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:36.938 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:36.938 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:36.938 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:36.938 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:36.938 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:36.939 14:36:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:36.939 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:36.939 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:36.939 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:36.939 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:36.939 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:36.939 Found net devices under 0000:31:00.0: cvl_0_0 00:22:36.939 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:36.939 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:36.939 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:36.939 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:36.939 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:36.939 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:36.939 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:36.939 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:36.939 14:36:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:36.939 Found net devices under 0000:31:00.1: cvl_0_1 00:22:36.939 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:36.939 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:36.939 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # is_hw=yes 00:22:36.939 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:36.939 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:36.939 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:36.939 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:36.939 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:36.939 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:36.939 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:36.939 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:36.939 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:36.939 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:36.939 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:36.939 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:36.939 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:36.939 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:36.939 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:36.939 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:36.939 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:36.939 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:36.939 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:36.939 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:36.939 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:36.939 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:37.200 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:37.200 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:37.200 
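The trace above (nvmf_tcp_init in nvmf/common.sh) sets up the target/initiator split: the target-side interface is moved into a private network namespace and each side gets an address on the same /24. A minimal sketch of that sequence, using the interface and namespace names from the log; the function only prints the commands so the sketch is runnable without root.

```shell
# Sketch of the namespace plumbing performed by nvmf_tcp_init above.
# Names/addresses mirror the log (cvl_0_0, cvl_0_1, 10.0.0.0/24); the
# commands are echoed rather than executed so no privileges are needed.
TGT_IF=cvl_0_0        # target-side interface, moved into the namespace
INI_IF=cvl_0_1        # initiator-side interface, stays in the root ns
NS=cvl_0_0_ns_spdk

emit_netns_setup() {
    echo "ip -4 addr flush $TGT_IF"
    echo "ip -4 addr flush $INI_IF"
    echo "ip netns add $NS"
    echo "ip link set $TGT_IF netns $NS"
    echo "ip addr add 10.0.0.1/24 dev $INI_IF"
    echo "ip netns exec $NS ip addr add 10.0.0.2/24 dev $TGT_IF"
    echo "ip link set $INI_IF up"
    echo "ip netns exec $NS ip link set $TGT_IF up"
    echo "ip netns exec $NS ip link set lo up"
}

emit_netns_setup        # pipe into "sh" (as root) to actually apply it
```

With this layout, a plain `ping 10.0.0.2` from the root namespace and `ip netns exec ... ping 10.0.0.1` from inside it verify both directions, which is exactly what the @290/@291 lines below do.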
14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:37.200 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:37.200 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:37.200 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.667 ms 00:22:37.200 00:22:37.200 --- 10.0.0.2 ping statistics --- 00:22:37.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:37.200 rtt min/avg/max/mdev = 0.667/0.667/0.667/0.000 ms 00:22:37.200 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:37.200 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:37.200 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:22:37.200 00:22:37.200 --- 10.0.0.1 ping statistics --- 00:22:37.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:37.200 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:22:37.200 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:37.200 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@448 -- # return 0 00:22:37.200 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:37.200 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:37.200 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:37.200 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:37.200 14:36:17 
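The `ipts` call at @287 expands (at @788) into an iptables invocation that repeats the caller's arguments inside an `SPDK_NVMF:`-prefixed comment, so test cleanup can later find and delete exactly the rules the harness installed. A sketch of such a wrapper, with `echo` standing in for the real iptables binary so it runs unprivileged (the actual helper lives in nvmf/common.sh and is not fully shown in the trace):

```shell
# Sketch of an "ipts"-style wrapper: re-issue the caller's iptables
# arguments with a matching "-m comment" tag. "echo" replaces the real
# iptables binary so the sketch needs no root and changes no firewall.
ipts_sketch() {
    echo iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

ipts_sketch -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
```

The comment tag makes teardown a matter of listing rules, grepping for `SPDK_NVMF:`, and replaying each match with `-D` instead of `-I`.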
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:37.200 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:37.200 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:37.200 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:37.200 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:37.200 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:37.200 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:37.200 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # nvmfpid=3461181 00:22:37.200 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # waitforlisten 3461181 00:22:37.200 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@831 -- # '[' -z 3461181 ']' 00:22:37.201 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:37.201 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:37.201 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:37.201 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:37.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:37.201 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:37.201 14:36:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:37.201 [2024-10-14 14:36:17.886803] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:22:37.201 [2024-10-14 14:36:17.886865] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:37.461 [2024-10-14 14:36:17.976026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:37.461 [2024-10-14 14:36:18.011625] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:37.461 [2024-10-14 14:36:18.011658] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:37.461 [2024-10-14 14:36:18.011664] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:37.461 [2024-10-14 14:36:18.011669] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:37.461 [2024-10-14 14:36:18.011673] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
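`waitforlisten` (common/autotest_common.sh) blocks until the freshly launched nvmf_tgt is listening on its RPC socket (`/var/tmp/spdk.sock` here), retrying up to `max_retries` times. A simplified sketch of just the polling skeleton; the real helper also verifies the pid is still alive and that the socket actually answers RPCs, which this sketch omits.

```shell
# Simplified waitforlisten-style poll: wait until a path appears,
# giving up after max_retries attempts. Covers only the retry loop;
# liveness and RPC checks from the real helper are omitted.
wait_for_path() {
    path=$1
    max_retries=${2:-100}
    i=0
    while [ ! -e "$path" ]; do
        i=$((i + 1))
        [ "$i" -ge "$max_retries" ] && return 1
        sleep 0.01
    done
    return 0
}

wait_for_path /tmp 10 && echo "/tmp is present"
```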
00:22:37.461 [2024-10-14 14:36:18.013026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:37.461 [2024-10-14 14:36:18.013190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:37.461 [2024-10-14 14:36:18.013461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:37.461 [2024-10-14 14:36:18.013461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:38.031 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:38.031 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # return 0 00:22:38.031 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:38.031 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:38.031 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:38.031 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:38.031 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:38.031 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.031 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:38.031 [2024-10-14 14:36:18.740139] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:38.031 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.031 14:36:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:38.031 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:38.031 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:38.031 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:38.031 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:38.031 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:38.031 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:38.292 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:38.292 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:38.292 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:38.292 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:38.292 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:38.292 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:38.292 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:38.292 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:22:38.292 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:38.292 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:38.292 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:38.292 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:38.292 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:38.292 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:38.292 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:38.292 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:38.292 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:38.292 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:38.292 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:38.292 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.292 14:36:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:38.292 Malloc1 00:22:38.292 [2024-10-14 14:36:18.849805] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:38.292 Malloc2 00:22:38.292 Malloc3 00:22:38.292 Malloc4 00:22:38.292 Malloc5 00:22:38.292 Malloc6 00:22:38.553 Malloc7 00:22:38.553 Malloc8 00:22:38.553 Malloc9 
00:22:38.553 Malloc10 00:22:38.553 14:36:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.553 14:36:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:38.553 14:36:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:38.553 14:36:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:38.553 14:36:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=3461445 00:22:38.553 14:36:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:22:38.553 14:36:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:22:38.813 [2024-10-14 14:36:19.307177] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
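The @27-@29 lines earlier remove rpcs.txt and then, once per subsystem id in 1..10, `cat` a config fragment into it; the Malloc1..Malloc10 bdevs listed above are created when that file is replayed. The fragment body itself is a heredoc that xtrace does not show, so the sketch below uses a placeholder line per subsystem to illustrate only the accumulation pattern.

```shell
# Sketch of the shutdown.sh per-subsystem config loop: one fragment is
# appended to rpcs.txt for each subsystem id. The fragment content is a
# placeholder -- the real heredoc (bdev/subsystem RPCs) is not visible
# in the trace, only the "cat" calls that write it.
rpcs=$(mktemp)    # stand-in for .../test/nvmf/target/rpcs.txt
for i in $(seq 1 10); do
    printf '# fragment for cnode%s (placeholder)\n' "$i" >> "$rpcs"
done
```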
00:22:44.109 14:36:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:44.109 14:36:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 3461181 00:22:44.109 14:36:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 3461181 ']' 00:22:44.109 14:36:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 3461181 00:22:44.109 14:36:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # uname 00:22:44.109 14:36:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:44.109 14:36:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3461181 00:22:44.109 14:36:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:44.109 14:36:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:44.109 14:36:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3461181' 00:22:44.109 killing process with pid 3461181 00:22:44.109 14:36:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@969 -- # kill 3461181 00:22:44.109 14:36:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@974 -- # wait 3461181 00:22:44.109 [2024-10-14 14:36:24.333344] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2305f60 is same with the state(6) to be set 00:22:44.109 [2024-10-14 
14:36:24.333397] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2305f60 is same with the state(6) to be set 00:22:44.109 [2024-10-14 14:36:24.333408] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2305f60 is same with the state(6) to be set 00:22:44.109 [2024-10-14 14:36:24.333416] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2305f60 is same with the state(6) to be set 00:22:44.109 [2024-10-14 14:36:24.333424] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2305f60 is same with the state(6) to be set 00:22:44.109 [2024-10-14 14:36:24.333432] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2305f60 is same with the state(6) to be set 00:22:44.109 [2024-10-14 14:36:24.333437] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2305f60 is same with the state(6) to be set 00:22:44.109 [2024-10-14 14:36:24.333442] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2305f60 is same with the state(6) to be set 00:22:44.109 [2024-10-14 14:36:24.333755] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2306430 is same with the state(6) to be set 00:22:44.109 [2024-10-14 14:36:24.333781] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2306430 is same with the state(6) to be set 00:22:44.109 [2024-10-14 14:36:24.333787] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2306430 is same with the state(6) to be set 00:22:44.109 [2024-10-14 14:36:24.333793] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2306430 is same with the state(6) to be set 00:22:44.109 [2024-10-14 14:36:24.333797] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2306430 is same with the state(6) to be set 00:22:44.109 [2024-10-14 14:36:24.333803] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2306430 is same with the state(6) to be set 00:22:44.109 [2024-10-14 14:36:24.333808] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2306430 is same with the state(6) to be set 00:22:44.109 [2024-10-14 14:36:24.333813] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2306430 is same with the state(6) to be set 00:22:44.109 [2024-10-14 14:36:24.334166] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2306900 is same with the state(6) to be set 00:22:44.109 [2024-10-14 14:36:24.334189] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2306900 is same with the state(6) to be set 00:22:44.109 [2024-10-14 14:36:24.334195] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2306900 is same with the state(6) to be set 00:22:44.109 [2024-10-14 14:36:24.334201] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2306900 is same with the state(6) to be set 00:22:44.109 [2024-10-14 14:36:24.334206] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2306900 is same with the state(6) to be set 00:22:44.109 [2024-10-14 14:36:24.334211] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2306900 is same with the state(6) to be set 00:22:44.109 [2024-10-14 14:36:24.334217] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2306900 is same with the state(6) to be set 00:22:44.109 [2024-10-14 14:36:24.334222] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2306900 is same with the state(6) to be set 00:22:44.109 [2024-10-14 14:36:24.335807] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23055c0 is same with the state(6) to be set 00:22:44.109 [2024-10-14 14:36:24.335825] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23055c0 is same with the state(6) to be set 00:22:44.109 [2024-10-14 14:36:24.335831] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23055c0 is same with the state(6) to be set 00:22:44.109 [2024-10-14 14:36:24.335836] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23055c0 is same with the state(6) to be set 00:22:44.109 [2024-10-14 14:36:24.335841] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23055c0 is same with the state(6) to be set 00:22:44.109 [2024-10-14 14:36:24.335849] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23055c0 is same with the state(6) to be set 00:22:44.109 [2024-10-14 14:36:24.341452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c4cf0 is same with the state(6) to be set 00:22:44.109 [2024-10-14 14:36:24.341471] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c4cf0 is same with the state(6) to be set 00:22:44.109 [2024-10-14 14:36:24.341477] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c4cf0 is same with the state(6) to be set 00:22:44.109 [2024-10-14 14:36:24.341482] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c4cf0 is same with the state(6) to be set 00:22:44.109 [2024-10-14 14:36:24.341487] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c4cf0 is same with the state(6) to be set 00:22:44.110 [2024-10-14 14:36:24.341734] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23324d0 is same with the state(6) to be set 00:22:44.110 [2024-10-14 14:36:24.341752] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23324d0 is same with the state(6) to be set 00:22:44.110 [2024-10-14 14:36:24.341758] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23324d0 is same with the state(6) to be set 00:22:44.110 [2024-10-14 14:36:24.341763] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23324d0 is same with the state(6) to be set 00:22:44.110 [2024-10-14 14:36:24.341769] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23324d0 is same with the state(6) to be set 00:22:44.110 [2024-10-14 14:36:24.341774] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23324d0 is same with the state(6) to be set 00:22:44.110 [2024-10-14 14:36:24.341779] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23324d0 is same with the state(6) to be set 00:22:44.110 [2024-10-14 14:36:24.341784] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23324d0 is same with the state(6) to be set 00:22:44.110 [2024-10-14 14:36:24.341789] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23324d0 is same with the state(6) to be set 00:22:44.110 [2024-10-14 14:36:24.342082] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23329a0 is same with the state(6) to be set 00:22:44.110 [2024-10-14 14:36:24.342100] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23329a0 is same with the state(6) to be set 00:22:44.110 [2024-10-14 14:36:24.342106] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23329a0 is same with the state(6) to be set 00:22:44.110 [2024-10-14 14:36:24.342111] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23329a0 is same with the state(6) to be set 00:22:44.110 [2024-10-14 14:36:24.342116] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23329a0 is same with the state(6) to be set 00:22:44.110 [2024-10-14 14:36:24.342121] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23329a0 is same with the state(6) to be set 00:22:44.110 [2024-10-14 14:36:24.342830] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2333360 is same with the state(6) to be set 00:22:44.110 [2024-10-14 14:36:24.342856] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2333360 is same with the state(6) to be set 00:22:44.110 [2024-10-14 14:36:24.342861] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2333360 is same with the state(6) to be set 00:22:44.110 [2024-10-14 14:36:24.342866] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2333360 is same with the state(6) to be set 00:22:44.110 [2024-10-14 14:36:24.343166] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2333850 is same with the state(6) to be set 00:22:44.110 [2024-10-14 14:36:24.343181] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2333850 is same with the state(6) to be set 00:22:44.110 [2024-10-14 14:36:24.343187] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2333850 is same with the state(6) to be set 00:22:44.110 [2024-10-14 14:36:24.343191] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2333850 is same with the state(6) to be set 00:22:44.110 [2024-10-14 14:36:24.343563] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2333d40 is same with the state(6) to be set 00:22:44.110 [2024-10-14 14:36:24.343578] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2333d40 is same with the state(6) to be set 00:22:44.110 [2024-10-14 14:36:24.343583] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2333d40 is same with the state(6) to be set 00:22:44.110 [2024-10-14 14:36:24.343588] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2333d40 is same with the state(6) to be set 00:22:44.110 [2024-10-14 14:36:24.343838] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2332e90 is same with the state(6) to be set 00:22:44.110 [2024-10-14 14:36:24.343860] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2332e90 is same with the state(6) to be set 00:22:44.110 [2024-10-14 14:36:24.343866] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2332e90 is same with the state(6) to be set 00:22:44.110 [2024-10-14 14:36:24.344116] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23346e0 is same with the state(6) to be set 00:22:44.110 [2024-10-14 14:36:24.344131] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23346e0 is same with the state(6) to be set 00:22:44.110 [2024-10-14 14:36:24.344136] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23346e0 is same with the state(6) to be set 00:22:44.110 [2024-10-14 14:36:24.344141] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23346e0 is same with the state(6) to be set 00:22:44.110 [2024-10-14 14:36:24.344145] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23346e0 is same with the state(6) to be set 00:22:44.110 [2024-10-14 14:36:24.344350] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22819b0 is same with the state(6) to be set 00:22:44.110 [2024-10-14 14:36:24.344361] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22819b0 is same with the state(6) to be set 00:22:44.110 [2024-10-14 14:36:24.344366] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22819b0 is same with the state(6) to be set 00:22:44.110 [2024-10-14 14:36:24.344633] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2281d30 is same with the state(6) to be set
00:22:44.110 [2024-10-14 14:36:24.344647] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2281d30 is same with the state(6) to be set
00:22:44.110 Write completed with error (sct=0, sc=8)
00:22:44.110 starting I/O failed: -6
00:22:44.110 [2024-10-14 14:36:24.344871] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334210 is same with the state(6) to be set
00:22:44.110 Write completed with error (sct=0, sc=8)
00:22:44.110 starting I/O failed: -6
00:22:44.110 [2024-10-14 14:36:24.345134] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231ca50 is same with the state(6) to be set
00:22:44.110 Write completed with error (sct=0, sc=8)
00:22:44.110 [2024-10-14 14:36:24.345243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:22:44.110 [2024-10-14 14:36:24.345335] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231cf20 is same with the state(6) to be set
00:22:44.110 starting I/O failed: -6
00:22:44.110 [2024-10-14 14:36:24.345546] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231d3f0 is same with the state(6) to be set
00:22:44.110 Write completed with error (sct=0, sc=8)
00:22:44.110 starting I/O failed: -6
00:22:44.111 [2024-10-14 14:36:24.345832] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2282200 is same with the state(6) to be set
00:22:44.111 Write completed with error (sct=0, sc=8)
00:22:44.111 starting I/O failed: -6
00:22:44.111 [2024-10-14 14:36:24.346254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:22:44.111 Write completed with error (sct=0, sc=8)
00:22:44.111 starting I/O failed: -6
00:22:44.111 [2024-10-14 14:36:24.347199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:22:44.111 Write completed with error (sct=0, sc=8)
00:22:44.111 starting I/O failed: -6
00:22:44.111 starting I/O failed: -6
00:22:44.111 Write completed with error (sct=0, sc=8)
00:22:44.112 starting I/O failed: -6
00:22:44.112 Write completed with error (sct=0, sc=8)
00:22:44.112 [2024-10-14 14:36:24.348696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:44.112 NVMe io qpair process completion error
00:22:44.112 Write completed with error (sct=0, sc=8)
00:22:44.112 starting I/O failed: -6
00:22:44.112 [2024-10-14 14:36:24.349859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:22:44.112 Write completed with error (sct=0, sc=8)
00:22:44.112 starting I/O failed: -6
00:22:44.112 [2024-10-14 14:36:24.350701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:22:44.112 starting I/O failed: -6
00:22:44.112 Write completed with error (sct=0, sc=8)
00:22:44.113 [2024-10-14 14:36:24.351878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:22:44.113 Write completed with error (sct=0, sc=8)
00:22:44.113 starting I/O failed: -6
00:22:44.113 [2024-10-14 14:36:24.354381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:44.113 NVMe io qpair process completion error
00:22:44.113 Write completed with error (sct=0, sc=8)
00:22:44.113 starting I/O failed: -6
00:22:44.113 [2024-10-14 14:36:24.355551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:22:44.113 Write completed with error (sct=0, sc=8)
00:22:44.114 starting I/O failed: -6
00:22:44.114 [2024-10-14 14:36:24.356405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:22:44.114 Write completed with error (sct=0, sc=8)
00:22:44.114 starting I/O
failed: -6 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 Write 
completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 [2024-10-14 14:36:24.357348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 Write completed 
with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 Write 
completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 
Write completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.114 Write completed with error (sct=0, sc=8) 00:22:44.114 starting I/O failed: -6 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 starting I/O failed: -6 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 starting I/O failed: -6 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 starting I/O failed: -6 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 starting I/O failed: -6 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 starting I/O failed: -6 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 starting I/O failed: -6 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 starting I/O failed: -6 00:22:44.115 [2024-10-14 14:36:24.359405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:44.115 NVMe io qpair process completion error 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 starting I/O failed: -6 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 Write 
completed with error (sct=0, sc=8) 00:22:44.115 starting I/O failed: -6 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 starting I/O failed: -6 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 starting I/O failed: -6 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 starting I/O failed: -6 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 starting I/O failed: -6 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 starting I/O failed: -6 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 starting I/O failed: -6 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 starting I/O failed: -6 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 Write completed with error (sct=0, 
sc=8) 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 starting I/O failed: -6 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 starting I/O failed: -6 00:22:44.115 [2024-10-14 14:36:24.360531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 starting I/O failed: -6 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 starting I/O failed: -6 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 starting I/O failed: -6 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 starting I/O failed: -6 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 starting I/O failed: -6 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 starting I/O failed: -6 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 starting I/O failed: -6 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 starting I/O failed: -6 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 starting I/O failed: -6 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 starting I/O failed: -6 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 Write completed 
with error (sct=0, sc=8) 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 starting I/O failed: -6 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 starting I/O failed: -6 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 starting I/O failed: -6 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 starting I/O failed: -6 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 starting I/O failed: -6 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 starting I/O failed: -6 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 starting I/O failed: -6 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 starting I/O failed: -6 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 starting I/O failed: -6 00:22:44.115 [2024-10-14 14:36:24.361359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:44.115 starting I/O failed: -6 00:22:44.115 starting I/O failed: -6 00:22:44.115 starting I/O failed: -6 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 starting I/O failed: -6 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 starting I/O failed: -6 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 starting I/O failed: -6 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 Write completed with error (sct=0, sc=8) 
00:22:44.115 starting I/O failed: -6 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 starting I/O failed: -6 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 starting I/O failed: -6 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 starting I/O failed: -6 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 starting I/O failed: -6 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 starting I/O failed: -6 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 starting I/O failed: -6 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 starting I/O failed: -6 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 starting I/O failed: -6 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 starting I/O failed: -6 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 starting I/O failed: -6 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 starting I/O failed: -6 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 starting I/O failed: -6 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 starting I/O failed: -6 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 starting I/O failed: -6 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 starting I/O failed: -6 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 starting I/O failed: -6 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 starting I/O failed: -6 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 starting I/O failed: -6 
00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 starting I/O failed: -6 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 starting I/O failed: -6 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 starting I/O failed: -6 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 starting I/O failed: -6 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 starting I/O failed: -6 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 starting I/O failed: -6 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 starting I/O failed: -6 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 starting I/O failed: -6 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 starting I/O failed: -6 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 starting I/O failed: -6 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 starting I/O failed: -6 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 [2024-10-14 14:36:24.362528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 starting I/O failed: -6 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 starting I/O failed: -6 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 starting I/O failed: -6 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 starting I/O failed: -6 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 starting I/O failed: -6 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.115 starting I/O failed: -6 00:22:44.115 Write completed with error (sct=0, sc=8) 00:22:44.116 
starting I/O failed: -6 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 starting I/O failed: -6 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 starting I/O failed: -6 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 starting I/O failed: -6 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 starting I/O failed: -6 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 starting I/O failed: -6 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 starting I/O failed: -6 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 starting I/O failed: -6 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 starting I/O failed: -6 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 starting I/O failed: -6 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 starting I/O failed: -6 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 starting I/O failed: -6 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 starting I/O failed: -6 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 starting I/O failed: -6 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 starting I/O failed: -6 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 starting I/O failed: -6 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 starting I/O failed: -6 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 starting I/O failed: -6 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 starting I/O failed: -6 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 starting I/O failed: -6 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 starting I/O failed: -6 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 starting I/O failed: -6 00:22:44.116 Write completed with error (sct=0, sc=8) 
00:22:44.116 starting I/O failed: -6 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 starting I/O failed: -6 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 starting I/O failed: -6 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 starting I/O failed: -6 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 starting I/O failed: -6 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 starting I/O failed: -6 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 starting I/O failed: -6 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 starting I/O failed: -6 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 starting I/O failed: -6 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 starting I/O failed: -6 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 starting I/O failed: -6 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 starting I/O failed: -6 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 starting I/O failed: -6 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 starting I/O failed: -6 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 starting I/O failed: -6 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 starting I/O failed: -6 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 starting I/O failed: -6 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 starting I/O failed: -6 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 starting I/O failed: -6 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 starting I/O failed: -6 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 starting I/O failed: -6 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 starting I/O failed: -6 00:22:44.116 Write completed with error (sct=0, 
sc=8) 00:22:44.116 starting I/O failed: -6 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 starting I/O failed: -6 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 starting I/O failed: -6 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 starting I/O failed: -6 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 starting I/O failed: -6 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 starting I/O failed: -6 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 starting I/O failed: -6 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 starting I/O failed: -6 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 starting I/O failed: -6 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 starting I/O failed: -6 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 starting I/O failed: -6 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 starting I/O failed: -6 00:22:44.116 [2024-10-14 14:36:24.364212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:44.116 NVMe io qpair process completion error 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 starting I/O failed: -6 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 starting I/O failed: -6 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 Write completed with error (sct=0, sc=8) 
00:22:44.116 starting I/O failed: -6 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 starting I/O failed: -6 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 starting I/O failed: -6 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 starting I/O failed: -6 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 starting I/O failed: -6 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 starting I/O failed: -6 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 starting I/O failed: -6 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 Write completed with error (sct=0, sc=8) 00:22:44.116 starting I/O failed: -6 00:22:44.116 [2024-10-14 14:36:24.365295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:44.116 Write 
completed with error (sct=0, sc=8) 00:22:44.116 starting I/O failed: -6
00:22:44.116 [... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines trimmed ...]
00:22:44.116 [2024-10-14 14:36:24.366116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:22:44.117 [... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines trimmed ...]
00:22:44.117 [2024-10-14 14:36:24.367032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:44.117 [... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines trimmed ...]
00:22:44.117 [2024-10-14 14:36:24.369649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:22:44.117 NVMe io qpair process completion error
00:22:44.117 [... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines trimmed ...]
00:22:44.118 [2024-10-14 14:36:24.370650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:22:44.118 [... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines trimmed ...]
00:22:44.118 [2024-10-14 14:36:24.371500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:22:44.118 [... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines trimmed ...]
00:22:44.118 [2024-10-14 14:36:24.372456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:44.119 [... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines trimmed ...]
00:22:44.119 [2024-10-14 14:36:24.374733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:22:44.119 NVMe io qpair process completion error
00:22:44.119 [... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines trimmed ...]
00:22:44.119 [2024-10-14 14:36:24.375771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:22:44.119 [... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines trimmed ...]
00:22:44.119 [2024-10-14 14:36:24.376791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:44.120 [... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines trimmed ...]
00:22:44.120 [2024-10-14 14:36:24.377741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:22:44.120 [... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines trimmed ...]
(sct=0, sc=8) 00:22:44.120 starting I/O failed: -6 00:22:44.120 Write completed with error (sct=0, sc=8) 00:22:44.120 starting I/O failed: -6 00:22:44.120 Write completed with error (sct=0, sc=8) 00:22:44.120 starting I/O failed: -6 00:22:44.120 Write completed with error (sct=0, sc=8) 00:22:44.120 starting I/O failed: -6 00:22:44.120 Write completed with error (sct=0, sc=8) 00:22:44.120 starting I/O failed: -6 00:22:44.120 Write completed with error (sct=0, sc=8) 00:22:44.120 starting I/O failed: -6 00:22:44.120 Write completed with error (sct=0, sc=8) 00:22:44.120 starting I/O failed: -6 00:22:44.120 Write completed with error (sct=0, sc=8) 00:22:44.120 starting I/O failed: -6 00:22:44.120 Write completed with error (sct=0, sc=8) 00:22:44.120 starting I/O failed: -6 00:22:44.120 Write completed with error (sct=0, sc=8) 00:22:44.120 starting I/O failed: -6 00:22:44.120 Write completed with error (sct=0, sc=8) 00:22:44.120 starting I/O failed: -6 00:22:44.120 Write completed with error (sct=0, sc=8) 00:22:44.120 starting I/O failed: -6 00:22:44.120 Write completed with error (sct=0, sc=8) 00:22:44.120 starting I/O failed: -6 00:22:44.120 Write completed with error (sct=0, sc=8) 00:22:44.120 starting I/O failed: -6 00:22:44.120 Write completed with error (sct=0, sc=8) 00:22:44.120 starting I/O failed: -6 00:22:44.120 Write completed with error (sct=0, sc=8) 00:22:44.120 starting I/O failed: -6 00:22:44.120 Write completed with error (sct=0, sc=8) 00:22:44.120 starting I/O failed: -6 00:22:44.120 Write completed with error (sct=0, sc=8) 00:22:44.120 starting I/O failed: -6 00:22:44.120 Write completed with error (sct=0, sc=8) 00:22:44.120 starting I/O failed: -6 00:22:44.120 Write completed with error (sct=0, sc=8) 00:22:44.120 starting I/O failed: -6 00:22:44.120 Write completed with error (sct=0, sc=8) 00:22:44.120 starting I/O failed: -6 00:22:44.120 Write completed with error (sct=0, sc=8) 00:22:44.120 starting I/O failed: -6 00:22:44.120 Write completed with 
error (sct=0, sc=8) 00:22:44.120 starting I/O failed: -6 00:22:44.120 Write completed with error (sct=0, sc=8) 00:22:44.120 starting I/O failed: -6 00:22:44.120 Write completed with error (sct=0, sc=8) 00:22:44.120 starting I/O failed: -6 00:22:44.120 Write completed with error (sct=0, sc=8) 00:22:44.120 starting I/O failed: -6 00:22:44.120 Write completed with error (sct=0, sc=8) 00:22:44.120 starting I/O failed: -6 00:22:44.120 Write completed with error (sct=0, sc=8) 00:22:44.120 starting I/O failed: -6 00:22:44.120 Write completed with error (sct=0, sc=8) 00:22:44.120 starting I/O failed: -6 00:22:44.120 Write completed with error (sct=0, sc=8) 00:22:44.120 starting I/O failed: -6 00:22:44.120 Write completed with error (sct=0, sc=8) 00:22:44.120 starting I/O failed: -6 00:22:44.120 Write completed with error (sct=0, sc=8) 00:22:44.120 starting I/O failed: -6 00:22:44.120 Write completed with error (sct=0, sc=8) 00:22:44.120 starting I/O failed: -6 00:22:44.120 Write completed with error (sct=0, sc=8) 00:22:44.120 starting I/O failed: -6 00:22:44.120 Write completed with error (sct=0, sc=8) 00:22:44.120 starting I/O failed: -6 00:22:44.120 Write completed with error (sct=0, sc=8) 00:22:44.120 starting I/O failed: -6 00:22:44.120 Write completed with error (sct=0, sc=8) 00:22:44.120 starting I/O failed: -6 00:22:44.120 Write completed with error (sct=0, sc=8) 00:22:44.120 starting I/O failed: -6 00:22:44.120 Write completed with error (sct=0, sc=8) 00:22:44.120 starting I/O failed: -6 00:22:44.120 [2024-10-14 14:36:24.379711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:44.120 NVMe io qpair process completion error 00:22:44.120 Write completed with error (sct=0, sc=8) 00:22:44.120 Write completed with error (sct=0, sc=8) 00:22:44.120 starting I/O failed: -6 00:22:44.120 Write completed with error (sct=0, sc=8) 00:22:44.120 Write completed with error (sct=0, sc=8) 
00:22:44.120 Write completed with error (sct=0, sc=8) 00:22:44.120 Write completed with error (sct=0, sc=8) 00:22:44.120 starting I/O failed: -6 00:22:44.120 Write completed with error (sct=0, sc=8) 00:22:44.120 Write completed with error (sct=0, sc=8) 00:22:44.120 Write completed with error (sct=0, sc=8) 00:22:44.120 Write completed with error (sct=0, sc=8) 00:22:44.120 starting I/O failed: -6 00:22:44.120 Write completed with error (sct=0, sc=8) 00:22:44.120 Write completed with error (sct=0, sc=8) 00:22:44.120 Write completed with error (sct=0, sc=8) 00:22:44.120 Write completed with error (sct=0, sc=8) 00:22:44.120 starting I/O failed: -6 00:22:44.120 Write completed with error (sct=0, sc=8) 00:22:44.120 Write completed with error (sct=0, sc=8) 00:22:44.120 Write completed with error (sct=0, sc=8) 00:22:44.120 Write completed with error (sct=0, sc=8) 00:22:44.120 starting I/O failed: -6 00:22:44.120 Write completed with error (sct=0, sc=8) 00:22:44.120 Write completed with error (sct=0, sc=8) 00:22:44.120 Write completed with error (sct=0, sc=8) 00:22:44.120 Write completed with error (sct=0, sc=8) 00:22:44.120 starting I/O failed: -6 00:22:44.120 Write completed with error (sct=0, sc=8) 00:22:44.120 Write completed with error (sct=0, sc=8) 00:22:44.120 Write completed with error (sct=0, sc=8) 00:22:44.120 Write completed with error (sct=0, sc=8) 00:22:44.120 starting I/O failed: -6 00:22:44.120 Write completed with error (sct=0, sc=8) 00:22:44.120 Write completed with error (sct=0, sc=8) 00:22:44.120 Write completed with error (sct=0, sc=8) 00:22:44.120 Write completed with error (sct=0, sc=8) 00:22:44.120 starting I/O failed: -6 00:22:44.120 Write completed with error (sct=0, sc=8) 00:22:44.120 Write completed with error (sct=0, sc=8) 00:22:44.120 Write completed with error (sct=0, sc=8) 00:22:44.120 Write completed with error (sct=0, sc=8) 00:22:44.120 starting I/O failed: -6 00:22:44.120 Write completed with error (sct=0, sc=8) 00:22:44.120 Write completed 
with error (sct=0, sc=8) 00:22:44.120 Write completed with error (sct=0, sc=8) 00:22:44.120 Write completed with error (sct=0, sc=8) 00:22:44.120 starting I/O failed: -6 00:22:44.120 Write completed with error (sct=0, sc=8) 00:22:44.120 [2024-10-14 14:36:24.380877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:44.120 Write completed with error (sct=0, sc=8) 00:22:44.120 Write completed with error (sct=0, sc=8) 00:22:44.120 starting I/O failed: -6 00:22:44.120 Write completed with error (sct=0, sc=8) 00:22:44.120 Write completed with error (sct=0, sc=8) 00:22:44.120 starting I/O failed: -6 00:22:44.120 Write completed with error (sct=0, sc=8) 00:22:44.120 Write completed with error (sct=0, sc=8) 00:22:44.120 starting I/O failed: -6 00:22:44.120 Write completed with error (sct=0, sc=8) 00:22:44.120 Write completed with error (sct=0, sc=8) 00:22:44.120 starting I/O failed: -6 00:22:44.120 Write completed with error (sct=0, sc=8) 00:22:44.120 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 
00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 [2024-10-14 14:36:24.381822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: 
-6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with 
error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 [2024-10-14 14:36:24.382727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O 
failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting 
I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 
starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.121 Write completed with error (sct=0, sc=8) 00:22:44.121 starting I/O failed: -6 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 starting I/O failed: -6 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 starting I/O failed: -6 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 starting I/O failed: -6 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 starting I/O failed: -6 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 starting I/O failed: -6 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 starting I/O failed: -6 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 starting I/O failed: -6 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 starting I/O failed: -6 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 starting I/O failed: -6 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 starting I/O failed: -6 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 starting I/O failed: -6 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 starting I/O failed: -6 00:22:44.122 [2024-10-14 14:36:24.384160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:44.122 NVMe io qpair process completion error 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 starting I/O failed: -6 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 Write 
completed with error (sct=0, sc=8) 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 starting I/O failed: -6 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 starting I/O failed: -6 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 starting I/O failed: -6 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 starting I/O failed: -6 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 starting I/O failed: -6 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 starting I/O failed: -6 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 starting I/O failed: -6 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 starting I/O failed: -6 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 Write completed with error (sct=0, 
sc=8) 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 starting I/O failed: -6 00:22:44.122 [2024-10-14 14:36:24.385373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 starting I/O failed: -6 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 starting I/O failed: -6 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 starting I/O failed: -6 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 starting I/O failed: -6 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 starting I/O failed: -6 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 starting I/O failed: -6 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 starting I/O failed: -6 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 starting I/O failed: -6 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 starting I/O failed: -6 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 starting I/O failed: -6 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 starting I/O failed: -6 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 Write completed 
with error (sct=0, sc=8) 00:22:44.122 starting I/O failed: -6 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 starting I/O failed: -6 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 starting I/O failed: -6 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 starting I/O failed: -6 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 starting I/O failed: -6 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 starting I/O failed: -6 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 starting I/O failed: -6 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 starting I/O failed: -6 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 starting I/O failed: -6 00:22:44.122 [2024-10-14 14:36:24.386221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 starting I/O failed: -6 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 starting I/O failed: -6 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 starting I/O failed: -6 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 starting I/O failed: -6 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 starting I/O failed: -6 00:22:44.122 Write completed with error (sct=0, sc=8) 00:22:44.122 Write completed with error 
(sct=0, sc=8)
00:22:44.122 starting I/O failed: -6
00:22:44.122 Write completed with error (sct=0, sc=8)
00:22:44.122 starting I/O failed: -6
(the two messages above repeat for every queued write on the failed qpairs; duplicate entries omitted)
00:22:44.122 [2024-10-14 14:36:24.387362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:22:44.123 [2024-10-14 14:36:24.388780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:22:44.123 NVMe io qpair process completion error
00:22:44.123 [2024-10-14 14:36:24.389834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:22:44.124 [2024-10-14 14:36:24.390680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:44.124 [2024-10-14 14:36:24.391612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:22:44.125 [2024-10-14 14:36:24.395038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:22:44.125 NVMe io qpair process completion error
00:22:44.125 Initializing NVMe Controllers
00:22:44.125 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:22:44.125 Controller IO queue size 128, less than required.
00:22:44.125 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:44.125 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:22:44.125 Controller IO queue size 128, less than required.
00:22:44.125 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:44.125 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:22:44.125 Controller IO queue size 128, less than required.
00:22:44.125 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:44.125 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:22:44.125 Controller IO queue size 128, less than required.
00:22:44.125 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:44.125 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:22:44.125 Controller IO queue size 128, less than required.
00:22:44.125 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:44.125 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:22:44.125 Controller IO queue size 128, less than required.
00:22:44.125 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:44.125 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:22:44.125 Controller IO queue size 128, less than required.
00:22:44.125 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:44.125 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:22:44.125 Controller IO queue size 128, less than required.
00:22:44.125 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:44.125 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:22:44.125 Controller IO queue size 128, less than required.
00:22:44.125 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:44.125 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:44.125 Controller IO queue size 128, less than required.
00:22:44.125 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:44.125 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:22:44.125 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:22:44.125 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:22:44.125 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:22:44.125 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:22:44.125 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:22:44.125 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:22:44.125 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:22:44.125 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:22:44.125 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:22:44.125 Initialization complete. Launching workers.
00:22:44.125 ========================================================
00:22:44.125 Latency(us)
00:22:44.125 Device Information : IOPS MiB/s Average min max
00:22:44.125 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1863.72 80.08 68697.64 629.65 126365.26
00:22:44.125 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1865.41 80.15 68662.27 851.12 155539.39
00:22:44.125 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1866.04 80.18 68660.26 879.69 126429.88
00:22:44.125 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1891.18 81.26 67766.84 844.00 119846.58
00:22:44.125 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1809.23 77.74 70127.18 575.26 119637.27
00:22:44.125 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1838.80 79.01 69021.88 621.23 125302.15
00:22:44.125 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1873.86 80.52 67764.08 655.23 122240.51
00:22:44.125 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1851.47 79.56 68613.21 724.12 124221.85
00:22:44.125 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1873.65 80.51 67823.16 871.90 119429.60
00:22:44.125 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1906.17 81.91 66700.72 667.96 124339.84
00:22:44.125 ========================================================
00:22:44.125 Total : 18639.53 800.92 68372.30 575.26 155539.39
00:22:44.125
00:22:44.125 [2024-10-14 14:36:24.398080] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaba1b0 is same with the state(6) to be set
00:22:44.125 [2024-10-14 14:36:24.398125] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaba4e0 is same with the state(6) to be set
00:22:44.125 [2024-10-14 14:36:24.398155] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0xaba810 is same with the state(6) to be set 00:22:44.125 [2024-10-14 14:36:24.398185] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabab40 is same with the state(6) to be set 00:22:44.125 [2024-10-14 14:36:24.398213] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xab95b0 is same with the state(6) to be set 00:22:44.125 [2024-10-14 14:36:24.398242] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xab9280 is same with the state(6) to be set 00:22:44.125 [2024-10-14 14:36:24.398270] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xab9fd0 is same with the state(6) to be set 00:22:44.125 [2024-10-14 14:36:24.398299] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xab98e0 is same with the state(6) to be set 00:22:44.125 [2024-10-14 14:36:24.398329] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabb760 is same with the state(6) to be set 00:22:44.125 [2024-10-14 14:36:24.398358] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xabb430 is same with the state(6) to be set 00:22:44.125 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:22:44.125 14:36:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:22:45.069 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 3461445 00:22:45.069 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@650 -- # local es=0 00:22:45.069 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3461445 00:22:45.070 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@638 
-- # local arg=wait 00:22:45.070 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:45.070 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # type -t wait 00:22:45.070 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:45.070 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # wait 3461445 00:22:45.070 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # es=1 00:22:45.070 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:45.070 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:45.070 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:45.070 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:22:45.070 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:45.070 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:45.070 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:45.070 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:45.070 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@514 -- # 
nvmfcleanup 00:22:45.070 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:22:45.070 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:45.070 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:22:45.070 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:45.070 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:45.070 rmmod nvme_tcp 00:22:45.070 rmmod nvme_fabrics 00:22:45.070 rmmod nvme_keyring 00:22:45.070 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:45.070 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:22:45.070 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:22:45.070 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@515 -- # '[' -n 3461181 ']' 00:22:45.070 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # killprocess 3461181 00:22:45.070 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 3461181 ']' 00:22:45.070 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 3461181 00:22:45.070 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3461181) - No such process 00:22:45.070 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@977 -- # echo 'Process with pid 3461181 is not found' 00:22:45.070 Process with pid 3461181 is not found 00:22:45.070 14:36:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:45.070 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:45.070 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:45.070 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:22:45.070 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # iptables-save 00:22:45.070 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:45.070 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # iptables-restore 00:22:45.070 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:45.070 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:45.070 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:45.070 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:45.070 14:36:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:47.616 14:36:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:47.616 00:22:47.616 real 0m10.285s 00:22:47.616 user 0m28.256s 00:22:47.616 sys 0m3.843s 00:22:47.616 14:36:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:47.616 14:36:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:47.616 ************************************ 00:22:47.616 END TEST nvmf_shutdown_tc4 00:22:47.616 ************************************ 00:22:47.616 14:36:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:22:47.616 00:22:47.616 real 0m43.865s 00:22:47.616 user 1m46.946s 00:22:47.616 sys 0m13.710s 00:22:47.616 14:36:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:47.616 14:36:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:47.616 ************************************ 00:22:47.616 END TEST nvmf_shutdown 00:22:47.616 ************************************ 00:22:47.616 14:36:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:22:47.616 00:22:47.616 real 12m48.127s 00:22:47.616 user 27m11.487s 00:22:47.616 sys 3m44.790s 00:22:47.616 14:36:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:47.616 14:36:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:47.616 ************************************ 00:22:47.616 END TEST nvmf_target_extra 00:22:47.616 ************************************ 00:22:47.616 14:36:27 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:22:47.616 14:36:27 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:47.616 14:36:27 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:47.616 14:36:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:47.616 ************************************ 00:22:47.616 START TEST nvmf_host 00:22:47.616 ************************************ 00:22:47.616 14:36:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:22:47.616 * Looking for test storage... 00:22:47.616 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:22:47.616 14:36:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:47.616 14:36:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lcov --version 00:22:47.616 14:36:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:47.616 14:36:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:47.616 14:36:28 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:47.616 14:36:28 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:47.616 14:36:28 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:47.616 14:36:28 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:22:47.616 14:36:28 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:22:47.616 14:36:28 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:22:47.616 14:36:28 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:22:47.616 14:36:28 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:22:47.616 14:36:28 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:22:47.616 14:36:28 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:22:47.616 14:36:28 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:47.616 14:36:28 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:22:47.616 14:36:28 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:22:47.616 14:36:28 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:47.616 14:36:28 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:47.616 14:36:28 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:22:47.616 14:36:28 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:22:47.616 14:36:28 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:47.616 14:36:28 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:22:47.616 14:36:28 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:22:47.616 14:36:28 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:22:47.616 14:36:28 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:22:47.616 14:36:28 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:47.616 14:36:28 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:22:47.616 14:36:28 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:22:47.616 14:36:28 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:47.616 14:36:28 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:47.616 14:36:28 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:22:47.616 14:36:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:47.616 14:36:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:47.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:47.616 --rc genhtml_branch_coverage=1 00:22:47.616 --rc genhtml_function_coverage=1 00:22:47.616 --rc genhtml_legend=1 00:22:47.616 --rc geninfo_all_blocks=1 00:22:47.616 --rc geninfo_unexecuted_blocks=1 00:22:47.616 00:22:47.616 ' 00:22:47.616 14:36:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:47.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:47.616 --rc genhtml_branch_coverage=1 00:22:47.616 --rc genhtml_function_coverage=1 00:22:47.616 --rc genhtml_legend=1 00:22:47.616 --rc 
geninfo_all_blocks=1 00:22:47.616 --rc geninfo_unexecuted_blocks=1 00:22:47.616 00:22:47.616 ' 00:22:47.616 14:36:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:47.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:47.617 --rc genhtml_branch_coverage=1 00:22:47.617 --rc genhtml_function_coverage=1 00:22:47.617 --rc genhtml_legend=1 00:22:47.617 --rc geninfo_all_blocks=1 00:22:47.617 --rc geninfo_unexecuted_blocks=1 00:22:47.617 00:22:47.617 ' 00:22:47.617 14:36:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:47.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:47.617 --rc genhtml_branch_coverage=1 00:22:47.617 --rc genhtml_function_coverage=1 00:22:47.617 --rc genhtml_legend=1 00:22:47.617 --rc geninfo_all_blocks=1 00:22:47.617 --rc geninfo_unexecuted_blocks=1 00:22:47.617 00:22:47.617 ' 00:22:47.617 14:36:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:47.617 14:36:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:22:47.617 14:36:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:47.617 14:36:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:47.617 14:36:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:47.617 14:36:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:47.617 14:36:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:47.617 14:36:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:47.617 14:36:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:47.617 14:36:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:47.617 14:36:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:47.617 14:36:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:22:47.617 14:36:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:47.617 14:36:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:47.617 14:36:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:47.617 14:36:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:47.617 14:36:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:47.617 14:36:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:47.617 14:36:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:47.617 14:36:28 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:47.617 14:36:28 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:47.617 14:36:28 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:47.617 14:36:28 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:47.617 14:36:28 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.617 14:36:28 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.617 14:36:28 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.617 14:36:28 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:22:47.617 14:36:28 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.617 14:36:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:22:47.617 14:36:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:47.617 14:36:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:47.617 14:36:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:47.617 14:36:28 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:47.617 14:36:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:47.617 14:36:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:47.617 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:47.617 14:36:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:47.617 14:36:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:47.617 14:36:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:47.617 14:36:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:22:47.617 14:36:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:22:47.617 14:36:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:22:47.617 14:36:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:47.617 14:36:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:47.617 14:36:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:47.617 14:36:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.617 ************************************ 00:22:47.617 START TEST nvmf_multicontroller 00:22:47.617 ************************************ 00:22:47.617 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:47.617 * Looking for test storage... 
00:22:47.617 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:47.617 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:47.617 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lcov --version 00:22:47.617 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:47.880 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:47.880 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:47.880 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:47.880 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:47.880 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:22:47.880 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:22:47.880 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:22:47.880 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:22:47.880 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:22:47.880 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:22:47.880 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:22:47.880 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:47.880 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:22:47.880 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:22:47.880 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:22:47.880 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:47.880 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:22:47.880 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:22:47.880 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:47.880 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:22:47.880 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:22:47.880 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:22:47.880 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:22:47.880 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:47.880 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:22:47.880 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:22:47.880 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:47.881 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:47.881 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:22:47.881 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:47.881 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:47.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:47.881 --rc genhtml_branch_coverage=1 00:22:47.881 --rc genhtml_function_coverage=1 
00:22:47.881 --rc genhtml_legend=1 00:22:47.881 --rc geninfo_all_blocks=1 00:22:47.881 --rc geninfo_unexecuted_blocks=1 00:22:47.881 00:22:47.881 ' 00:22:47.881 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:47.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:47.881 --rc genhtml_branch_coverage=1 00:22:47.881 --rc genhtml_function_coverage=1 00:22:47.881 --rc genhtml_legend=1 00:22:47.881 --rc geninfo_all_blocks=1 00:22:47.881 --rc geninfo_unexecuted_blocks=1 00:22:47.881 00:22:47.881 ' 00:22:47.881 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:47.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:47.881 --rc genhtml_branch_coverage=1 00:22:47.881 --rc genhtml_function_coverage=1 00:22:47.881 --rc genhtml_legend=1 00:22:47.881 --rc geninfo_all_blocks=1 00:22:47.881 --rc geninfo_unexecuted_blocks=1 00:22:47.881 00:22:47.881 ' 00:22:47.881 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:47.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:47.881 --rc genhtml_branch_coverage=1 00:22:47.881 --rc genhtml_function_coverage=1 00:22:47.881 --rc genhtml_legend=1 00:22:47.881 --rc geninfo_all_blocks=1 00:22:47.881 --rc geninfo_unexecuted_blocks=1 00:22:47.881 00:22:47.881 ' 00:22:47.881 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:47.881 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:22:47.881 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:47.881 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:47.881 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:22:47.881 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:47.881 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:47.881 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:47.881 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:47.881 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:47.881 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:47.881 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:47.881 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:47.881 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:47.881 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:47.881 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:47.881 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:47.881 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:47.881 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:47.881 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:22:47.881 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:22:47.881 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:47.881 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:47.881 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.881 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.881 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.881 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:22:47.881 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.881 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:22:47.881 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:47.881 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:47.881 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:47.881 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:47.881 14:36:28 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:47.881 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:47.881 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:47.881 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:47.881 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:47.881 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:47.881 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:47.881 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:47.881 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:22:47.881 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:22:47.881 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:47.881 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:22:47.881 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:22:47.881 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:47.881 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:47.881 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:47.881 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:47.881 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@438 -- # remove_spdk_ns 00:22:47.881 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:47.881 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:47.881 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:47.881 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:47.881 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:47.881 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:22:47.881 14:36:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:56.030 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:56.030 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:22:56.030 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:56.030 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:56.030 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:56.030 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:56.030 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:56.030 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:22:56.030 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:56.030 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:22:56.030 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:22:56.030 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:22:56.030 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:22:56.030 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:22:56.030 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:22:56.030 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:56.030 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:56.030 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:56.030 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:56.030 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:56.030 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:56.030 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:56.030 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:56.030 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:56.030 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:56.030 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:56.030 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:56.030 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:56.030 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:56.030 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:56.030 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:56.030 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:56.030 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:56.030 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:56.030 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:56.030 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:56.030 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:56.030 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:56.030 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:56.030 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:56.030 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:56.030 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:56.030 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:56.030 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:56.030 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:56.030 14:36:35 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:56.030 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:56.030 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:56.030 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:56.030 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:56.030 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:56.030 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:56.030 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:56.030 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:56.030 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:56.030 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:56.031 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:56.031 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:56.031 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:56.031 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:56.031 Found net devices under 0000:31:00.0: cvl_0_0 00:22:56.031 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:56.031 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@408 -- # for pci in 
"${pci_devs[@]}" 00:22:56.031 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:56.031 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:56.031 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:56.031 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:56.031 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:56.031 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:56.031 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:56.031 Found net devices under 0000:31:00.1: cvl_0_1 00:22:56.031 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:56.031 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:56.031 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # is_hw=yes 00:22:56.031 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:56.031 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:56.031 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:56.031 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:56.031 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:56.031 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:56.031 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:56.031 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:56.031 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:56.031 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:56.031 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:56.031 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:56.031 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:56.031 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:56.031 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:56.031 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:56.031 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:56.031 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:56.031 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:56.031 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:56.031 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:56.031 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:56.031 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:56.031 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:56.031 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:56.031 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:56.031 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:56.031 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.611 ms 00:22:56.031 00:22:56.031 --- 10.0.0.2 ping statistics --- 00:22:56.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:56.031 rtt min/avg/max/mdev = 0.611/0.611/0.611/0.000 ms 00:22:56.031 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:56.031 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:56.031 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:22:56.031 00:22:56.031 --- 10.0.0.1 ping statistics --- 00:22:56.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:56.031 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:22:56.031 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:56.031 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # return 0 00:22:56.031 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:56.031 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:56.031 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:56.031 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:56.031 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:56.031 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:56.031 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:56.031 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:22:56.031 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:56.031 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:56.031 14:36:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:56.031 14:36:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # nvmfpid=3467234 00:22:56.031 14:36:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # waitforlisten 3467234 00:22:56.031 14:36:36 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:56.031 14:36:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 3467234 ']' 00:22:56.031 14:36:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:56.031 14:36:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:56.031 14:36:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:56.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:56.031 14:36:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:56.031 14:36:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:56.031 [2024-10-14 14:36:36.059153] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:22:56.031 [2024-10-14 14:36:36.059216] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:56.031 [2024-10-14 14:36:36.150206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:56.031 [2024-10-14 14:36:36.202289] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:56.031 [2024-10-14 14:36:36.202337] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:56.031 [2024-10-14 14:36:36.202345] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:56.031 [2024-10-14 14:36:36.202352] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:56.031 [2024-10-14 14:36:36.202358] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:56.031 [2024-10-14 14:36:36.204264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:56.031 [2024-10-14 14:36:36.204539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:56.031 [2024-10-14 14:36:36.204542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:56.292 14:36:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:56.292 14:36:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:22:56.292 14:36:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:56.292 14:36:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:56.292 14:36:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:56.292 14:36:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:56.292 14:36:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:56.292 14:36:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.292 14:36:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:56.292 [2024-10-14 14:36:36.919098] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:56.292 14:36:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.292 14:36:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:56.292 14:36:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.292 14:36:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:56.292 Malloc0 00:22:56.292 14:36:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.292 14:36:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:56.292 14:36:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.292 14:36:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:56.292 14:36:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.292 14:36:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:56.292 14:36:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.292 14:36:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:56.292 14:36:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.292 14:36:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:56.292 14:36:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.292 14:36:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:56.292 [2024-10-14 
14:36:36.985083] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:56.292 14:36:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.292 14:36:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:56.292 14:36:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.292 14:36:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:56.292 [2024-10-14 14:36:36.997028] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:56.292 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.292 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:56.292 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.292 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:56.292 Malloc1 00:22:56.292 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.292 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:22:56.292 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.292 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:56.553 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.553 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:22:56.553 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.553 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:56.553 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.553 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:56.553 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.553 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:56.553 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.553 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:22:56.553 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.553 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:56.553 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.553 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3467288 00:22:56.553 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:56.553 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w 
write -t 1 -f 00:22:56.553 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 3467288 /var/tmp/bdevperf.sock 00:22:56.553 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 3467288 ']' 00:22:56.553 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:56.553 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:56.553 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:56.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:56.553 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:56.553 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:56.814 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:56.814 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:22:56.814 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:22:56.814 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.814 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:56.814 NVMe0n1 00:22:56.814 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.814 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:56.814 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:22:56.814 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.814 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:56.814 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.814 1 00:22:56.814 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:56.814 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:22:56.814 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:56.814 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:56.814 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:56.814 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:56.814 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:56.814 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:56.814 14:36:37 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.814 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:56.814 request: 00:22:56.814 { 00:22:56.814 "name": "NVMe0", 00:22:56.814 "trtype": "tcp", 00:22:56.814 "traddr": "10.0.0.2", 00:22:56.814 "adrfam": "ipv4", 00:22:56.814 "trsvcid": "4420", 00:22:56.814 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:56.814 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:22:56.814 "hostaddr": "10.0.0.1", 00:22:56.814 "prchk_reftag": false, 00:22:56.814 "prchk_guard": false, 00:22:56.814 "hdgst": false, 00:22:56.814 "ddgst": false, 00:22:56.814 "allow_unrecognized_csi": false, 00:22:56.814 "method": "bdev_nvme_attach_controller", 00:22:56.814 "req_id": 1 00:22:56.814 } 00:22:56.814 Got JSON-RPC error response 00:22:56.814 response: 00:22:56.814 { 00:22:56.814 "code": -114, 00:22:56.814 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:56.814 } 00:22:56.814 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:56.814 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:22:56.814 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:56.814 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:56.814 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:56.814 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:56.814 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:22:56.814 14:36:37 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:56.814 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:56.814 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:56.814 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:56.814 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:56.814 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:56.814 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.814 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:56.814 request: 00:22:56.814 { 00:22:56.814 "name": "NVMe0", 00:22:56.814 "trtype": "tcp", 00:22:56.814 "traddr": "10.0.0.2", 00:22:56.814 "adrfam": "ipv4", 00:22:56.814 "trsvcid": "4420", 00:22:56.814 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:56.814 "hostaddr": "10.0.0.1", 00:22:56.814 "prchk_reftag": false, 00:22:56.814 "prchk_guard": false, 00:22:56.814 "hdgst": false, 00:22:56.814 "ddgst": false, 00:22:56.814 "allow_unrecognized_csi": false, 00:22:56.814 "method": "bdev_nvme_attach_controller", 00:22:56.814 "req_id": 1 00:22:56.814 } 00:22:56.814 Got JSON-RPC error response 00:22:56.814 response: 00:22:56.814 { 00:22:56.814 "code": -114, 00:22:56.814 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:56.814 } 00:22:56.814 14:36:37 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:56.814 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:22:56.814 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:56.814 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:56.814 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:56.814 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:56.814 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:22:56.814 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:56.814 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:56.814 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:56.814 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:56.814 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:56.814 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:56.814 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.814 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:56.814 request: 00:22:56.814 { 00:22:56.814 "name": "NVMe0", 00:22:56.814 "trtype": "tcp", 00:22:56.814 "traddr": "10.0.0.2", 00:22:56.814 "adrfam": "ipv4", 00:22:56.814 "trsvcid": "4420", 00:22:56.814 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:56.815 "hostaddr": "10.0.0.1", 00:22:56.815 "prchk_reftag": false, 00:22:56.815 "prchk_guard": false, 00:22:56.815 "hdgst": false, 00:22:56.815 "ddgst": false, 00:22:56.815 "multipath": "disable", 00:22:56.815 "allow_unrecognized_csi": false, 00:22:56.815 "method": "bdev_nvme_attach_controller", 00:22:56.815 "req_id": 1 00:22:56.815 } 00:22:56.815 Got JSON-RPC error response 00:22:56.815 response: 00:22:56.815 { 00:22:56.815 "code": -114, 00:22:56.815 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:22:56.815 } 00:22:56.815 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:56.815 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:22:56.815 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:56.815 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:56.815 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:56.815 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:56.815 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:22:56.815 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:56.815 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:56.815 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:56.815 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:56.815 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:56.815 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:56.815 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.815 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:56.815 request: 00:22:56.815 { 00:22:56.815 "name": "NVMe0", 00:22:56.815 "trtype": "tcp", 00:22:56.815 "traddr": "10.0.0.2", 00:22:56.815 "adrfam": "ipv4", 00:22:56.815 "trsvcid": "4420", 00:22:56.815 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:56.815 "hostaddr": "10.0.0.1", 00:22:56.815 "prchk_reftag": false, 00:22:56.815 "prchk_guard": false, 00:22:56.815 "hdgst": false, 00:22:56.815 "ddgst": false, 00:22:56.815 "multipath": "failover", 00:22:56.815 "allow_unrecognized_csi": false, 00:22:56.815 "method": "bdev_nvme_attach_controller", 00:22:56.815 "req_id": 1 00:22:56.815 } 00:22:56.815 Got JSON-RPC error response 00:22:56.815 response: 00:22:56.815 { 00:22:56.815 "code": -114, 00:22:56.815 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:56.815 } 00:22:56.815 14:36:37 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:56.815 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:22:56.815 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:56.815 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:56.815 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:56.815 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:56.815 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.815 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:57.075 NVMe0n1 00:22:57.075 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.075 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:57.075 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.075 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:57.075 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.075 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:22:57.075 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.075 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:57.075 00:22:57.075 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.075 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:57.075 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:22:57.075 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.075 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:57.335 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.335 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:22:57.335 14:36:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:58.276 { 00:22:58.276 "results": [ 00:22:58.276 { 00:22:58.276 "job": "NVMe0n1", 00:22:58.276 "core_mask": "0x1", 00:22:58.276 "workload": "write", 00:22:58.276 "status": "finished", 00:22:58.276 "queue_depth": 128, 00:22:58.276 "io_size": 4096, 00:22:58.276 "runtime": 1.005473, 00:22:58.276 "iops": 27134.492920247485, 00:22:58.276 "mibps": 105.99411296971674, 00:22:58.276 "io_failed": 0, 00:22:58.276 "io_timeout": 0, 00:22:58.276 "avg_latency_us": 4705.777493677381, 00:22:58.276 "min_latency_us": 2129.92, 00:22:58.276 "max_latency_us": 11523.413333333334 00:22:58.276 } 00:22:58.276 ], 00:22:58.276 "core_count": 1 00:22:58.276 } 00:22:58.276 14:36:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_nvme_detach_controller NVMe1 00:22:58.276 14:36:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.276 14:36:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:58.276 14:36:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.276 14:36:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:22:58.276 14:36:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 3467288 00:22:58.276 14:36:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 3467288 ']' 00:22:58.276 14:36:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 3467288 00:22:58.276 14:36:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:22:58.276 14:36:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:58.276 14:36:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3467288 00:22:58.537 14:36:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:58.537 14:36:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:58.537 14:36:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3467288' 00:22:58.537 killing process with pid 3467288 00:22:58.537 14:36:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 3467288 00:22:58.537 14:36:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 3467288 00:22:58.537 14:36:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:22:58.537 14:36:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.537 14:36:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:58.537 14:36:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.537 14:36:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:58.537 14:36:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.537 14:36:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:58.537 14:36:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.537 14:36:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:22:58.537 14:36:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:58.537 14:36:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:22:58.537 14:36:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:22:58.537 14:36:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u 00:22:58.537 14:36:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat 00:22:58.537 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:58.538 [2024-10-14 14:36:37.116948] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 
00:22:58.538 [2024-10-14 14:36:37.117008] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3467288 ] 00:22:58.538 [2024-10-14 14:36:37.178647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:58.538 [2024-10-14 14:36:37.214897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:58.538 [2024-10-14 14:36:37.792682] bdev.c:4701:bdev_name_add: *ERROR*: Bdev name 68c56e30-9451-4e16-8ff0-ae0ae53c9350 already exists 00:22:58.538 [2024-10-14 14:36:37.792711] bdev.c:7846:bdev_register: *ERROR*: Unable to add uuid:68c56e30-9451-4e16-8ff0-ae0ae53c9350 alias for bdev NVMe1n1 00:22:58.538 [2024-10-14 14:36:37.792719] bdev_nvme.c:4483:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:22:58.538 Running I/O for 1 seconds... 00:22:58.538 27105.00 IOPS, 105.88 MiB/s 00:22:58.538 Latency(us) 00:22:58.538 [2024-10-14T12:36:39.265Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:58.538 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:22:58.538 NVMe0n1 : 1.01 27134.49 105.99 0.00 0.00 4705.78 2129.92 11523.41 00:22:58.538 [2024-10-14T12:36:39.265Z] =================================================================================================================== 00:22:58.538 [2024-10-14T12:36:39.265Z] Total : 27134.49 105.99 0.00 0.00 4705.78 2129.92 11523.41 00:22:58.538 Received shutdown signal, test time was about 1.000000 seconds 00:22:58.538 00:22:58.538 Latency(us) 00:22:58.538 [2024-10-14T12:36:39.265Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:58.538 [2024-10-14T12:36:39.265Z] =================================================================================================================== 00:22:58.538 [2024-10-14T12:36:39.265Z] Total : 0.00 0.00 
0.00 0.00 0.00 0.00 0.00 00:22:58.538 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:58.538 14:36:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:58.538 14:36:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:22:58.538 14:36:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:22:58.538 14:36:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:58.538 14:36:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:22:58.538 14:36:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:58.538 14:36:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:22:58.538 14:36:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:58.538 14:36:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:58.538 rmmod nvme_tcp 00:22:58.538 rmmod nvme_fabrics 00:22:58.538 rmmod nvme_keyring 00:22:58.538 14:36:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:58.538 14:36:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:22:58.538 14:36:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:22:58.538 14:36:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@515 -- # '[' -n 3467234 ']' 00:22:58.538 14:36:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # killprocess 3467234 00:22:58.538 14:36:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 3467234 ']' 00:22:58.538 14:36:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 3467234 
00:22:58.538 14:36:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:22:58.538 14:36:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:58.538 14:36:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3467234 00:22:58.799 14:36:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:58.799 14:36:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:58.799 14:36:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3467234' 00:22:58.799 killing process with pid 3467234 00:22:58.799 14:36:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 3467234 00:22:58.799 14:36:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 3467234 00:22:58.799 14:36:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:58.799 14:36:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:58.799 14:36:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:58.799 14:36:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:22:58.799 14:36:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # iptables-restore 00:22:58.799 14:36:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # iptables-save 00:22:58.799 14:36:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:58.799 14:36:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:58.799 14:36:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:22:58.799 14:36:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:58.799 14:36:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:58.799 14:36:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:01.347 14:36:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:01.347 00:23:01.347 real 0m13.360s 00:23:01.347 user 0m14.280s 00:23:01.347 sys 0m6.405s 00:23:01.347 14:36:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:01.347 14:36:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:01.347 ************************************ 00:23:01.347 END TEST nvmf_multicontroller 00:23:01.347 ************************************ 00:23:01.347 14:36:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:01.347 14:36:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:01.347 14:36:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:01.347 14:36:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.347 ************************************ 00:23:01.347 START TEST nvmf_aer 00:23:01.348 ************************************ 00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:01.348 * Looking for test storage... 
00:23:01.348 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lcov --version 00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:01.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:01.348 --rc genhtml_branch_coverage=1 00:23:01.348 --rc genhtml_function_coverage=1 00:23:01.348 --rc genhtml_legend=1 00:23:01.348 --rc geninfo_all_blocks=1 00:23:01.348 --rc geninfo_unexecuted_blocks=1 00:23:01.348 00:23:01.348 ' 00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:01.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:01.348 --rc 
genhtml_branch_coverage=1 00:23:01.348 --rc genhtml_function_coverage=1 00:23:01.348 --rc genhtml_legend=1 00:23:01.348 --rc geninfo_all_blocks=1 00:23:01.348 --rc geninfo_unexecuted_blocks=1 00:23:01.348 00:23:01.348 ' 00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:01.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:01.348 --rc genhtml_branch_coverage=1 00:23:01.348 --rc genhtml_function_coverage=1 00:23:01.348 --rc genhtml_legend=1 00:23:01.348 --rc geninfo_all_blocks=1 00:23:01.348 --rc geninfo_unexecuted_blocks=1 00:23:01.348 00:23:01.348 ' 00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:01.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:01.348 --rc genhtml_branch_coverage=1 00:23:01.348 --rc genhtml_function_coverage=1 00:23:01.348 --rc genhtml_legend=1 00:23:01.348 --rc geninfo_all_blocks=1 00:23:01.348 --rc geninfo_unexecuted_blocks=1 00:23:01.348 00:23:01.348 ' 00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:01.348 14:36:41 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:01.348 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:01.348 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:01.349 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:23:01.349 14:36:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:09.500 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:09.500 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:09.500 14:36:48 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:09.500 Found net devices under 0000:31:00.0: cvl_0_0 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@420 -- # 
(( 1 == 0 )) 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:09.500 Found net devices under 0000:31:00.1: cvl_0_1 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # is_hw=yes 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:09.500 14:36:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:09.500 14:36:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:09.500 14:36:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:09.500 14:36:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:09.500 14:36:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:09.500 14:36:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:09.501 14:36:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:09.501 14:36:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:09.501 14:36:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:09.501 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:09.501 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.710 ms 00:23:09.501 00:23:09.501 --- 10.0.0.2 ping statistics --- 00:23:09.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:09.501 rtt min/avg/max/mdev = 0.710/0.710/0.710/0.000 ms 00:23:09.501 14:36:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:09.501 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:09.501 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.268 ms 00:23:09.501 00:23:09.501 --- 10.0.0.1 ping statistics --- 00:23:09.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:09.501 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:23:09.501 14:36:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:09.501 14:36:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # return 0 00:23:09.501 14:36:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:09.501 14:36:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:09.501 14:36:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:09.501 14:36:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:09.501 14:36:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:09.501 14:36:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:09.501 14:36:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:09.501 14:36:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:23:09.501 14:36:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:09.501 14:36:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:09.501 14:36:49 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:23:09.501 14:36:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # nvmfpid=3472006 00:23:09.501 14:36:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # waitforlisten 3472006 00:23:09.501 14:36:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:09.501 14:36:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 3472006 ']' 00:23:09.501 14:36:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:09.501 14:36:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:09.501 14:36:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:09.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:09.501 14:36:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:09.501 14:36:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:09.501 [2024-10-14 14:36:49.351289] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:23:09.501 [2024-10-14 14:36:49.351352] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:09.501 [2024-10-14 14:36:49.426721] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:09.501 [2024-10-14 14:36:49.470708] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:09.501 [2024-10-14 14:36:49.470747] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:09.501 [2024-10-14 14:36:49.470755] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:09.501 [2024-10-14 14:36:49.470762] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:09.501 [2024-10-14 14:36:49.470768] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:09.501 [2024-10-14 14:36:49.472726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:09.501 [2024-10-14 14:36:49.472846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:09.501 [2024-10-14 14:36:49.473008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:09.501 [2024-10-14 14:36:49.473008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:09.501 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:09.501 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:23:09.501 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:09.501 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:09.501 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:09.501 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:09.501 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:09.501 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.501 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:09.501 [2024-10-14 14:36:50.212441] 
tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:09.501 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.501 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:09.501 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.501 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:09.762 Malloc0 00:23:09.762 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.762 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:09.762 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.762 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:09.762 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.762 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:09.762 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.762 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:09.762 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.762 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:09.762 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.762 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:09.762 [2024-10-14 14:36:50.285248] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:23:09.762 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.762 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:09.762 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.762 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:09.762 [ 00:23:09.762 { 00:23:09.762 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:09.762 "subtype": "Discovery", 00:23:09.762 "listen_addresses": [], 00:23:09.762 "allow_any_host": true, 00:23:09.762 "hosts": [] 00:23:09.762 }, 00:23:09.762 { 00:23:09.762 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:09.762 "subtype": "NVMe", 00:23:09.762 "listen_addresses": [ 00:23:09.762 { 00:23:09.762 "trtype": "TCP", 00:23:09.762 "adrfam": "IPv4", 00:23:09.762 "traddr": "10.0.0.2", 00:23:09.762 "trsvcid": "4420" 00:23:09.762 } 00:23:09.762 ], 00:23:09.762 "allow_any_host": true, 00:23:09.762 "hosts": [], 00:23:09.762 "serial_number": "SPDK00000000000001", 00:23:09.762 "model_number": "SPDK bdev Controller", 00:23:09.762 "max_namespaces": 2, 00:23:09.762 "min_cntlid": 1, 00:23:09.762 "max_cntlid": 65519, 00:23:09.762 "namespaces": [ 00:23:09.762 { 00:23:09.762 "nsid": 1, 00:23:09.762 "bdev_name": "Malloc0", 00:23:09.762 "name": "Malloc0", 00:23:09.762 "nguid": "760906938BA54C5E81860283B97F388D", 00:23:09.762 "uuid": "76090693-8ba5-4c5e-8186-0283b97f388d" 00:23:09.762 } 00:23:09.762 ] 00:23:09.762 } 00:23:09.762 ] 00:23:09.762 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.762 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:09.762 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:09.762 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=3472359 00:23:09.762 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # 
waitforfile /tmp/aer_touch_file 00:23:09.762 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:09.762 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:23:09.762 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:09.762 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:23:09.762 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:23:09.762 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:23:09.762 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:09.762 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:23:09.762 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:23:09.762 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:23:10.023 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:10.023 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:10.023 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:23:10.023 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:10.023 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.023 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:10.023 Malloc1 00:23:10.023 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.023 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:10.023 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.023 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:10.023 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.023 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:10.023 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.023 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:10.023 Asynchronous Event Request test 00:23:10.023 Attaching to 10.0.0.2 00:23:10.023 Attached to 10.0.0.2 00:23:10.023 Registering asynchronous event callbacks... 00:23:10.023 Starting namespace attribute notice tests for all controllers... 00:23:10.023 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:10.023 aer_cb - Changed Namespace 00:23:10.023 Cleaning up... 
00:23:10.023 [ 00:23:10.023 { 00:23:10.023 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:10.023 "subtype": "Discovery", 00:23:10.023 "listen_addresses": [], 00:23:10.023 "allow_any_host": true, 00:23:10.023 "hosts": [] 00:23:10.023 }, 00:23:10.023 { 00:23:10.023 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:10.023 "subtype": "NVMe", 00:23:10.023 "listen_addresses": [ 00:23:10.023 { 00:23:10.023 "trtype": "TCP", 00:23:10.023 "adrfam": "IPv4", 00:23:10.023 "traddr": "10.0.0.2", 00:23:10.023 "trsvcid": "4420" 00:23:10.023 } 00:23:10.023 ], 00:23:10.023 "allow_any_host": true, 00:23:10.023 "hosts": [], 00:23:10.023 "serial_number": "SPDK00000000000001", 00:23:10.023 "model_number": "SPDK bdev Controller", 00:23:10.023 "max_namespaces": 2, 00:23:10.023 "min_cntlid": 1, 00:23:10.023 "max_cntlid": 65519, 00:23:10.023 "namespaces": [ 00:23:10.023 { 00:23:10.023 "nsid": 1, 00:23:10.023 "bdev_name": "Malloc0", 00:23:10.023 "name": "Malloc0", 00:23:10.023 "nguid": "760906938BA54C5E81860283B97F388D", 00:23:10.023 "uuid": "76090693-8ba5-4c5e-8186-0283b97f388d" 00:23:10.023 }, 00:23:10.024 { 00:23:10.024 "nsid": 2, 00:23:10.024 "bdev_name": "Malloc1", 00:23:10.024 "name": "Malloc1", 00:23:10.024 "nguid": "8358CA7BD5354921B9939BD393119C43", 00:23:10.024 "uuid": "8358ca7b-d535-4921-b993-9bd393119c43" 00:23:10.024 } 00:23:10.024 ] 00:23:10.024 } 00:23:10.024 ] 00:23:10.024 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.024 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 3472359 00:23:10.024 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:10.024 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.024 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:10.024 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.024 14:36:50 
nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:23:10.024 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.024 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:10.024 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.024 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:10.024 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.024 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:10.024 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.024 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:10.024 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:23:10.024 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:10.024 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:23:10.024 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:10.024 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:23:10.024 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:10.024 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:10.024 rmmod nvme_tcp 00:23:10.024 rmmod nvme_fabrics 00:23:10.024 rmmod nvme_keyring 00:23:10.024 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:10.024 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:23:10.024 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:23:10.024 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@515 -- # '[' -n 
3472006 ']' 00:23:10.024 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # killprocess 3472006 00:23:10.024 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 3472006 ']' 00:23:10.024 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 3472006 00:23:10.024 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:23:10.024 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:10.024 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3472006 00:23:10.285 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:10.285 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:10.285 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3472006' 00:23:10.285 killing process with pid 3472006 00:23:10.285 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 3472006 00:23:10.285 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 3472006 00:23:10.285 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:10.285 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:10.285 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:10.285 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:23:10.285 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # iptables-save 00:23:10.285 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:10.285 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # iptables-restore 00:23:10.285 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:10.285 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:10.285 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:10.285 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:10.285 14:36:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:12.832 14:36:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:12.832 00:23:12.832 real 0m11.353s 00:23:12.832 user 0m7.962s 00:23:12.832 sys 0m5.988s 00:23:12.832 14:36:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:12.832 14:36:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:12.832 ************************************ 00:23:12.832 END TEST nvmf_aer 00:23:12.832 ************************************ 00:23:12.832 14:36:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:12.832 14:36:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:12.832 14:36:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:12.832 14:36:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.832 ************************************ 00:23:12.832 START TEST nvmf_async_init 00:23:12.832 ************************************ 00:23:12.832 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:12.832 * Looking for test storage... 
00:23:12.832 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:12.832 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:12.832 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lcov --version 00:23:12.833 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:12.833 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:12.833 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:12.833 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:12.833 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:12.833 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:23:12.833 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:23:12.833 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:23:12.833 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:23:12.833 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:23:12.833 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:23:12.833 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:23:12.833 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:12.833 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:23:12.833 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:23:12.833 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:12.833 14:36:53 
nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:12.833 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:23:12.833 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:23:12.833 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:12.833 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:23:12.833 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:23:12.833 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:23:12.833 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:23:12.833 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:12.833 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:23:12.833 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:23:12.833 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:12.833 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:12.833 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:23:12.833 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:12.833 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:12.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:12.833 --rc genhtml_branch_coverage=1 00:23:12.833 --rc genhtml_function_coverage=1 00:23:12.833 --rc genhtml_legend=1 00:23:12.833 --rc geninfo_all_blocks=1 00:23:12.833 --rc geninfo_unexecuted_blocks=1 00:23:12.833 
00:23:12.833 ' 00:23:12.833 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:12.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:12.833 --rc genhtml_branch_coverage=1 00:23:12.833 --rc genhtml_function_coverage=1 00:23:12.833 --rc genhtml_legend=1 00:23:12.833 --rc geninfo_all_blocks=1 00:23:12.833 --rc geninfo_unexecuted_blocks=1 00:23:12.833 00:23:12.833 ' 00:23:12.833 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:12.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:12.833 --rc genhtml_branch_coverage=1 00:23:12.833 --rc genhtml_function_coverage=1 00:23:12.833 --rc genhtml_legend=1 00:23:12.833 --rc geninfo_all_blocks=1 00:23:12.833 --rc geninfo_unexecuted_blocks=1 00:23:12.833 00:23:12.833 ' 00:23:12.833 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:12.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:12.833 --rc genhtml_branch_coverage=1 00:23:12.833 --rc genhtml_function_coverage=1 00:23:12.833 --rc genhtml_legend=1 00:23:12.833 --rc geninfo_all_blocks=1 00:23:12.833 --rc geninfo_unexecuted_blocks=1 00:23:12.833 00:23:12.833 ' 00:23:12.833 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:12.833 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:23:12.833 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:12.833 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:12.833 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:12.833 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:12.833 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- 
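The `cmp_versions 1.15 '<' 2` walk traced above splits both version strings on `.`, `-` and `:` and compares field by field until one side wins. A standalone sketch of that logic (function names mirror `scripts/common.sh`, but this is an illustration, not the SPDK source):

```shell
# Split versions on ., - and :, then compare numerically field by field.
cmp_versions() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    local op=$2 v a b
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        a=${ver1[v]:-0}                              # missing fields count as 0
        b=${ver2[v]:-0}
        ((a > b)) && { [[ $op == ">" ]]; return; }   # first differing field decides
        ((a < b)) && { [[ $op == "<" ]]; return; }
    done
    [[ $op == "==" ]]                                # all fields equal
}
lt() { cmp_versions "$1" "<" "$2"; }
```

In the run above, `lt 1.15 2` succeeds at the very first field (1 < 2), which is why the trace returns 0 and enables the lcov branch/function options.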
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:12.833 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:12.833 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:12.833 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:12.833 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:12.833 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:12.833 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:12.833 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:12.833 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:12.833 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:12.833 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:12.833 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:12.833 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:12.833 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:23:12.833 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:12.833 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:12.833 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:23:12.833 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:12.833 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:12.833 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:12.833 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:23:12.833 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:12.833 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:23:12.833 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:12.833 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:12.833 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:12.833 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:12.833 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:23:12.833 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:12.833 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:12.833 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:12.833 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:12.833 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:12.833 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:12.833 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:23:12.833 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:23:12.833 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:12.833 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:23:12.833 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:23:12.833 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=d8453d5edb43416ba65af47b2cfb341a 00:23:12.833 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:23:12.833 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:12.833 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:12.834 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:12.834 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:12.834 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:12.834 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@654 -- # 
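The `[: : integer expression expected` warning logged above comes from `'[' '' -eq 1 ']'` — an empty variable reaching a numeric test in `common.sh` line 33. A defensive pattern for that case (the variable name here is hypothetical, not taken from `common.sh`):

```shell
# '[ "" -eq 1 ]' is what the trace above effectively ran; with an empty
# variable, supply a default integer before the numeric comparison instead.
no_huge=""                                  # empty, as in this run
if [ "${no_huge:-0}" -eq 1 ]; then          # ':-0' covers unset AND empty
    echo "hugepages disabled"
else
    echo "hugepages enabled"
fi
```

The run continues despite the warning because `[` returns nonzero on the malformed expression, which happens to match the intended "not equal to 1" branch here; the `:-0` default makes that behavior explicit instead of accidental.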
xtrace_disable_per_cmd _remove_spdk_ns 00:23:12.834 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:12.834 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:12.834 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:12.834 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:12.834 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:23:12.834 14:36:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:20.979 14:37:00 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:20.979 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:20.979 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:20.979 Found net devices under 0000:31:00.0: cvl_0_0 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ 
up == up ]] 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:20.979 Found net devices under 0000:31:00.1: cvl_0_1 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # is_hw=yes 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:20.979 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:20.980 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:20.980 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:20.980 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:20.980 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:20.980 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:20.980 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:20.980 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:20.980 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:20.980 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.594 ms 00:23:20.980 00:23:20.980 --- 10.0.0.2 ping statistics --- 00:23:20.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:20.980 rtt min/avg/max/mdev = 0.594/0.594/0.594/0.000 ms 00:23:20.980 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:20.980 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:20.980 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.361 ms 00:23:20.980 00:23:20.980 --- 10.0.0.1 ping statistics --- 00:23:20.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:20.980 rtt min/avg/max/mdev = 0.361/0.361/0.361/0.000 ms 00:23:20.980 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:20.980 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # return 0 00:23:20.980 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:20.980 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:20.980 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:20.980 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:20.980 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:20.980 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:20.980 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:20.980 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:23:20.980 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:20.980 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- 
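The `ip netns` sequence above moves the target-side port into its own namespace, numbers both ends, and verifies reachability with the two pings. A dry-run sketch of that plumbing (interface and namespace names are taken from this run; `run` echoes instead of executing, so no root privileges are needed):

```shell
run() { echo "+ $*"; }   # print instead of execute; swap for 'sudo "$@"' on a real box

setup_target_ns() {
    local ns=cvl_0_0_ns_spdk tgt=cvl_0_0 ini=cvl_0_1
    run ip netns add "$ns"
    run ip link set "$tgt" netns "$ns"               # target port leaves the root ns
    run ip addr add 10.0.0.1/24 dev "$ini"           # initiator side stays in the root ns
    run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt"
    run ip link set "$ini" up
    run ip netns exec "$ns" ip link set "$tgt" up
    run ip netns exec "$ns" ip link set lo up        # loopback inside the namespace
}
setup_target_ns
```

Splitting target and initiator across a namespace boundary is what lets one physical machine exercise real NVMe/TCP traffic between its own two ports, as the 10.0.0.1 ↔ 10.0.0.2 pings above confirm.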
common/autotest_common.sh@724 -- # xtrace_disable 00:23:20.980 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:20.980 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # nvmfpid=3476701 00:23:20.980 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # waitforlisten 3476701 00:23:20.980 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:20.980 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 3476701 ']' 00:23:20.980 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:20.980 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:20.980 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:20.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:20.980 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:20.980 14:37:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:20.980 [2024-10-14 14:37:00.868913] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 
00:23:20.980 [2024-10-14 14:37:00.868982] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:20.980 [2024-10-14 14:37:00.943761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:20.980 [2024-10-14 14:37:00.986367] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:20.980 [2024-10-14 14:37:00.986406] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:20.980 [2024-10-14 14:37:00.986414] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:20.980 [2024-10-14 14:37:00.986421] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:20.980 [2024-10-14 14:37:00.986427] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
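`waitforlisten 3476701` above blocks until the freshly started `nvmf_tgt` answers on `/var/tmp/spdk.sock`. A minimal sketch of that pattern — poll the RPC socket while checking the process is still alive (poll count and interval are illustrative, not the SPDK values):

```shell
waitforlisten() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} attempts=${3:-100} i
    for ((i = 0; i < attempts; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # app died before listening
        [ -S "$sock" ] && return 0               # RPC socket is up
        sleep 0.1
    done
    return 1                                     # timed out
}
```

Checking `kill -0` on each iteration matters: without it, a crashed target would make the wait spin for the full timeout instead of failing fast, which is why the log prints the "Waiting for process to start up and listen..." banner before any RPC is attempted.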
00:23:20.980 [2024-10-14 14:37:00.987097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:20.980 14:37:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:20.980 14:37:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:23:20.980 14:37:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:20.980 14:37:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:20.980 14:37:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:21.241 14:37:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:21.241 14:37:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:23:21.241 14:37:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.241 14:37:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:21.241 [2024-10-14 14:37:01.725120] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:21.241 14:37:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.241 14:37:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:23:21.241 14:37:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.241 14:37:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:21.241 null0 00:23:21.241 14:37:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.241 14:37:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:23:21.241 14:37:01 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.241 14:37:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:21.241 14:37:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.241 14:37:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:23:21.241 14:37:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.241 14:37:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:21.241 14:37:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.241 14:37:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g d8453d5edb43416ba65af47b2cfb341a 00:23:21.241 14:37:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.241 14:37:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:21.241 14:37:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.241 14:37:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:21.241 14:37:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.241 14:37:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:21.241 [2024-10-14 14:37:01.781394] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:21.241 14:37:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.241 14:37:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:23:21.241 14:37:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.241 14:37:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:21.502 nvme0n1 00:23:21.502 14:37:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.502 14:37:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:21.502 14:37:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.502 14:37:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:21.502 [ 00:23:21.502 { 00:23:21.502 "name": "nvme0n1", 00:23:21.502 "aliases": [ 00:23:21.502 "d8453d5e-db43-416b-a65a-f47b2cfb341a" 00:23:21.502 ], 00:23:21.502 "product_name": "NVMe disk", 00:23:21.502 "block_size": 512, 00:23:21.502 "num_blocks": 2097152, 00:23:21.502 "uuid": "d8453d5e-db43-416b-a65a-f47b2cfb341a", 00:23:21.502 "numa_id": 0, 00:23:21.502 "assigned_rate_limits": { 00:23:21.502 "rw_ios_per_sec": 0, 00:23:21.502 "rw_mbytes_per_sec": 0, 00:23:21.502 "r_mbytes_per_sec": 0, 00:23:21.502 "w_mbytes_per_sec": 0 00:23:21.502 }, 00:23:21.502 "claimed": false, 00:23:21.502 "zoned": false, 00:23:21.502 "supported_io_types": { 00:23:21.502 "read": true, 00:23:21.502 "write": true, 00:23:21.502 "unmap": false, 00:23:21.502 "flush": true, 00:23:21.502 "reset": true, 00:23:21.502 "nvme_admin": true, 00:23:21.502 "nvme_io": true, 00:23:21.502 "nvme_io_md": false, 00:23:21.502 "write_zeroes": true, 00:23:21.502 "zcopy": false, 00:23:21.502 "get_zone_info": false, 00:23:21.502 "zone_management": false, 00:23:21.502 "zone_append": false, 00:23:21.502 "compare": true, 00:23:21.502 "compare_and_write": true, 00:23:21.502 "abort": true, 00:23:21.502 "seek_hole": false, 00:23:21.502 "seek_data": false, 00:23:21.502 "copy": true, 00:23:21.502 
"nvme_iov_md": false 00:23:21.502 }, 00:23:21.502 "memory_domains": [ 00:23:21.502 { 00:23:21.502 "dma_device_id": "system", 00:23:21.502 "dma_device_type": 1 00:23:21.502 } 00:23:21.502 ], 00:23:21.502 "driver_specific": { 00:23:21.502 "nvme": [ 00:23:21.502 { 00:23:21.502 "trid": { 00:23:21.502 "trtype": "TCP", 00:23:21.502 "adrfam": "IPv4", 00:23:21.502 "traddr": "10.0.0.2", 00:23:21.502 "trsvcid": "4420", 00:23:21.502 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:21.502 }, 00:23:21.502 "ctrlr_data": { 00:23:21.502 "cntlid": 1, 00:23:21.502 "vendor_id": "0x8086", 00:23:21.502 "model_number": "SPDK bdev Controller", 00:23:21.502 "serial_number": "00000000000000000000", 00:23:21.502 "firmware_revision": "25.01", 00:23:21.502 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:21.502 "oacs": { 00:23:21.502 "security": 0, 00:23:21.502 "format": 0, 00:23:21.502 "firmware": 0, 00:23:21.502 "ns_manage": 0 00:23:21.502 }, 00:23:21.502 "multi_ctrlr": true, 00:23:21.502 "ana_reporting": false 00:23:21.502 }, 00:23:21.502 "vs": { 00:23:21.502 "nvme_version": "1.3" 00:23:21.502 }, 00:23:21.502 "ns_data": { 00:23:21.502 "id": 1, 00:23:21.502 "can_share": true 00:23:21.502 } 00:23:21.502 } 00:23:21.502 ], 00:23:21.502 "mp_policy": "active_passive" 00:23:21.502 } 00:23:21.502 } 00:23:21.502 ] 00:23:21.502 14:37:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.502 14:37:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:23:21.502 14:37:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.502 14:37:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:21.502 [2024-10-14 14:37:02.050398] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:21.502 [2024-10-14 14:37:02.050460] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to 
flush tqpair=0xf2ac10 (9): Bad file descriptor 00:23:21.502 [2024-10-14 14:37:02.182163] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:21.502 14:37:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.502 14:37:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:21.502 14:37:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.502 14:37:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:21.502 [ 00:23:21.502 { 00:23:21.502 "name": "nvme0n1", 00:23:21.502 "aliases": [ 00:23:21.502 "d8453d5e-db43-416b-a65a-f47b2cfb341a" 00:23:21.502 ], 00:23:21.502 "product_name": "NVMe disk", 00:23:21.502 "block_size": 512, 00:23:21.502 "num_blocks": 2097152, 00:23:21.502 "uuid": "d8453d5e-db43-416b-a65a-f47b2cfb341a", 00:23:21.502 "numa_id": 0, 00:23:21.502 "assigned_rate_limits": { 00:23:21.502 "rw_ios_per_sec": 0, 00:23:21.502 "rw_mbytes_per_sec": 0, 00:23:21.502 "r_mbytes_per_sec": 0, 00:23:21.502 "w_mbytes_per_sec": 0 00:23:21.502 }, 00:23:21.502 "claimed": false, 00:23:21.502 "zoned": false, 00:23:21.502 "supported_io_types": { 00:23:21.502 "read": true, 00:23:21.502 "write": true, 00:23:21.502 "unmap": false, 00:23:21.502 "flush": true, 00:23:21.502 "reset": true, 00:23:21.502 "nvme_admin": true, 00:23:21.502 "nvme_io": true, 00:23:21.502 "nvme_io_md": false, 00:23:21.502 "write_zeroes": true, 00:23:21.502 "zcopy": false, 00:23:21.502 "get_zone_info": false, 00:23:21.502 "zone_management": false, 00:23:21.502 "zone_append": false, 00:23:21.502 "compare": true, 00:23:21.502 "compare_and_write": true, 00:23:21.503 "abort": true, 00:23:21.503 "seek_hole": false, 00:23:21.503 "seek_data": false, 00:23:21.503 "copy": true, 00:23:21.503 "nvme_iov_md": false 00:23:21.503 }, 00:23:21.503 "memory_domains": [ 00:23:21.503 { 00:23:21.503 
"dma_device_id": "system", 00:23:21.503 "dma_device_type": 1 00:23:21.503 } 00:23:21.503 ], 00:23:21.503 "driver_specific": { 00:23:21.503 "nvme": [ 00:23:21.503 { 00:23:21.503 "trid": { 00:23:21.503 "trtype": "TCP", 00:23:21.503 "adrfam": "IPv4", 00:23:21.503 "traddr": "10.0.0.2", 00:23:21.503 "trsvcid": "4420", 00:23:21.503 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:21.503 }, 00:23:21.503 "ctrlr_data": { 00:23:21.503 "cntlid": 2, 00:23:21.503 "vendor_id": "0x8086", 00:23:21.503 "model_number": "SPDK bdev Controller", 00:23:21.503 "serial_number": "00000000000000000000", 00:23:21.503 "firmware_revision": "25.01", 00:23:21.503 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:21.503 "oacs": { 00:23:21.503 "security": 0, 00:23:21.503 "format": 0, 00:23:21.503 "firmware": 0, 00:23:21.503 "ns_manage": 0 00:23:21.503 }, 00:23:21.503 "multi_ctrlr": true, 00:23:21.503 "ana_reporting": false 00:23:21.503 }, 00:23:21.503 "vs": { 00:23:21.503 "nvme_version": "1.3" 00:23:21.503 }, 00:23:21.503 "ns_data": { 00:23:21.503 "id": 1, 00:23:21.503 "can_share": true 00:23:21.503 } 00:23:21.503 } 00:23:21.503 ], 00:23:21.503 "mp_policy": "active_passive" 00:23:21.503 } 00:23:21.503 } 00:23:21.503 ] 00:23:21.503 14:37:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.503 14:37:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:21.503 14:37:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.503 14:37:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:21.503 14:37:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.503 14:37:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:23:21.763 14:37:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.jXk5O4Z4IR 00:23:21.763 14:37:02 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:21.763 14:37:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.jXk5O4Z4IR 00:23:21.763 14:37:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.jXk5O4Z4IR 00:23:21.763 14:37:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.763 14:37:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:21.763 14:37:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.763 14:37:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:23:21.763 14:37:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.763 14:37:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:21.763 14:37:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.763 14:37:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:23:21.763 14:37:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.763 14:37:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:21.763 [2024-10-14 14:37:02.267141] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:21.763 [2024-10-14 14:37:02.267265] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:21.763 14:37:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.763 14:37:02 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:23:21.763 14:37:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.763 14:37:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:21.763 14:37:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.763 14:37:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:21.763 14:37:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.763 14:37:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:21.763 [2024-10-14 14:37:02.291219] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:21.763 nvme0n1 00:23:21.763 14:37:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.763 14:37:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:21.763 14:37:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.763 14:37:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:21.763 [ 00:23:21.763 { 00:23:21.763 "name": "nvme0n1", 00:23:21.763 "aliases": [ 00:23:21.763 "d8453d5e-db43-416b-a65a-f47b2cfb341a" 00:23:21.763 ], 00:23:21.763 "product_name": "NVMe disk", 00:23:21.763 "block_size": 512, 00:23:21.763 "num_blocks": 2097152, 00:23:21.763 "uuid": "d8453d5e-db43-416b-a65a-f47b2cfb341a", 00:23:21.763 "numa_id": 0, 00:23:21.763 "assigned_rate_limits": { 00:23:21.763 "rw_ios_per_sec": 0, 00:23:21.763 "rw_mbytes_per_sec": 0, 
00:23:21.763 "r_mbytes_per_sec": 0, 00:23:21.763 "w_mbytes_per_sec": 0 00:23:21.763 }, 00:23:21.763 "claimed": false, 00:23:21.763 "zoned": false, 00:23:21.763 "supported_io_types": { 00:23:21.763 "read": true, 00:23:21.763 "write": true, 00:23:21.763 "unmap": false, 00:23:21.763 "flush": true, 00:23:21.763 "reset": true, 00:23:21.763 "nvme_admin": true, 00:23:21.763 "nvme_io": true, 00:23:21.763 "nvme_io_md": false, 00:23:21.763 "write_zeroes": true, 00:23:21.763 "zcopy": false, 00:23:21.763 "get_zone_info": false, 00:23:21.763 "zone_management": false, 00:23:21.763 "zone_append": false, 00:23:21.763 "compare": true, 00:23:21.763 "compare_and_write": true, 00:23:21.763 "abort": true, 00:23:21.763 "seek_hole": false, 00:23:21.763 "seek_data": false, 00:23:21.763 "copy": true, 00:23:21.763 "nvme_iov_md": false 00:23:21.763 }, 00:23:21.763 "memory_domains": [ 00:23:21.763 { 00:23:21.763 "dma_device_id": "system", 00:23:21.763 "dma_device_type": 1 00:23:21.763 } 00:23:21.763 ], 00:23:21.763 "driver_specific": { 00:23:21.763 "nvme": [ 00:23:21.763 { 00:23:21.763 "trid": { 00:23:21.763 "trtype": "TCP", 00:23:21.763 "adrfam": "IPv4", 00:23:21.763 "traddr": "10.0.0.2", 00:23:21.763 "trsvcid": "4421", 00:23:21.763 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:21.763 }, 00:23:21.763 "ctrlr_data": { 00:23:21.763 "cntlid": 3, 00:23:21.763 "vendor_id": "0x8086", 00:23:21.763 "model_number": "SPDK bdev Controller", 00:23:21.763 "serial_number": "00000000000000000000", 00:23:21.763 "firmware_revision": "25.01", 00:23:21.763 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:21.763 "oacs": { 00:23:21.763 "security": 0, 00:23:21.763 "format": 0, 00:23:21.763 "firmware": 0, 00:23:21.763 "ns_manage": 0 00:23:21.763 }, 00:23:21.763 "multi_ctrlr": true, 00:23:21.763 "ana_reporting": false 00:23:21.763 }, 00:23:21.763 "vs": { 00:23:21.763 "nvme_version": "1.3" 00:23:21.763 }, 00:23:21.763 "ns_data": { 00:23:21.763 "id": 1, 00:23:21.763 "can_share": true 00:23:21.763 } 00:23:21.763 } 
00:23:21.763 ], 00:23:21.763 "mp_policy": "active_passive" 00:23:21.763 } 00:23:21.763 } 00:23:21.763 ] 00:23:21.763 14:37:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.763 14:37:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:21.763 14:37:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.763 14:37:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:21.763 14:37:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.763 14:37:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.jXk5O4Z4IR 00:23:21.763 14:37:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:23:21.763 14:37:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:23:21.763 14:37:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:21.763 14:37:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:23:21.763 14:37:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:21.763 14:37:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:23:21.763 14:37:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:21.763 14:37:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:21.763 rmmod nvme_tcp 00:23:21.763 rmmod nvme_fabrics 00:23:21.763 rmmod nvme_keyring 00:23:21.763 14:37:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:21.763 14:37:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:23:21.763 14:37:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:23:21.763 14:37:02 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@515 -- # '[' -n 3476701 ']' 00:23:21.763 14:37:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # killprocess 3476701 00:23:21.763 14:37:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 3476701 ']' 00:23:21.763 14:37:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 3476701 00:23:21.764 14:37:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:23:21.764 14:37:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:21.764 14:37:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3476701 00:23:22.023 14:37:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:22.023 14:37:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:22.023 14:37:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3476701' 00:23:22.023 killing process with pid 3476701 00:23:22.023 14:37:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 3476701 00:23:22.023 14:37:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 3476701 00:23:22.023 14:37:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:22.024 14:37:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:22.024 14:37:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:22.024 14:37:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:23:22.024 14:37:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # iptables-save 00:23:22.024 14:37:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:22.024 
14:37:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # iptables-restore 00:23:22.024 14:37:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:22.024 14:37:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:22.024 14:37:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:22.024 14:37:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:22.024 14:37:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:24.567 14:37:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:24.567 00:23:24.567 real 0m11.687s 00:23:24.567 user 0m4.246s 00:23:24.567 sys 0m5.965s 00:23:24.568 14:37:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:24.568 14:37:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:24.568 ************************************ 00:23:24.568 END TEST nvmf_async_init 00:23:24.568 ************************************ 00:23:24.568 14:37:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:24.568 14:37:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:24.568 14:37:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:24.568 14:37:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.568 ************************************ 00:23:24.568 START TEST dma 00:23:24.568 ************************************ 00:23:24.568 14:37:04 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 
00:23:24.568 * Looking for test storage... 00:23:24.568 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:24.568 14:37:04 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:24.568 14:37:04 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lcov --version 00:23:24.568 14:37:04 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:24.568 14:37:04 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:24.568 14:37:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:24.568 14:37:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:24.568 14:37:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:24.568 14:37:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:23:24.568 14:37:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:23:24.568 14:37:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:23:24.568 14:37:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:23:24.568 14:37:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:23:24.568 14:37:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:23:24.568 14:37:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:23:24.568 14:37:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:24.568 14:37:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:23:24.568 14:37:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:23:24.568 14:37:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:24.568 14:37:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:24.568 14:37:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:23:24.568 14:37:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:23:24.568 14:37:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:24.568 14:37:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:23:24.568 14:37:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:23:24.568 14:37:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:23:24.568 14:37:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:23:24.568 14:37:05 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:24.568 14:37:05 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:23:24.568 14:37:05 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:23:24.568 14:37:05 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:24.568 14:37:05 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:24.568 14:37:05 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:23:24.568 14:37:05 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:24.568 14:37:05 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:24.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:24.568 --rc genhtml_branch_coverage=1 00:23:24.568 --rc genhtml_function_coverage=1 00:23:24.568 --rc genhtml_legend=1 00:23:24.568 --rc geninfo_all_blocks=1 00:23:24.568 --rc geninfo_unexecuted_blocks=1 00:23:24.568 00:23:24.568 ' 00:23:24.568 14:37:05 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:24.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:24.568 --rc genhtml_branch_coverage=1 00:23:24.568 --rc genhtml_function_coverage=1 
00:23:24.568 --rc genhtml_legend=1 00:23:24.568 --rc geninfo_all_blocks=1 00:23:24.568 --rc geninfo_unexecuted_blocks=1 00:23:24.568 00:23:24.568 ' 00:23:24.568 14:37:05 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:24.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:24.568 --rc genhtml_branch_coverage=1 00:23:24.568 --rc genhtml_function_coverage=1 00:23:24.568 --rc genhtml_legend=1 00:23:24.568 --rc geninfo_all_blocks=1 00:23:24.568 --rc geninfo_unexecuted_blocks=1 00:23:24.568 00:23:24.568 ' 00:23:24.568 14:37:05 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:24.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:24.568 --rc genhtml_branch_coverage=1 00:23:24.568 --rc genhtml_function_coverage=1 00:23:24.568 --rc genhtml_legend=1 00:23:24.568 --rc geninfo_all_blocks=1 00:23:24.568 --rc geninfo_unexecuted_blocks=1 00:23:24.568 00:23:24.568 ' 00:23:24.568 14:37:05 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:24.568 14:37:05 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:23:24.568 14:37:05 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:24.568 14:37:05 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:24.568 14:37:05 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:24.568 14:37:05 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:24.568 14:37:05 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:24.568 14:37:05 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:24.568 14:37:05 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:24.568 14:37:05 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:24.568 14:37:05 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:24.568 14:37:05 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:24.568 14:37:05 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:24.568 14:37:05 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:24.568 14:37:05 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:24.568 14:37:05 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:24.568 14:37:05 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:24.568 14:37:05 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:24.568 14:37:05 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:24.568 14:37:05 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:23:24.568 14:37:05 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:24.568 14:37:05 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:24.568 14:37:05 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:24.568 14:37:05 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.568 14:37:05 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.568 14:37:05 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.568 14:37:05 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:23:24.568 
14:37:05 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.568 14:37:05 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:23:24.568 14:37:05 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:24.568 14:37:05 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:24.568 14:37:05 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:24.568 14:37:05 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:24.568 14:37:05 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:24.568 14:37:05 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:24.568 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:24.568 14:37:05 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:24.568 14:37:05 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:24.568 14:37:05 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:24.568 14:37:05 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:23:24.568 14:37:05 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:23:24.568 00:23:24.568 real 0m0.225s 00:23:24.568 user 0m0.140s 00:23:24.568 sys 0m0.098s 00:23:24.568 14:37:05 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:24.568 14:37:05 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:23:24.568 ************************************ 00:23:24.568 END TEST dma 00:23:24.568 ************************************ 00:23:24.568 14:37:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:24.568 14:37:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:24.568 14:37:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:24.568 14:37:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.568 ************************************ 00:23:24.568 START TEST nvmf_identify 00:23:24.568 ************************************ 00:23:24.568 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:24.568 * Looking for test storage... 
00:23:24.568 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:24.568 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:24.568 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lcov --version 00:23:24.568 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:24.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:24.831 --rc genhtml_branch_coverage=1 00:23:24.831 --rc genhtml_function_coverage=1 00:23:24.831 --rc genhtml_legend=1 00:23:24.831 --rc geninfo_all_blocks=1 00:23:24.831 --rc geninfo_unexecuted_blocks=1 00:23:24.831 00:23:24.831 ' 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- 
# LCOV_OPTS=' 00:23:24.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:24.831 --rc genhtml_branch_coverage=1 00:23:24.831 --rc genhtml_function_coverage=1 00:23:24.831 --rc genhtml_legend=1 00:23:24.831 --rc geninfo_all_blocks=1 00:23:24.831 --rc geninfo_unexecuted_blocks=1 00:23:24.831 00:23:24.831 ' 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:24.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:24.831 --rc genhtml_branch_coverage=1 00:23:24.831 --rc genhtml_function_coverage=1 00:23:24.831 --rc genhtml_legend=1 00:23:24.831 --rc geninfo_all_blocks=1 00:23:24.831 --rc geninfo_unexecuted_blocks=1 00:23:24.831 00:23:24.831 ' 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:24.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:24.831 --rc genhtml_branch_coverage=1 00:23:24.831 --rc genhtml_function_coverage=1 00:23:24.831 --rc genhtml_legend=1 00:23:24.831 --rc geninfo_all_blocks=1 00:23:24.831 --rc geninfo_unexecuted_blocks=1 00:23:24.831 00:23:24.831 ' 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:24.831 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:23:24.831 14:37:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:32.979 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:32.979 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:23:32.979 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:32.979 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:32.979 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:32.979 14:37:12 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:32.979 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:32.979 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:23:32.979 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:32.979 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:23:32.979 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:23:32.979 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:23:32.979 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:23:32.979 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:23:32.979 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:23:32.979 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:32.979 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:32.979 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:32.979 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:32.979 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:32.979 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:32.979 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:32.979 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:32.980 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:32.980 
14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:32.980 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:32.980 Found net devices under 0000:31:00.0: cvl_0_0 00:23:32.980 14:37:12 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:32.980 Found net devices under 0000:31:00.1: cvl_0_1 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # is_hw=yes 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:32.980 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:32.980 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.596 ms 00:23:32.980 00:23:32.980 --- 10.0.0.2 ping statistics --- 00:23:32.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:32.980 rtt min/avg/max/mdev = 0.596/0.596/0.596/0.000 ms 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:32.980 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:32.980 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.303 ms 00:23:32.980 00:23:32.980 --- 10.0.0.1 ping statistics --- 00:23:32.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:32.980 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # return 0 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@491 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3481294 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3481294 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 3481294 ']' 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:32.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
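The namespace plumbing traced above (nvmf_tcp_init in test/nvmf/common.sh) boils down to: create a network namespace for the target, move one port of the NIC pair into it, address both ends on 10.0.0.0/24, bring the links up, and open TCP port 4420 in the firewall. A minimal dry-run sketch of that sequence follows; the helper names `setup_nvmf_netns` and `run` are illustrative, not part of SPDK, and `DRY_RUN=1` echoes the commands instead of executing them, since the real sequence needs root and a NIC that is safe to detach from the root namespace:

```shell
# Illustrative sketch only; SPDK's actual helpers live in test/nvmf/common.sh.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi; }

setup_nvmf_netns() {
    local ns=$1 target_if=$2 initiator_if=$3   # e.g. cvl_0_0_ns_spdk cvl_0_0 cvl_0_1
    run ip netns add "$ns"
    run ip link set "$target_if" netns "$ns"           # target NIC lives in the namespace
    run ip addr add 10.0.0.1/24 dev "$initiator_if"    # initiator side stays in the root ns
    run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
    run ip link set "$initiator_if" up
    run ip netns exec "$ns" ip link set "$target_if" up
    run ip netns exec "$ns" ip link set lo up
    # Allow NVMe/TCP traffic in from the initiator interface.
    run iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
}
```

Running it for real would mean `DRY_RUN=0` as root; the pings in the log (root ns to 10.0.0.2, namespace to 10.0.0.1) then verify the topology before the target starts.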
00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:32.980 14:37:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:32.980 [2024-10-14 14:37:12.769460] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:23:32.980 [2024-10-14 14:37:12.769522] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:32.980 [2024-10-14 14:37:12.844511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:32.980 [2024-10-14 14:37:12.885542] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:32.980 [2024-10-14 14:37:12.885575] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:32.980 [2024-10-14 14:37:12.885582] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:32.980 [2024-10-14 14:37:12.885589] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:32.980 [2024-10-14 14:37:12.885595] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:32.980 [2024-10-14 14:37:12.887056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:32.980 [2024-10-14 14:37:12.887085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:32.980 [2024-10-14 14:37:12.887143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:32.981 [2024-10-14 14:37:12.887144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:32.981 14:37:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:32.981 14:37:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:23:32.981 14:37:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:32.981 14:37:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.981 14:37:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:32.981 [2024-10-14 14:37:13.572847] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:32.981 14:37:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.981 14:37:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:23:32.981 14:37:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:32.981 14:37:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:32.981 14:37:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:32.981 14:37:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.981 14:37:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:32.981 Malloc0 00:23:32.981 14:37:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.981 14:37:13 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:32.981 14:37:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.981 14:37:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:32.981 14:37:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.981 14:37:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:23:32.981 14:37:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.981 14:37:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:32.981 14:37:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.981 14:37:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:32.981 14:37:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.981 14:37:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:32.981 [2024-10-14 14:37:13.680395] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:32.981 14:37:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.981 14:37:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:32.981 14:37:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.981 14:37:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:32.981 14:37:13 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.981 14:37:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:23:32.981 14:37:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.981 14:37:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:33.243 [ 00:23:33.244 { 00:23:33.244 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:33.244 "subtype": "Discovery", 00:23:33.244 "listen_addresses": [ 00:23:33.244 { 00:23:33.244 "trtype": "TCP", 00:23:33.244 "adrfam": "IPv4", 00:23:33.244 "traddr": "10.0.0.2", 00:23:33.244 "trsvcid": "4420" 00:23:33.244 } 00:23:33.244 ], 00:23:33.244 "allow_any_host": true, 00:23:33.244 "hosts": [] 00:23:33.244 }, 00:23:33.244 { 00:23:33.244 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:33.244 "subtype": "NVMe", 00:23:33.244 "listen_addresses": [ 00:23:33.244 { 00:23:33.244 "trtype": "TCP", 00:23:33.244 "adrfam": "IPv4", 00:23:33.244 "traddr": "10.0.0.2", 00:23:33.244 "trsvcid": "4420" 00:23:33.244 } 00:23:33.244 ], 00:23:33.244 "allow_any_host": true, 00:23:33.244 "hosts": [], 00:23:33.244 "serial_number": "SPDK00000000000001", 00:23:33.244 "model_number": "SPDK bdev Controller", 00:23:33.244 "max_namespaces": 32, 00:23:33.244 "min_cntlid": 1, 00:23:33.244 "max_cntlid": 65519, 00:23:33.244 "namespaces": [ 00:23:33.244 { 00:23:33.244 "nsid": 1, 00:23:33.244 "bdev_name": "Malloc0", 00:23:33.244 "name": "Malloc0", 00:23:33.244 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:23:33.244 "eui64": "ABCDEF0123456789", 00:23:33.244 "uuid": "8a1aa7e5-9834-45fd-8a67-fe1bc98f83a0" 00:23:33.244 } 00:23:33.244 ] 00:23:33.244 } 00:23:33.244 ] 00:23:33.244 14:37:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.244 14:37:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:23:33.244 [2024-10-14 14:37:13.744309] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:23:33.244 [2024-10-14 14:37:13.744350] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3481574 ] 00:23:33.244 [2024-10-14 14:37:13.777727] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:23:33.244 [2024-10-14 14:37:13.777773] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:33.244 [2024-10-14 14:37:13.777779] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:33.244 [2024-10-14 14:37:13.777790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:33.244 [2024-10-14 14:37:13.777800] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:33.244 [2024-10-14 14:37:13.781341] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:23:33.244 [2024-10-14 14:37:13.781374] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1ba1620 0 00:23:33.244 [2024-10-14 14:37:13.789076] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:33.244 [2024-10-14 14:37:13.789089] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:33.244 [2024-10-14 14:37:13.789094] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:33.244 [2024-10-14 14:37:13.789097] 
nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:33.244 [2024-10-14 14:37:13.789129] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:33.244 [2024-10-14 14:37:13.789136] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:33.244 [2024-10-14 14:37:13.789140] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ba1620) 00:23:33.244 [2024-10-14 14:37:13.789153] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:33.244 [2024-10-14 14:37:13.789171] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01480, cid 0, qid 0 00:23:33.244 [2024-10-14 14:37:13.796075] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:33.244 [2024-10-14 14:37:13.796086] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:33.244 [2024-10-14 14:37:13.796090] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:33.244 [2024-10-14 14:37:13.796095] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01480) on tqpair=0x1ba1620 00:23:33.244 [2024-10-14 14:37:13.796104] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:33.244 [2024-10-14 14:37:13.796111] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:23:33.244 [2024-10-14 14:37:13.796117] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:23:33.244 [2024-10-14 14:37:13.796131] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:33.244 [2024-10-14 14:37:13.796135] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:33.244 [2024-10-14 14:37:13.796139] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ba1620) 
00:23:33.244 [2024-10-14 14:37:13.796150] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.244 [2024-10-14 14:37:13.796166] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01480, cid 0, qid 0 00:23:33.244 [2024-10-14 14:37:13.796351] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:33.244 [2024-10-14 14:37:13.796358] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:33.244 [2024-10-14 14:37:13.796361] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:33.244 [2024-10-14 14:37:13.796365] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01480) on tqpair=0x1ba1620 00:23:33.244 [2024-10-14 14:37:13.796371] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:23:33.244 [2024-10-14 14:37:13.796378] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:23:33.244 [2024-10-14 14:37:13.796385] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:33.244 [2024-10-14 14:37:13.796389] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:33.244 [2024-10-14 14:37:13.796393] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ba1620) 00:23:33.244 [2024-10-14 14:37:13.796399] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.244 [2024-10-14 14:37:13.796410] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01480, cid 0, qid 0 00:23:33.244 [2024-10-14 14:37:13.796601] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:33.244 [2024-10-14 14:37:13.796607] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:23:33.244 [2024-10-14 14:37:13.796611] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:33.244 [2024-10-14 14:37:13.796615] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01480) on tqpair=0x1ba1620 00:23:33.244 [2024-10-14 14:37:13.796620] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:23:33.244 [2024-10-14 14:37:13.796628] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:23:33.244 [2024-10-14 14:37:13.796635] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:33.244 [2024-10-14 14:37:13.796639] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:33.244 [2024-10-14 14:37:13.796642] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ba1620) 00:23:33.244 [2024-10-14 14:37:13.796649] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.244 [2024-10-14 14:37:13.796659] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01480, cid 0, qid 0 00:23:33.244 [2024-10-14 14:37:13.796863] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:33.244 [2024-10-14 14:37:13.796869] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:33.244 [2024-10-14 14:37:13.796873] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:33.244 [2024-10-14 14:37:13.796876] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01480) on tqpair=0x1ba1620 00:23:33.244 [2024-10-14 14:37:13.796882] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:33.244 [2024-10-14 14:37:13.796891] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:33.244 [2024-10-14 14:37:13.796895] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:33.244 [2024-10-14 14:37:13.796898] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ba1620) 00:23:33.244 [2024-10-14 14:37:13.796905] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.244 [2024-10-14 14:37:13.796915] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01480, cid 0, qid 0 00:23:33.244 [2024-10-14 14:37:13.797099] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:33.244 [2024-10-14 14:37:13.797106] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:33.244 [2024-10-14 14:37:13.797109] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:33.244 [2024-10-14 14:37:13.797113] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01480) on tqpair=0x1ba1620 00:23:33.244 [2024-10-14 14:37:13.797118] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:23:33.244 [2024-10-14 14:37:13.797123] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:23:33.244 [2024-10-14 14:37:13.797131] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:33.244 [2024-10-14 14:37:13.797236] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:23:33.244 [2024-10-14 14:37:13.797241] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 
00:23:33.244 [2024-10-14 14:37:13.797250] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:33.244 [2024-10-14 14:37:13.797254] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:33.244 [2024-10-14 14:37:13.797257] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ba1620) 00:23:33.244 [2024-10-14 14:37:13.797264] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.244 [2024-10-14 14:37:13.797275] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01480, cid 0, qid 0 00:23:33.244 [2024-10-14 14:37:13.797468] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:33.244 [2024-10-14 14:37:13.797474] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:33.244 [2024-10-14 14:37:13.797478] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:33.244 [2024-10-14 14:37:13.797482] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01480) on tqpair=0x1ba1620 00:23:33.244 [2024-10-14 14:37:13.797486] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:33.244 [2024-10-14 14:37:13.797495] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:33.244 [2024-10-14 14:37:13.797499] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:33.244 [2024-10-14 14:37:13.797503] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ba1620) 00:23:33.245 [2024-10-14 14:37:13.797509] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.245 [2024-10-14 14:37:13.797519] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01480, cid 0, qid 0 00:23:33.245 [2024-10-14 
14:37:13.797682] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:33.245 [2024-10-14 14:37:13.797689] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:33.245 [2024-10-14 14:37:13.797692] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:33.245 [2024-10-14 14:37:13.797696] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01480) on tqpair=0x1ba1620 00:23:33.245 [2024-10-14 14:37:13.797701] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:33.245 [2024-10-14 14:37:13.797705] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:23:33.245 [2024-10-14 14:37:13.797713] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:23:33.245 [2024-10-14 14:37:13.797721] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:23:33.245 [2024-10-14 14:37:13.797732] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:33.245 [2024-10-14 14:37:13.797736] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ba1620) 00:23:33.245 [2024-10-14 14:37:13.797743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.245 [2024-10-14 14:37:13.797753] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01480, cid 0, qid 0 00:23:33.245 [2024-10-14 14:37:13.797978] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:33.245 [2024-10-14 14:37:13.797985] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 
00:23:33.245 [2024-10-14 14:37:13.797988] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:33.245 [2024-10-14 14:37:13.797992] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ba1620): datao=0, datal=4096, cccid=0 00:23:33.245 [2024-10-14 14:37:13.797997] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c01480) on tqpair(0x1ba1620): expected_datao=0, payload_size=4096 00:23:33.245 [2024-10-14 14:37:13.798002] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:33.245 [2024-10-14 14:37:13.798018] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:33.245 [2024-10-14 14:37:13.798022] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:33.245 [2024-10-14 14:37:13.798181] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:33.245 [2024-10-14 14:37:13.798188] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:33.245 [2024-10-14 14:37:13.798191] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:33.245 [2024-10-14 14:37:13.798195] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01480) on tqpair=0x1ba1620 00:23:33.245 [2024-10-14 14:37:13.798203] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:23:33.245 [2024-10-14 14:37:13.798208] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:23:33.245 [2024-10-14 14:37:13.798212] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:23:33.245 [2024-10-14 14:37:13.798217] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:23:33.245 [2024-10-14 14:37:13.798222] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and 
write: 1 00:23:33.245 [2024-10-14 14:37:13.798227] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:23:33.245 [2024-10-14 14:37:13.798235] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:23:33.245 [2024-10-14 14:37:13.798242] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:33.245 [2024-10-14 14:37:13.798246] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:33.245 [2024-10-14 14:37:13.798249] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ba1620) 00:23:33.245 [2024-10-14 14:37:13.798257] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:33.245 [2024-10-14 14:37:13.798268] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01480, cid 0, qid 0 00:23:33.245 [2024-10-14 14:37:13.798476] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:33.245 [2024-10-14 14:37:13.798482] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:33.245 [2024-10-14 14:37:13.798486] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:33.245 [2024-10-14 14:37:13.798490] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01480) on tqpair=0x1ba1620 00:23:33.245 [2024-10-14 14:37:13.798499] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:33.245 [2024-10-14 14:37:13.798505] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:33.245 [2024-10-14 14:37:13.798509] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ba1620) 00:23:33.245 [2024-10-14 14:37:13.798515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:33.245 [2024-10-14 14:37:13.798522] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:33.245 [2024-10-14 14:37:13.798525] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:33.245 [2024-10-14 14:37:13.798529] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1ba1620) 00:23:33.245 [2024-10-14 14:37:13.798535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:33.245 [2024-10-14 14:37:13.798541] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:33.245 [2024-10-14 14:37:13.798545] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:33.245 [2024-10-14 14:37:13.798548] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1ba1620) 00:23:33.245 [2024-10-14 14:37:13.798554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:33.245 [2024-10-14 14:37:13.798560] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:33.245 [2024-10-14 14:37:13.798564] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:33.245 [2024-10-14 14:37:13.798567] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ba1620) 00:23:33.245 [2024-10-14 14:37:13.798573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:33.245 [2024-10-14 14:37:13.798578] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:23:33.245 [2024-10-14 14:37:13.798588] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 
30000 ms) 00:23:33.245 [2024-10-14 14:37:13.798595] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:33.245 [2024-10-14 14:37:13.798599] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ba1620) 00:23:33.245 [2024-10-14 14:37:13.798605] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.245 [2024-10-14 14:37:13.798617] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01480, cid 0, qid 0 00:23:33.245 [2024-10-14 14:37:13.798622] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01600, cid 1, qid 0 00:23:33.245 [2024-10-14 14:37:13.798627] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01780, cid 2, qid 0 00:23:33.245 [2024-10-14 14:37:13.798632] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01900, cid 3, qid 0 00:23:33.245 [2024-10-14 14:37:13.798637] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01a80, cid 4, qid 0 00:23:33.245 [2024-10-14 14:37:13.798848] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:33.245 [2024-10-14 14:37:13.798855] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:33.245 [2024-10-14 14:37:13.798858] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:33.245 [2024-10-14 14:37:13.798862] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01a80) on tqpair=0x1ba1620 00:23:33.245 [2024-10-14 14:37:13.798867] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:23:33.245 [2024-10-14 14:37:13.798873] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:23:33.245 [2024-10-14 14:37:13.798883] nvme_tcp.c: 
977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:33.245 [2024-10-14 14:37:13.798887] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ba1620) 00:23:33.245 [2024-10-14 14:37:13.798896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.245 [2024-10-14 14:37:13.798906] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01a80, cid 4, qid 0 00:23:33.245 [2024-10-14 14:37:13.799101] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:33.245 [2024-10-14 14:37:13.799109] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:33.245 [2024-10-14 14:37:13.799112] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:33.245 [2024-10-14 14:37:13.799116] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ba1620): datao=0, datal=4096, cccid=4 00:23:33.245 [2024-10-14 14:37:13.799120] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c01a80) on tqpair(0x1ba1620): expected_datao=0, payload_size=4096 00:23:33.245 [2024-10-14 14:37:13.799125] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:33.245 [2024-10-14 14:37:13.799138] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:33.245 [2024-10-14 14:37:13.799142] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:33.245 [2024-10-14 14:37:13.840240] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:33.245 [2024-10-14 14:37:13.840250] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:33.245 [2024-10-14 14:37:13.840254] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:33.245 [2024-10-14 14:37:13.840258] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01a80) on tqpair=0x1ba1620 00:23:33.245 [2024-10-14 14:37:13.840271] 
nvme_ctrlr.c:4189:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:23:33.245 [2024-10-14 14:37:13.840296] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:33.245 [2024-10-14 14:37:13.840300] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ba1620) 00:23:33.245 [2024-10-14 14:37:13.840307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.245 [2024-10-14 14:37:13.840315] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:33.245 [2024-10-14 14:37:13.840318] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:33.245 [2024-10-14 14:37:13.840322] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ba1620) 00:23:33.245 [2024-10-14 14:37:13.840328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:33.245 [2024-10-14 14:37:13.840341] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01a80, cid 4, qid 0 00:23:33.245 [2024-10-14 14:37:13.840346] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01c00, cid 5, qid 0 00:23:33.245 [2024-10-14 14:37:13.840630] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:33.245 [2024-10-14 14:37:13.840636] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:33.245 [2024-10-14 14:37:13.840640] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:33.245 [2024-10-14 14:37:13.840643] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ba1620): datao=0, datal=1024, cccid=4 00:23:33.246 [2024-10-14 14:37:13.840648] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c01a80) on tqpair(0x1ba1620): expected_datao=0, 
payload_size=1024 00:23:33.246 [2024-10-14 14:37:13.840652] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:33.246 [2024-10-14 14:37:13.840659] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:33.246 [2024-10-14 14:37:13.840662] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:33.246 [2024-10-14 14:37:13.840668] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:33.246 [2024-10-14 14:37:13.840674] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:33.246 [2024-10-14 14:37:13.840677] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:33.246 [2024-10-14 14:37:13.840681] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01c00) on tqpair=0x1ba1620 00:23:33.246 [2024-10-14 14:37:13.885070] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:33.246 [2024-10-14 14:37:13.885079] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:33.246 [2024-10-14 14:37:13.885083] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:33.246 [2024-10-14 14:37:13.885087] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01a80) on tqpair=0x1ba1620 00:23:33.246 [2024-10-14 14:37:13.885101] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:33.246 [2024-10-14 14:37:13.885105] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ba1620) 00:23:33.246 [2024-10-14 14:37:13.885112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.246 [2024-10-14 14:37:13.885127] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01a80, cid 4, qid 0 00:23:33.246 [2024-10-14 14:37:13.885311] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:33.246 [2024-10-14 14:37:13.885317] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:33.246 [2024-10-14 14:37:13.885321] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:33.246 [2024-10-14 14:37:13.885324] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ba1620): datao=0, datal=3072, cccid=4 00:23:33.246 [2024-10-14 14:37:13.885329] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c01a80) on tqpair(0x1ba1620): expected_datao=0, payload_size=3072 00:23:33.246 [2024-10-14 14:37:13.885333] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:33.246 [2024-10-14 14:37:13.885350] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:33.246 [2024-10-14 14:37:13.885355] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:33.246 [2024-10-14 14:37:13.885514] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:33.246 [2024-10-14 14:37:13.885520] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:33.246 [2024-10-14 14:37:13.885524] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:33.246 [2024-10-14 14:37:13.885528] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01a80) on tqpair=0x1ba1620 00:23:33.246 [2024-10-14 14:37:13.885536] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:33.246 [2024-10-14 14:37:13.885540] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ba1620) 00:23:33.246 [2024-10-14 14:37:13.885546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.246 [2024-10-14 14:37:13.885560] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01a80, cid 4, qid 0 00:23:33.246 [2024-10-14 14:37:13.885810] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:33.246 [2024-10-14 
14:37:13.885816] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:33.246 [2024-10-14 14:37:13.885820] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:33.246 [2024-10-14 14:37:13.885823] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ba1620): datao=0, datal=8, cccid=4 00:23:33.246 [2024-10-14 14:37:13.885828] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c01a80) on tqpair(0x1ba1620): expected_datao=0, payload_size=8 00:23:33.246 [2024-10-14 14:37:13.885832] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:33.246 [2024-10-14 14:37:13.885839] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:33.246 [2024-10-14 14:37:13.885842] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:33.246 [2024-10-14 14:37:13.927221] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:33.246 [2024-10-14 14:37:13.927231] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:33.246 [2024-10-14 14:37:13.927235] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:33.246 [2024-10-14 14:37:13.927239] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01a80) on tqpair=0x1ba1620 00:23:33.246 ===================================================== 00:23:33.246 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:33.246 ===================================================== 00:23:33.246 Controller Capabilities/Features 00:23:33.246 ================================ 00:23:33.246 Vendor ID: 0000 00:23:33.246 Subsystem Vendor ID: 0000 00:23:33.246 Serial Number: .................... 00:23:33.246 Model Number: ........................................ 
00:23:33.246 Firmware Version: 25.01 00:23:33.246 Recommended Arb Burst: 0 00:23:33.246 IEEE OUI Identifier: 00 00 00 00:23:33.246 Multi-path I/O 00:23:33.246 May have multiple subsystem ports: No 00:23:33.246 May have multiple controllers: No 00:23:33.246 Associated with SR-IOV VF: No 00:23:33.246 Max Data Transfer Size: 131072 00:23:33.246 Max Number of Namespaces: 0 00:23:33.246 Max Number of I/O Queues: 1024 00:23:33.246 NVMe Specification Version (VS): 1.3 00:23:33.246 NVMe Specification Version (Identify): 1.3 00:23:33.246 Maximum Queue Entries: 128 00:23:33.246 Contiguous Queues Required: Yes 00:23:33.246 Arbitration Mechanisms Supported 00:23:33.246 Weighted Round Robin: Not Supported 00:23:33.246 Vendor Specific: Not Supported 00:23:33.246 Reset Timeout: 15000 ms 00:23:33.246 Doorbell Stride: 4 bytes 00:23:33.246 NVM Subsystem Reset: Not Supported 00:23:33.246 Command Sets Supported 00:23:33.246 NVM Command Set: Supported 00:23:33.246 Boot Partition: Not Supported 00:23:33.246 Memory Page Size Minimum: 4096 bytes 00:23:33.246 Memory Page Size Maximum: 4096 bytes 00:23:33.246 Persistent Memory Region: Not Supported 00:23:33.246 Optional Asynchronous Events Supported 00:23:33.246 Namespace Attribute Notices: Not Supported 00:23:33.246 Firmware Activation Notices: Not Supported 00:23:33.246 ANA Change Notices: Not Supported 00:23:33.246 PLE Aggregate Log Change Notices: Not Supported 00:23:33.246 LBA Status Info Alert Notices: Not Supported 00:23:33.246 EGE Aggregate Log Change Notices: Not Supported 00:23:33.246 Normal NVM Subsystem Shutdown event: Not Supported 00:23:33.246 Zone Descriptor Change Notices: Not Supported 00:23:33.246 Discovery Log Change Notices: Supported 00:23:33.246 Controller Attributes 00:23:33.246 128-bit Host Identifier: Not Supported 00:23:33.246 Non-Operational Permissive Mode: Not Supported 00:23:33.246 NVM Sets: Not Supported 00:23:33.246 Read Recovery Levels: Not Supported 00:23:33.246 Endurance Groups: Not Supported 00:23:33.246 
Predictable Latency Mode: Not Supported 00:23:33.246 Traffic Based Keep ALive: Not Supported 00:23:33.246 Namespace Granularity: Not Supported 00:23:33.246 SQ Associations: Not Supported 00:23:33.246 UUID List: Not Supported 00:23:33.246 Multi-Domain Subsystem: Not Supported 00:23:33.246 Fixed Capacity Management: Not Supported 00:23:33.246 Variable Capacity Management: Not Supported 00:23:33.246 Delete Endurance Group: Not Supported 00:23:33.246 Delete NVM Set: Not Supported 00:23:33.246 Extended LBA Formats Supported: Not Supported 00:23:33.246 Flexible Data Placement Supported: Not Supported 00:23:33.246 00:23:33.246 Controller Memory Buffer Support 00:23:33.246 ================================ 00:23:33.246 Supported: No 00:23:33.246 00:23:33.246 Persistent Memory Region Support 00:23:33.246 ================================ 00:23:33.246 Supported: No 00:23:33.246 00:23:33.246 Admin Command Set Attributes 00:23:33.246 ============================ 00:23:33.246 Security Send/Receive: Not Supported 00:23:33.246 Format NVM: Not Supported 00:23:33.246 Firmware Activate/Download: Not Supported 00:23:33.246 Namespace Management: Not Supported 00:23:33.246 Device Self-Test: Not Supported 00:23:33.246 Directives: Not Supported 00:23:33.246 NVMe-MI: Not Supported 00:23:33.246 Virtualization Management: Not Supported 00:23:33.246 Doorbell Buffer Config: Not Supported 00:23:33.246 Get LBA Status Capability: Not Supported 00:23:33.246 Command & Feature Lockdown Capability: Not Supported 00:23:33.246 Abort Command Limit: 1 00:23:33.246 Async Event Request Limit: 4 00:23:33.246 Number of Firmware Slots: N/A 00:23:33.246 Firmware Slot 1 Read-Only: N/A 00:23:33.246 Firmware Activation Without Reset: N/A 00:23:33.246 Multiple Update Detection Support: N/A 00:23:33.246 Firmware Update Granularity: No Information Provided 00:23:33.246 Per-Namespace SMART Log: No 00:23:33.246 Asymmetric Namespace Access Log Page: Not Supported 00:23:33.246 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:23:33.246 Command Effects Log Page: Not Supported 00:23:33.246 Get Log Page Extended Data: Supported 00:23:33.246 Telemetry Log Pages: Not Supported 00:23:33.246 Persistent Event Log Pages: Not Supported 00:23:33.246 Supported Log Pages Log Page: May Support 00:23:33.246 Commands Supported & Effects Log Page: Not Supported 00:23:33.246 Feature Identifiers & Effects Log Page: May Support 00:23:33.246 NVMe-MI Commands & Effects Log Page: May Support 00:23:33.246 Data Area 4 for Telemetry Log: Not Supported 00:23:33.246 Error Log Page Entries Supported: 128 00:23:33.246 Keep Alive: Not Supported 00:23:33.246 00:23:33.246 NVM Command Set Attributes 00:23:33.246 ========================== 00:23:33.246 Submission Queue Entry Size 00:23:33.246 Max: 1 00:23:33.246 Min: 1 00:23:33.246 Completion Queue Entry Size 00:23:33.246 Max: 1 00:23:33.246 Min: 1 00:23:33.246 Number of Namespaces: 0 00:23:33.246 Compare Command: Not Supported 00:23:33.246 Write Uncorrectable Command: Not Supported 00:23:33.246 Dataset Management Command: Not Supported 00:23:33.246 Write Zeroes Command: Not Supported 00:23:33.246 Set Features Save Field: Not Supported 00:23:33.246 Reservations: Not Supported 00:23:33.246 Timestamp: Not Supported 00:23:33.247 Copy: Not Supported 00:23:33.247 Volatile Write Cache: Not Present 00:23:33.247 Atomic Write Unit (Normal): 1 00:23:33.247 Atomic Write Unit (PFail): 1 00:23:33.247 Atomic Compare & Write Unit: 1 00:23:33.247 Fused Compare & Write: Supported 00:23:33.247 Scatter-Gather List 00:23:33.247 SGL Command Set: Supported 00:23:33.247 SGL Keyed: Supported 00:23:33.247 SGL Bit Bucket Descriptor: Not Supported 00:23:33.247 SGL Metadata Pointer: Not Supported 00:23:33.247 Oversized SGL: Not Supported 00:23:33.247 SGL Metadata Address: Not Supported 00:23:33.247 SGL Offset: Supported 00:23:33.247 Transport SGL Data Block: Not Supported 00:23:33.247 Replay Protected Memory Block: Not Supported 00:23:33.247 00:23:33.247 
Firmware Slot Information 00:23:33.247 ========================= 00:23:33.247 Active slot: 0 00:23:33.247 00:23:33.247 00:23:33.247 Error Log 00:23:33.247 ========= 00:23:33.247 00:23:33.247 Active Namespaces 00:23:33.247 ================= 00:23:33.247 Discovery Log Page 00:23:33.247 ================== 00:23:33.247 Generation Counter: 2 00:23:33.247 Number of Records: 2 00:23:33.247 Record Format: 0 00:23:33.247 00:23:33.247 Discovery Log Entry 0 00:23:33.247 ---------------------- 00:23:33.247 Transport Type: 3 (TCP) 00:23:33.247 Address Family: 1 (IPv4) 00:23:33.247 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:33.247 Entry Flags: 00:23:33.247 Duplicate Returned Information: 1 00:23:33.247 Explicit Persistent Connection Support for Discovery: 1 00:23:33.247 Transport Requirements: 00:23:33.247 Secure Channel: Not Required 00:23:33.247 Port ID: 0 (0x0000) 00:23:33.247 Controller ID: 65535 (0xffff) 00:23:33.247 Admin Max SQ Size: 128 00:23:33.247 Transport Service Identifier: 4420 00:23:33.247 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:33.247 Transport Address: 10.0.0.2 00:23:33.247 Discovery Log Entry 1 00:23:33.247 ---------------------- 00:23:33.247 Transport Type: 3 (TCP) 00:23:33.247 Address Family: 1 (IPv4) 00:23:33.247 Subsystem Type: 2 (NVM Subsystem) 00:23:33.247 Entry Flags: 00:23:33.247 Duplicate Returned Information: 0 00:23:33.247 Explicit Persistent Connection Support for Discovery: 0 00:23:33.247 Transport Requirements: 00:23:33.247 Secure Channel: Not Required 00:23:33.247 Port ID: 0 (0x0000) 00:23:33.247 Controller ID: 65535 (0xffff) 00:23:33.247 Admin Max SQ Size: 128 00:23:33.247 Transport Service Identifier: 4420 00:23:33.247 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:23:33.247 Transport Address: 10.0.0.2 [2024-10-14 14:37:13.927327] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:23:33.247 [2024-10-14 14:37:13.927340] 
nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01480) on tqpair=0x1ba1620 00:23:33.247 [2024-10-14 14:37:13.927346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.247 [2024-10-14 14:37:13.927352] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01600) on tqpair=0x1ba1620 00:23:33.247 [2024-10-14 14:37:13.927357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.247 [2024-10-14 14:37:13.927362] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01780) on tqpair=0x1ba1620 00:23:33.247 [2024-10-14 14:37:13.927366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.247 [2024-10-14 14:37:13.927371] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01900) on tqpair=0x1ba1620 00:23:33.247 [2024-10-14 14:37:13.927376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.247 [2024-10-14 14:37:13.927384] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:33.247 [2024-10-14 14:37:13.927388] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:33.247 [2024-10-14 14:37:13.927392] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ba1620) 00:23:33.247 [2024-10-14 14:37:13.927399] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.247 [2024-10-14 14:37:13.927413] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01900, cid 3, qid 0 00:23:33.247 [2024-10-14 14:37:13.927670] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:33.247 [2024-10-14 14:37:13.927677] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:33.247 [2024-10-14 14:37:13.927680] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:33.247 [2024-10-14 14:37:13.927684] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01900) on tqpair=0x1ba1620 00:23:33.247 [2024-10-14 14:37:13.927691] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:33.247 [2024-10-14 14:37:13.927695] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:33.247 [2024-10-14 14:37:13.927698] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ba1620) 00:23:33.247 [2024-10-14 14:37:13.927705] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.247 [2024-10-14 14:37:13.927718] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01900, cid 3, qid 0 00:23:33.247 [2024-10-14 14:37:13.927906] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:33.247 [2024-10-14 14:37:13.927912] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:33.247 [2024-10-14 14:37:13.927916] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:33.247 [2024-10-14 14:37:13.927920] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01900) on tqpair=0x1ba1620 00:23:33.247 [2024-10-14 14:37:13.927925] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:23:33.247 [2024-10-14 14:37:13.927932] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:23:33.247 [2024-10-14 14:37:13.927941] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:33.247 [2024-10-14 14:37:13.927945] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:33.247 [2024-10-14 
14:37:13.927948] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ba1620) 00:23:33.247 [2024-10-14 14:37:13.927955] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.247 [2024-10-14 14:37:13.927965] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01900, cid 3, qid 0 00:23:33.247 [2024-10-14 14:37:13.932070] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:33.247 [2024-10-14 14:37:13.932079] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:33.247 [2024-10-14 14:37:13.932082] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:33.247 [2024-10-14 14:37:13.932086] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01900) on tqpair=0x1ba1620 00:23:33.247 [2024-10-14 14:37:13.932098] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:33.247 [2024-10-14 14:37:13.932102] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:33.247 [2024-10-14 14:37:13.932105] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ba1620) 00:23:33.247 [2024-10-14 14:37:13.932112] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.247 [2024-10-14 14:37:13.932123] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c01900, cid 3, qid 0 00:23:33.247 [2024-10-14 14:37:13.932298] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:33.247 [2024-10-14 14:37:13.932304] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:33.247 [2024-10-14 14:37:13.932308] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:33.247 [2024-10-14 14:37:13.932311] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c01900) on tqpair=0x1ba1620 
00:23:33.247 [2024-10-14 14:37:13.932319] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds 00:23:33.247 00:23:33.247 14:37:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:23:33.511 [2024-10-14 14:37:13.977642] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:23:33.512 [2024-10-14 14:37:13.977712] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3481584 ] 00:23:33.512 [2024-10-14 14:37:14.011623] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:23:33.512 [2024-10-14 14:37:14.011666] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:33.512 [2024-10-14 14:37:14.011671] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:33.512 [2024-10-14 14:37:14.011682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:33.512 [2024-10-14 14:37:14.011690] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:33.512 [2024-10-14 14:37:14.012270] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:23:33.512 [2024-10-14 14:37:14.012299] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1ebe620 0 00:23:33.512 [2024-10-14 14:37:14.023071] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:33.512 [2024-10-14 14:37:14.023083] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:33.512 [2024-10-14 14:37:14.023087] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:33.512 [2024-10-14 14:37:14.023091] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:33.512 [2024-10-14 14:37:14.023117] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:33.512 [2024-10-14 14:37:14.023123] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:33.512 [2024-10-14 14:37:14.023127] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ebe620) 00:23:33.512 [2024-10-14 14:37:14.023142] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:33.512 [2024-10-14 14:37:14.023159] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f1e480, cid 0, qid 0 00:23:33.512 [2024-10-14 14:37:14.031070] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:33.512 [2024-10-14 14:37:14.031080] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:33.512 [2024-10-14 14:37:14.031084] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:33.512 [2024-10-14 14:37:14.031088] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f1e480) on tqpair=0x1ebe620 00:23:33.512 [2024-10-14 14:37:14.031100] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:33.512 [2024-10-14 14:37:14.031106] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:23:33.512 [2024-10-14 14:37:14.031112] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:23:33.512 [2024-10-14 14:37:14.031123] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:33.512 [2024-10-14 14:37:14.031127] 
nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:33.512 [2024-10-14 14:37:14.031131] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ebe620) 00:23:33.512 [2024-10-14 14:37:14.031139] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.512 [2024-10-14 14:37:14.031152] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f1e480, cid 0, qid 0 00:23:33.512 [2024-10-14 14:37:14.031329] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:33.512 [2024-10-14 14:37:14.031335] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:33.512 [2024-10-14 14:37:14.031339] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:33.512 [2024-10-14 14:37:14.031343] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f1e480) on tqpair=0x1ebe620 00:23:33.512 [2024-10-14 14:37:14.031348] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:23:33.512 [2024-10-14 14:37:14.031355] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:23:33.512 [2024-10-14 14:37:14.031362] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:33.512 [2024-10-14 14:37:14.031366] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:33.512 [2024-10-14 14:37:14.031369] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ebe620) 00:23:33.512 [2024-10-14 14:37:14.031376] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.512 [2024-10-14 14:37:14.031386] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f1e480, cid 0, qid 0 00:23:33.512 [2024-10-14 14:37:14.031581] 
nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:33.512 [2024-10-14 14:37:14.031587] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:33.512 [2024-10-14 14:37:14.031591] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:33.512 [2024-10-14 14:37:14.031595] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f1e480) on tqpair=0x1ebe620 00:23:33.512 [2024-10-14 14:37:14.031600] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:23:33.512 [2024-10-14 14:37:14.031607] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:23:33.512 [2024-10-14 14:37:14.031614] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:33.512 [2024-10-14 14:37:14.031618] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:33.512 [2024-10-14 14:37:14.031621] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ebe620) 00:23:33.512 [2024-10-14 14:37:14.031628] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.512 [2024-10-14 14:37:14.031641] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f1e480, cid 0, qid 0 00:23:33.512 [2024-10-14 14:37:14.031846] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:33.512 [2024-10-14 14:37:14.031853] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:33.512 [2024-10-14 14:37:14.031856] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:33.512 [2024-10-14 14:37:14.031860] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f1e480) on tqpair=0x1ebe620 00:23:33.512 [2024-10-14 14:37:14.031865] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting 
state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:33.512 [2024-10-14 14:37:14.031874] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:33.512 [2024-10-14 14:37:14.031878] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:33.512 [2024-10-14 14:37:14.031882] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ebe620) 00:23:33.512 [2024-10-14 14:37:14.031888] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.512 [2024-10-14 14:37:14.031898] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f1e480, cid 0, qid 0 00:23:33.512 [2024-10-14 14:37:14.032116] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:33.512 [2024-10-14 14:37:14.032123] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:33.512 [2024-10-14 14:37:14.032126] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:33.512 [2024-10-14 14:37:14.032130] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f1e480) on tqpair=0x1ebe620 00:23:33.512 [2024-10-14 14:37:14.032134] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:23:33.512 [2024-10-14 14:37:14.032139] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:23:33.512 [2024-10-14 14:37:14.032146] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:33.512 [2024-10-14 14:37:14.032252] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:23:33.512 [2024-10-14 14:37:14.032256] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to 
enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:33.512 [2024-10-14 14:37:14.032263] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:33.512 [2024-10-14 14:37:14.032267] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:33.512 [2024-10-14 14:37:14.032271] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ebe620) 00:23:33.512 [2024-10-14 14:37:14.032277] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.512 [2024-10-14 14:37:14.032288] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f1e480, cid 0, qid 0 00:23:33.512 [2024-10-14 14:37:14.032446] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:33.512 [2024-10-14 14:37:14.032452] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:33.512 [2024-10-14 14:37:14.032456] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:33.512 [2024-10-14 14:37:14.032459] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f1e480) on tqpair=0x1ebe620 00:23:33.512 [2024-10-14 14:37:14.032464] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:33.512 [2024-10-14 14:37:14.032473] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:33.512 [2024-10-14 14:37:14.032477] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:33.512 [2024-10-14 14:37:14.032481] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ebe620) 00:23:33.512 [2024-10-14 14:37:14.032487] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.512 [2024-10-14 14:37:14.032499] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f1e480, 
cid 0, qid 0 00:23:33.512 [2024-10-14 14:37:14.032686] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:33.512 [2024-10-14 14:37:14.032692] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:33.512 [2024-10-14 14:37:14.032695] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:33.512 [2024-10-14 14:37:14.032699] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f1e480) on tqpair=0x1ebe620 00:23:33.512 [2024-10-14 14:37:14.032703] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:33.512 [2024-10-14 14:37:14.032708] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:23:33.512 [2024-10-14 14:37:14.032716] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:23:33.513 [2024-10-14 14:37:14.032724] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:23:33.513 [2024-10-14 14:37:14.032732] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:33.513 [2024-10-14 14:37:14.032736] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ebe620) 00:23:33.513 [2024-10-14 14:37:14.032743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.513 [2024-10-14 14:37:14.032753] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f1e480, cid 0, qid 0 00:23:33.513 [2024-10-14 14:37:14.032959] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:33.513 [2024-10-14 14:37:14.032966] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 
00:23:33.513 [2024-10-14 14:37:14.032969] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:33.513 [2024-10-14 14:37:14.032973] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ebe620): datao=0, datal=4096, cccid=0 00:23:33.513 [2024-10-14 14:37:14.032978] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f1e480) on tqpair(0x1ebe620): expected_datao=0, payload_size=4096 00:23:33.513 [2024-10-14 14:37:14.032982] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:33.513 [2024-10-14 14:37:14.032994] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:33.513 [2024-10-14 14:37:14.032998] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:33.513 [2024-10-14 14:37:14.074226] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:33.513 [2024-10-14 14:37:14.074235] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:33.513 [2024-10-14 14:37:14.074239] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:33.513 [2024-10-14 14:37:14.074243] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f1e480) on tqpair=0x1ebe620 00:23:33.513 [2024-10-14 14:37:14.074250] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:23:33.513 [2024-10-14 14:37:14.074255] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:23:33.513 [2024-10-14 14:37:14.074259] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:23:33.513 [2024-10-14 14:37:14.074263] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:23:33.513 [2024-10-14 14:37:14.074268] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:23:33.513 [2024-10-14 14:37:14.074272] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:23:33.513 [2024-10-14 14:37:14.074280] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:23:33.513 [2024-10-14 14:37:14.074290] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:33.513 [2024-10-14 14:37:14.074294] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:33.513 [2024-10-14 14:37:14.074298] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ebe620) 00:23:33.513 [2024-10-14 14:37:14.074305] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:33.513 [2024-10-14 14:37:14.074317] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f1e480, cid 0, qid 0 00:23:33.513 [2024-10-14 14:37:14.074522] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:33.513 [2024-10-14 14:37:14.074528] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:33.513 [2024-10-14 14:37:14.074532] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:33.513 [2024-10-14 14:37:14.074536] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f1e480) on tqpair=0x1ebe620 00:23:33.513 [2024-10-14 14:37:14.074542] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:33.513 [2024-10-14 14:37:14.074546] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:33.513 [2024-10-14 14:37:14.074550] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ebe620) 00:23:33.513 [2024-10-14 14:37:14.074556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:33.513 [2024-10-14 
14:37:14.074562] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:33.513 [2024-10-14 14:37:14.074566] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:33.513 [2024-10-14 14:37:14.074570] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1ebe620) 00:23:33.513 [2024-10-14 14:37:14.074576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:33.513 [2024-10-14 14:37:14.074582] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:33.513 [2024-10-14 14:37:14.074585] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:33.513 [2024-10-14 14:37:14.074589] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1ebe620) 00:23:33.513 [2024-10-14 14:37:14.074595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:33.513 [2024-10-14 14:37:14.074601] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:33.513 [2024-10-14 14:37:14.074605] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:33.513 [2024-10-14 14:37:14.074608] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ebe620) 00:23:33.513 [2024-10-14 14:37:14.074614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:33.513 [2024-10-14 14:37:14.074619] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:33.513 [2024-10-14 14:37:14.074629] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:33.513 [2024-10-14 14:37:14.074635] nvme_tcp.c: 
977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:33.513 [2024-10-14 14:37:14.074639] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ebe620) 00:23:33.513 [2024-10-14 14:37:14.074646] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.513 [2024-10-14 14:37:14.074658] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f1e480, cid 0, qid 0 00:23:33.513 [2024-10-14 14:37:14.074663] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f1e600, cid 1, qid 0 00:23:33.513 [2024-10-14 14:37:14.074668] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f1e780, cid 2, qid 0 00:23:33.513 [2024-10-14 14:37:14.074673] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f1e900, cid 3, qid 0 00:23:33.513 [2024-10-14 14:37:14.074680] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f1ea80, cid 4, qid 0 00:23:33.513 [2024-10-14 14:37:14.074894] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:33.513 [2024-10-14 14:37:14.074900] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:33.513 [2024-10-14 14:37:14.074904] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:33.513 [2024-10-14 14:37:14.074908] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f1ea80) on tqpair=0x1ebe620 00:23:33.513 [2024-10-14 14:37:14.074913] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:23:33.513 [2024-10-14 14:37:14.074918] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:23:33.513 [2024-10-14 14:37:14.074928] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to set number of queues (timeout 30000 ms) 00:23:33.513 [2024-10-14 14:37:14.074936] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:23:33.513 [2024-10-14 14:37:14.074943] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:33.513 [2024-10-14 14:37:14.074947] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:33.513 [2024-10-14 14:37:14.074950] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ebe620) 00:23:33.513 [2024-10-14 14:37:14.074957] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:33.513 [2024-10-14 14:37:14.074967] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f1ea80, cid 4, qid 0 00:23:33.513 [2024-10-14 14:37:14.079069] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:33.513 [2024-10-14 14:37:14.079077] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:33.513 [2024-10-14 14:37:14.079081] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:33.513 [2024-10-14 14:37:14.079085] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f1ea80) on tqpair=0x1ebe620 00:23:33.513 [2024-10-14 14:37:14.079150] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:23:33.513 [2024-10-14 14:37:14.079159] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:23:33.513 [2024-10-14 14:37:14.079167] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:33.513 [2024-10-14 14:37:14.079171] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ebe620) 00:23:33.513 
[2024-10-14 14:37:14.079177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.513 [2024-10-14 14:37:14.079189] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f1ea80, cid 4, qid 0 00:23:33.513 [2024-10-14 14:37:14.079357] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:33.513 [2024-10-14 14:37:14.079364] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:33.513 [2024-10-14 14:37:14.079367] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:33.513 [2024-10-14 14:37:14.079371] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ebe620): datao=0, datal=4096, cccid=4 00:23:33.513 [2024-10-14 14:37:14.079376] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f1ea80) on tqpair(0x1ebe620): expected_datao=0, payload_size=4096 00:23:33.513 [2024-10-14 14:37:14.079380] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:33.513 [2024-10-14 14:37:14.079387] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:33.513 [2024-10-14 14:37:14.079391] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:33.513 [2024-10-14 14:37:14.079585] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:33.513 [2024-10-14 14:37:14.079596] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:33.513 [2024-10-14 14:37:14.079599] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:33.513 [2024-10-14 14:37:14.079603] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f1ea80) on tqpair=0x1ebe620 00:23:33.513 [2024-10-14 14:37:14.079612] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:23:33.513 [2024-10-14 14:37:14.079625] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:23:33.513 [2024-10-14 14:37:14.079634] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:23:33.513 [2024-10-14 14:37:14.079641] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:33.513 [2024-10-14 14:37:14.079645] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ebe620) 00:23:33.513 [2024-10-14 14:37:14.079651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.514 [2024-10-14 14:37:14.079662] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f1ea80, cid 4, qid 0 00:23:33.514 [2024-10-14 14:37:14.079880] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:33.514 [2024-10-14 14:37:14.079887] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:33.514 [2024-10-14 14:37:14.079890] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:33.514 [2024-10-14 14:37:14.079894] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ebe620): datao=0, datal=4096, cccid=4 00:23:33.514 [2024-10-14 14:37:14.079898] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f1ea80) on tqpair(0x1ebe620): expected_datao=0, payload_size=4096 00:23:33.514 [2024-10-14 14:37:14.079903] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:33.514 [2024-10-14 14:37:14.079922] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:33.514 [2024-10-14 14:37:14.079925] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:33.514 [2024-10-14 14:37:14.080071] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:33.514 [2024-10-14 14:37:14.080078] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:23:33.514 [2024-10-14 14:37:14.080081] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:33.514 [2024-10-14 14:37:14.080085] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f1ea80) on tqpair=0x1ebe620 00:23:33.514 [2024-10-14 14:37:14.080097] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:23:33.514 [2024-10-14 14:37:14.080106] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:23:33.514 [2024-10-14 14:37:14.080113] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:33.514 [2024-10-14 14:37:14.080117] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ebe620) 00:23:33.514 [2024-10-14 14:37:14.080124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.514 [2024-10-14 14:37:14.080135] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f1ea80, cid 4, qid 0 00:23:33.514 [2024-10-14 14:37:14.080352] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:33.514 [2024-10-14 14:37:14.080358] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:33.514 [2024-10-14 14:37:14.080362] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:33.514 [2024-10-14 14:37:14.080365] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ebe620): datao=0, datal=4096, cccid=4 00:23:33.514 [2024-10-14 14:37:14.080370] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f1ea80) on tqpair(0x1ebe620): expected_datao=0, payload_size=4096 00:23:33.514 [2024-10-14 14:37:14.080376] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:23:33.514 [2024-10-14 14:37:14.080411] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:33.514 [2024-10-14 14:37:14.080415] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:33.514 [2024-10-14 14:37:14.080541] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:33.514 [2024-10-14 14:37:14.080547] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:33.514 [2024-10-14 14:37:14.080551] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:33.514 [2024-10-14 14:37:14.080555] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f1ea80) on tqpair=0x1ebe620 00:23:33.514 [2024-10-14 14:37:14.080562] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:23:33.514 [2024-10-14 14:37:14.080570] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:23:33.514 [2024-10-14 14:37:14.080578] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:23:33.514 [2024-10-14 14:37:14.080584] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:23:33.514 [2024-10-14 14:37:14.080589] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:23:33.514 [2024-10-14 14:37:14.080595] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:23:33.514 [2024-10-14 14:37:14.080600] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:23:33.514 [2024-10-14 14:37:14.080604] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:23:33.514 [2024-10-14 14:37:14.080610] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:23:33.514 [2024-10-14 14:37:14.080622] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:33.514 [2024-10-14 14:37:14.080626] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ebe620) 00:23:33.514 [2024-10-14 14:37:14.080633] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.514 [2024-10-14 14:37:14.080640] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:33.514 [2024-10-14 14:37:14.080643] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:33.514 [2024-10-14 14:37:14.080647] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ebe620) 00:23:33.514 [2024-10-14 14:37:14.080653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:33.514 [2024-10-14 14:37:14.080665] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f1ea80, cid 4, qid 0 00:23:33.514 [2024-10-14 14:37:14.080670] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f1ec00, cid 5, qid 0 00:23:33.514 [2024-10-14 14:37:14.080873] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:33.514 [2024-10-14 14:37:14.080879] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:33.514 [2024-10-14 14:37:14.080882] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:33.514 [2024-10-14 14:37:14.080886] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f1ea80) on tqpair=0x1ebe620 00:23:33.514 [2024-10-14 
14:37:14.080893] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:33.514 [2024-10-14 14:37:14.080899] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:33.514 [2024-10-14 14:37:14.080902] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:33.514 [2024-10-14 14:37:14.080906] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f1ec00) on tqpair=0x1ebe620 00:23:33.514 [2024-10-14 14:37:14.080916] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:33.514 [2024-10-14 14:37:14.080924] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ebe620) 00:23:33.514 [2024-10-14 14:37:14.080930] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.514 [2024-10-14 14:37:14.080940] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f1ec00, cid 5, qid 0 00:23:33.514 [2024-10-14 14:37:14.081118] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:33.514 [2024-10-14 14:37:14.081125] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:33.514 [2024-10-14 14:37:14.081128] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:33.514 [2024-10-14 14:37:14.081132] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f1ec00) on tqpair=0x1ebe620 00:23:33.514 [2024-10-14 14:37:14.081141] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:33.514 [2024-10-14 14:37:14.081145] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ebe620) 00:23:33.514 [2024-10-14 14:37:14.081151] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.514 [2024-10-14 14:37:14.081161] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x1f1ec00, cid 5, qid 0 00:23:33.514 [2024-10-14 14:37:14.081397] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:33.514 [2024-10-14 14:37:14.081404] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:33.514 [2024-10-14 14:37:14.081407] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:33.514 [2024-10-14 14:37:14.081411] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f1ec00) on tqpair=0x1ebe620 00:23:33.514 [2024-10-14 14:37:14.081420] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:33.514 [2024-10-14 14:37:14.081424] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ebe620) 00:23:33.514 [2024-10-14 14:37:14.081430] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.514 [2024-10-14 14:37:14.081440] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f1ec00, cid 5, qid 0 00:23:33.514 [2024-10-14 14:37:14.081645] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:33.514 [2024-10-14 14:37:14.081651] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:33.514 [2024-10-14 14:37:14.081654] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:33.514 [2024-10-14 14:37:14.081658] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f1ec00) on tqpair=0x1ebe620 00:23:33.514 [2024-10-14 14:37:14.081672] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:33.514 [2024-10-14 14:37:14.081676] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ebe620) 00:23:33.514 [2024-10-14 14:37:14.081682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:33.514 [2024-10-14 14:37:14.081690] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:33.514 [2024-10-14 14:37:14.081693] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ebe620) 00:23:33.514 [2024-10-14 14:37:14.081699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.514 [2024-10-14 14:37:14.081707] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:33.514 [2024-10-14 14:37:14.081710] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1ebe620) 00:23:33.514 [2024-10-14 14:37:14.081716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.514 [2024-10-14 14:37:14.081725] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:33.514 [2024-10-14 14:37:14.081731] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1ebe620) 00:23:33.514 [2024-10-14 14:37:14.081737] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.515 [2024-10-14 14:37:14.081749] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f1ec00, cid 5, qid 0 00:23:33.515 [2024-10-14 14:37:14.081754] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f1ea80, cid 4, qid 0 00:23:33.515 [2024-10-14 14:37:14.081759] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f1ed80, cid 6, qid 0 00:23:33.515 [2024-10-14 14:37:14.081764] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f1ef00, cid 7, qid 0 00:23:33.515 [2024-10-14 14:37:14.081966] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 7 00:23:33.515 [2024-10-14 14:37:14.081972] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:33.515 [2024-10-14 14:37:14.081976] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:33.515 [2024-10-14 14:37:14.081980] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ebe620): datao=0, datal=8192, cccid=5 00:23:33.515 [2024-10-14 14:37:14.081984] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f1ec00) on tqpair(0x1ebe620): expected_datao=0, payload_size=8192 00:23:33.515 [2024-10-14 14:37:14.081989] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:33.515 [2024-10-14 14:37:14.082094] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:33.515 [2024-10-14 14:37:14.082099] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:33.515 [2024-10-14 14:37:14.082105] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:33.515 [2024-10-14 14:37:14.082111] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:33.515 [2024-10-14 14:37:14.082114] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:33.515 [2024-10-14 14:37:14.082118] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ebe620): datao=0, datal=512, cccid=4 00:23:33.515 [2024-10-14 14:37:14.082122] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f1ea80) on tqpair(0x1ebe620): expected_datao=0, payload_size=512 00:23:33.515 [2024-10-14 14:37:14.082126] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:33.515 [2024-10-14 14:37:14.082133] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:33.515 [2024-10-14 14:37:14.082136] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:33.515 [2024-10-14 14:37:14.082142] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:33.515 [2024-10-14 14:37:14.082148] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:33.515 [2024-10-14 14:37:14.082151] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:33.515 [2024-10-14 14:37:14.082155] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ebe620): datao=0, datal=512, cccid=6 00:23:33.515 [2024-10-14 14:37:14.082159] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f1ed80) on tqpair(0x1ebe620): expected_datao=0, payload_size=512 00:23:33.515 [2024-10-14 14:37:14.082163] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:33.515 [2024-10-14 14:37:14.082170] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:33.515 [2024-10-14 14:37:14.082173] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:33.515 [2024-10-14 14:37:14.082179] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:33.515 [2024-10-14 14:37:14.082184] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:33.515 [2024-10-14 14:37:14.082188] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:33.515 [2024-10-14 14:37:14.082191] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ebe620): datao=0, datal=4096, cccid=7 00:23:33.515 [2024-10-14 14:37:14.082196] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f1ef00) on tqpair(0x1ebe620): expected_datao=0, payload_size=4096 00:23:33.515 [2024-10-14 14:37:14.082200] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:33.515 [2024-10-14 14:37:14.082217] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:33.515 [2024-10-14 14:37:14.082221] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:33.515 [2024-10-14 14:37:14.082407] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:33.515 [2024-10-14 14:37:14.082414] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type 
=5 00:23:33.515 [2024-10-14 14:37:14.082417] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:33.515 [2024-10-14 14:37:14.082421] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f1ec00) on tqpair=0x1ebe620 00:23:33.515 [2024-10-14 14:37:14.082433] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:33.515 [2024-10-14 14:37:14.082439] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:33.515 [2024-10-14 14:37:14.082442] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:33.515 [2024-10-14 14:37:14.082446] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f1ea80) on tqpair=0x1ebe620 00:23:33.515 [2024-10-14 14:37:14.082456] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:33.515 [2024-10-14 14:37:14.082462] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:33.515 [2024-10-14 14:37:14.082465] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:33.515 [2024-10-14 14:37:14.082469] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f1ed80) on tqpair=0x1ebe620 00:23:33.515 [2024-10-14 14:37:14.082476] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:33.515 [2024-10-14 14:37:14.082482] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:33.515 [2024-10-14 14:37:14.082485] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:33.515 [2024-10-14 14:37:14.082489] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f1ef00) on tqpair=0x1ebe620 00:23:33.515 ===================================================== 00:23:33.515 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:33.515 ===================================================== 00:23:33.515 Controller Capabilities/Features 00:23:33.515 ================================ 00:23:33.515 Vendor ID: 8086 00:23:33.515 Subsystem Vendor 
ID: 8086 00:23:33.515 Serial Number: SPDK00000000000001 00:23:33.515 Model Number: SPDK bdev Controller 00:23:33.515 Firmware Version: 25.01 00:23:33.515 Recommended Arb Burst: 6 00:23:33.515 IEEE OUI Identifier: e4 d2 5c 00:23:33.515 Multi-path I/O 00:23:33.515 May have multiple subsystem ports: Yes 00:23:33.515 May have multiple controllers: Yes 00:23:33.515 Associated with SR-IOV VF: No 00:23:33.515 Max Data Transfer Size: 131072 00:23:33.515 Max Number of Namespaces: 32 00:23:33.515 Max Number of I/O Queues: 127 00:23:33.515 NVMe Specification Version (VS): 1.3 00:23:33.515 NVMe Specification Version (Identify): 1.3 00:23:33.515 Maximum Queue Entries: 128 00:23:33.515 Contiguous Queues Required: Yes 00:23:33.515 Arbitration Mechanisms Supported 00:23:33.515 Weighted Round Robin: Not Supported 00:23:33.515 Vendor Specific: Not Supported 00:23:33.515 Reset Timeout: 15000 ms 00:23:33.515 Doorbell Stride: 4 bytes 00:23:33.515 NVM Subsystem Reset: Not Supported 00:23:33.515 Command Sets Supported 00:23:33.515 NVM Command Set: Supported 00:23:33.515 Boot Partition: Not Supported 00:23:33.515 Memory Page Size Minimum: 4096 bytes 00:23:33.515 Memory Page Size Maximum: 4096 bytes 00:23:33.515 Persistent Memory Region: Not Supported 00:23:33.515 Optional Asynchronous Events Supported 00:23:33.515 Namespace Attribute Notices: Supported 00:23:33.515 Firmware Activation Notices: Not Supported 00:23:33.515 ANA Change Notices: Not Supported 00:23:33.515 PLE Aggregate Log Change Notices: Not Supported 00:23:33.515 LBA Status Info Alert Notices: Not Supported 00:23:33.515 EGE Aggregate Log Change Notices: Not Supported 00:23:33.515 Normal NVM Subsystem Shutdown event: Not Supported 00:23:33.515 Zone Descriptor Change Notices: Not Supported 00:23:33.515 Discovery Log Change Notices: Not Supported 00:23:33.515 Controller Attributes 00:23:33.515 128-bit Host Identifier: Supported 00:23:33.515 Non-Operational Permissive Mode: Not Supported 00:23:33.515 NVM Sets: Not Supported 
00:23:33.515 Read Recovery Levels: Not Supported 00:23:33.515 Endurance Groups: Not Supported 00:23:33.515 Predictable Latency Mode: Not Supported 00:23:33.515 Traffic Based Keep ALive: Not Supported 00:23:33.515 Namespace Granularity: Not Supported 00:23:33.515 SQ Associations: Not Supported 00:23:33.515 UUID List: Not Supported 00:23:33.515 Multi-Domain Subsystem: Not Supported 00:23:33.515 Fixed Capacity Management: Not Supported 00:23:33.515 Variable Capacity Management: Not Supported 00:23:33.515 Delete Endurance Group: Not Supported 00:23:33.515 Delete NVM Set: Not Supported 00:23:33.515 Extended LBA Formats Supported: Not Supported 00:23:33.515 Flexible Data Placement Supported: Not Supported 00:23:33.515 00:23:33.515 Controller Memory Buffer Support 00:23:33.515 ================================ 00:23:33.515 Supported: No 00:23:33.515 00:23:33.515 Persistent Memory Region Support 00:23:33.515 ================================ 00:23:33.515 Supported: No 00:23:33.515 00:23:33.515 Admin Command Set Attributes 00:23:33.515 ============================ 00:23:33.516 Security Send/Receive: Not Supported 00:23:33.516 Format NVM: Not Supported 00:23:33.516 Firmware Activate/Download: Not Supported 00:23:33.516 Namespace Management: Not Supported 00:23:33.516 Device Self-Test: Not Supported 00:23:33.516 Directives: Not Supported 00:23:33.516 NVMe-MI: Not Supported 00:23:33.516 Virtualization Management: Not Supported 00:23:33.516 Doorbell Buffer Config: Not Supported 00:23:33.516 Get LBA Status Capability: Not Supported 00:23:33.516 Command & Feature Lockdown Capability: Not Supported 00:23:33.516 Abort Command Limit: 4 00:23:33.516 Async Event Request Limit: 4 00:23:33.516 Number of Firmware Slots: N/A 00:23:33.516 Firmware Slot 1 Read-Only: N/A 00:23:33.516 Firmware Activation Without Reset: N/A 00:23:33.516 Multiple Update Detection Support: N/A 00:23:33.516 Firmware Update Granularity: No Information Provided 00:23:33.516 Per-Namespace SMART Log: No 00:23:33.516 
Asymmetric Namespace Access Log Page: Not Supported 00:23:33.516 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:23:33.516 Command Effects Log Page: Supported 00:23:33.516 Get Log Page Extended Data: Supported 00:23:33.516 Telemetry Log Pages: Not Supported 00:23:33.516 Persistent Event Log Pages: Not Supported 00:23:33.516 Supported Log Pages Log Page: May Support 00:23:33.516 Commands Supported & Effects Log Page: Not Supported 00:23:33.516 Feature Identifiers & Effects Log Page:May Support 00:23:33.516 NVMe-MI Commands & Effects Log Page: May Support 00:23:33.516 Data Area 4 for Telemetry Log: Not Supported 00:23:33.516 Error Log Page Entries Supported: 128 00:23:33.516 Keep Alive: Supported 00:23:33.516 Keep Alive Granularity: 10000 ms 00:23:33.516 00:23:33.516 NVM Command Set Attributes 00:23:33.516 ========================== 00:23:33.516 Submission Queue Entry Size 00:23:33.516 Max: 64 00:23:33.516 Min: 64 00:23:33.516 Completion Queue Entry Size 00:23:33.516 Max: 16 00:23:33.516 Min: 16 00:23:33.516 Number of Namespaces: 32 00:23:33.516 Compare Command: Supported 00:23:33.516 Write Uncorrectable Command: Not Supported 00:23:33.516 Dataset Management Command: Supported 00:23:33.516 Write Zeroes Command: Supported 00:23:33.516 Set Features Save Field: Not Supported 00:23:33.516 Reservations: Supported 00:23:33.516 Timestamp: Not Supported 00:23:33.516 Copy: Supported 00:23:33.516 Volatile Write Cache: Present 00:23:33.516 Atomic Write Unit (Normal): 1 00:23:33.516 Atomic Write Unit (PFail): 1 00:23:33.516 Atomic Compare & Write Unit: 1 00:23:33.516 Fused Compare & Write: Supported 00:23:33.516 Scatter-Gather List 00:23:33.516 SGL Command Set: Supported 00:23:33.516 SGL Keyed: Supported 00:23:33.516 SGL Bit Bucket Descriptor: Not Supported 00:23:33.516 SGL Metadata Pointer: Not Supported 00:23:33.516 Oversized SGL: Not Supported 00:23:33.516 SGL Metadata Address: Not Supported 00:23:33.516 SGL Offset: Supported 00:23:33.516 Transport SGL Data Block: Not Supported 
00:23:33.516 Replay Protected Memory Block: Not Supported 00:23:33.516 00:23:33.516 Firmware Slot Information 00:23:33.516 ========================= 00:23:33.516 Active slot: 1 00:23:33.516 Slot 1 Firmware Revision: 25.01 00:23:33.516 00:23:33.516 00:23:33.516 Commands Supported and Effects 00:23:33.516 ============================== 00:23:33.516 Admin Commands 00:23:33.516 -------------- 00:23:33.516 Get Log Page (02h): Supported 00:23:33.516 Identify (06h): Supported 00:23:33.516 Abort (08h): Supported 00:23:33.516 Set Features (09h): Supported 00:23:33.516 Get Features (0Ah): Supported 00:23:33.516 Asynchronous Event Request (0Ch): Supported 00:23:33.516 Keep Alive (18h): Supported 00:23:33.516 I/O Commands 00:23:33.516 ------------ 00:23:33.516 Flush (00h): Supported LBA-Change 00:23:33.516 Write (01h): Supported LBA-Change 00:23:33.516 Read (02h): Supported 00:23:33.516 Compare (05h): Supported 00:23:33.516 Write Zeroes (08h): Supported LBA-Change 00:23:33.516 Dataset Management (09h): Supported LBA-Change 00:23:33.516 Copy (19h): Supported LBA-Change 00:23:33.516 00:23:33.516 Error Log 00:23:33.516 ========= 00:23:33.516 00:23:33.516 Arbitration 00:23:33.516 =========== 00:23:33.516 Arbitration Burst: 1 00:23:33.516 00:23:33.516 Power Management 00:23:33.516 ================ 00:23:33.516 Number of Power States: 1 00:23:33.516 Current Power State: Power State #0 00:23:33.516 Power State #0: 00:23:33.516 Max Power: 0.00 W 00:23:33.516 Non-Operational State: Operational 00:23:33.516 Entry Latency: Not Reported 00:23:33.516 Exit Latency: Not Reported 00:23:33.516 Relative Read Throughput: 0 00:23:33.516 Relative Read Latency: 0 00:23:33.516 Relative Write Throughput: 0 00:23:33.516 Relative Write Latency: 0 00:23:33.516 Idle Power: Not Reported 00:23:33.516 Active Power: Not Reported 00:23:33.516 Non-Operational Permissive Mode: Not Supported 00:23:33.516 00:23:33.516 Health Information 00:23:33.516 ================== 00:23:33.516 Critical Warnings: 00:23:33.516 
Available Spare Space: OK 00:23:33.516 Temperature: OK 00:23:33.516 Device Reliability: OK 00:23:33.516 Read Only: No 00:23:33.516 Volatile Memory Backup: OK 00:23:33.516 Current Temperature: 0 Kelvin (-273 Celsius) 00:23:33.516 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:23:33.516 Available Spare: 0% 00:23:33.516 Available Spare Threshold: 0% 00:23:33.516 Life Percentage Used:[2024-10-14 14:37:14.082585] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:33.516 [2024-10-14 14:37:14.082591] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1ebe620) 00:23:33.516 [2024-10-14 14:37:14.082597] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.516 [2024-10-14 14:37:14.082609] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f1ef00, cid 7, qid 0 00:23:33.516 [2024-10-14 14:37:14.082800] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:33.516 [2024-10-14 14:37:14.082807] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:33.516 [2024-10-14 14:37:14.082810] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:33.516 [2024-10-14 14:37:14.082814] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f1ef00) on tqpair=0x1ebe620 00:23:33.516 [2024-10-14 14:37:14.082845] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:23:33.516 [2024-10-14 14:37:14.082855] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f1e480) on tqpair=0x1ebe620 00:23:33.516 [2024-10-14 14:37:14.082861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.516 [2024-10-14 14:37:14.082866] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f1e600) on tqpair=0x1ebe620 
00:23:33.516 [2024-10-14 14:37:14.082871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.516 [2024-10-14 14:37:14.082876] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f1e780) on tqpair=0x1ebe620 00:23:33.516 [2024-10-14 14:37:14.082880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.517 [2024-10-14 14:37:14.082885] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f1e900) on tqpair=0x1ebe620 00:23:33.517 [2024-10-14 14:37:14.082890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.517 [2024-10-14 14:37:14.082898] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:33.517 [2024-10-14 14:37:14.082903] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:33.517 [2024-10-14 14:37:14.082907] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ebe620) 00:23:33.517 [2024-10-14 14:37:14.082914] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.517 [2024-10-14 14:37:14.082926] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f1e900, cid 3, qid 0 00:23:33.517 [2024-10-14 14:37:14.087077] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:33.517 [2024-10-14 14:37:14.087086] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:33.517 [2024-10-14 14:37:14.087089] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:33.517 [2024-10-14 14:37:14.087093] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f1e900) on tqpair=0x1ebe620 00:23:33.517 [2024-10-14 14:37:14.087100] nvme_tcp.c: 800:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:23:33.517 [2024-10-14 14:37:14.087104] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:33.517 [2024-10-14 14:37:14.087108] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ebe620) 00:23:33.517 [2024-10-14 14:37:14.087115] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.517 [2024-10-14 14:37:14.087129] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f1e900, cid 3, qid 0 00:23:33.517 [2024-10-14 14:37:14.087326] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:33.517 [2024-10-14 14:37:14.087333] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:33.517 [2024-10-14 14:37:14.087337] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:33.517 [2024-10-14 14:37:14.087341] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f1e900) on tqpair=0x1ebe620 00:23:33.517 [2024-10-14 14:37:14.087346] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:23:33.517 [2024-10-14 14:37:14.087350] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:23:33.517 [2024-10-14 14:37:14.087360] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:33.517 [2024-10-14 14:37:14.087364] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:33.517 [2024-10-14 14:37:14.087367] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ebe620) 00:23:33.517 [2024-10-14 14:37:14.087374] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.517 [2024-10-14 14:37:14.087384] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f1e900, cid 3, qid 0 00:23:33.517 
[2024-10-14 14:37:14.087567] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:33.517 [2024-10-14 14:37:14.087573] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:33.517 [2024-10-14 14:37:14.087577] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:33.517 [2024-10-14 14:37:14.087581] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f1e900) on tqpair=0x1ebe620 00:23:33.517 [2024-10-14 14:37:14.087590] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:33.517 [2024-10-14 14:37:14.087594] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:33.517 [2024-10-14 14:37:14.087598] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ebe620) 00:23:33.517 [2024-10-14 14:37:14.087605] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.517 [2024-10-14 14:37:14.087614] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f1e900, cid 3, qid 0 00:23:33.517 [2024-10-14 14:37:14.087835] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:33.517 [2024-10-14 14:37:14.087841] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:33.517 [2024-10-14 14:37:14.087845] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:33.517 [2024-10-14 14:37:14.087849] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f1e900) on tqpair=0x1ebe620 00:23:33.517 [2024-10-14 14:37:14.087861] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:33.517 [2024-10-14 14:37:14.087865] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:33.517 [2024-10-14 14:37:14.087868] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ebe620) 00:23:33.517 [2024-10-14 14:37:14.087875] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.517 [2024-10-14 14:37:14.087885] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f1e900, cid 3, qid 0 00:23:33.517 [2024-10-14 14:37:14.088078] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:33.517 [2024-10-14 14:37:14.088085] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:33.517 [2024-10-14 14:37:14.088089] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:33.517 [2024-10-14 14:37:14.088093] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f1e900) on tqpair=0x1ebe620 00:23:33.517 [2024-10-14 14:37:14.088102] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:33.517 [2024-10-14 14:37:14.088106] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:33.517 [2024-10-14 14:37:14.088109] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ebe620) 00:23:33.517 [2024-10-14 14:37:14.088116] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.517 [2024-10-14 14:37:14.088126] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f1e900, cid 3, qid 0 00:23:33.517 [2024-10-14 14:37:14.088325] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:33.517 [2024-10-14 14:37:14.088331] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:33.517 [2024-10-14 14:37:14.088335] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:33.517 [2024-10-14 14:37:14.088339] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f1e900) on tqpair=0x1ebe620 00:23:33.517 [2024-10-14 14:37:14.088348] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:33.517 [2024-10-14 14:37:14.088352] nvme_tcp.c: 
977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:33.517 [2024-10-14 14:37:14.088356] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ebe620) 00:23:33.517 [2024-10-14 14:37:14.088362] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.517 [2024-10-14 14:37:14.088372] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f1e900, cid 3, qid 0 00:23:33.517 [2024-10-14 14:37:14.088600] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:33.517 [2024-10-14 14:37:14.088606] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:33.517 [2024-10-14 14:37:14.088609] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:33.517 [2024-10-14 14:37:14.088613] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f1e900) on tqpair=0x1ebe620 00:23:33.517 [2024-10-14 14:37:14.088623] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:33.517 [2024-10-14 14:37:14.088627] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:33.517 [2024-10-14 14:37:14.088630] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ebe620) 00:23:33.517 [2024-10-14 14:37:14.088637] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.517 [2024-10-14 14:37:14.088647] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f1e900, cid 3, qid 0 00:23:33.517 [2024-10-14 14:37:14.088876] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:33.517 [2024-10-14 14:37:14.088882] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:33.517 [2024-10-14 14:37:14.088886] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:33.517 [2024-10-14 14:37:14.088890] 
nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f1e900) on tqpair=0x1ebe620 00:23:33.517 [2024-10-14 14:37:14.088899] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:33.517 [2024-10-14 14:37:14.088905] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:33.517 [2024-10-14 14:37:14.088909] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ebe620) 00:23:33.517 [2024-10-14 14:37:14.088915] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.517 [2024-10-14 14:37:14.088925] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f1e900, cid 3, qid 0 00:23:33.517 [2024-10-14 14:37:14.089097] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:33.517 [2024-10-14 14:37:14.089103] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:33.517 [2024-10-14 14:37:14.089107] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:33.517 [2024-10-14 14:37:14.089111] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f1e900) on tqpair=0x1ebe620 00:23:33.517 [2024-10-14 14:37:14.089120] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:33.517 [2024-10-14 14:37:14.089124] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:33.517 [2024-10-14 14:37:14.089128] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ebe620) 00:23:33.517 [2024-10-14 14:37:14.089135] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.517 [2024-10-14 14:37:14.089145] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f1e900, cid 3, qid 0 00:23:33.517 [2024-10-14 14:37:14.089312] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:33.517 [2024-10-14 
14:37:14.089319] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:33.517 [2024-10-14 14:37:14.089322] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:33.517 [2024-10-14 14:37:14.089326] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f1e900) on tqpair=0x1ebe620 00:23:33.517 [2024-10-14 14:37:14.089335] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:33.517 [2024-10-14 14:37:14.089339] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:33.517 [2024-10-14 14:37:14.089343] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ebe620) 00:23:33.517 [2024-10-14 14:37:14.089350] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.518 [2024-10-14 14:37:14.089359] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f1e900, cid 3, qid 0 00:23:33.518 [2024-10-14 14:37:14.089533] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:33.518 [2024-10-14 14:37:14.089539] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:33.518 [2024-10-14 14:37:14.089543] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:33.518 [2024-10-14 14:37:14.089547] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f1e900) on tqpair=0x1ebe620 00:23:33.518 [2024-10-14 14:37:14.089556] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:33.518 [2024-10-14 14:37:14.089560] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:33.518 [2024-10-14 14:37:14.089564] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ebe620) 00:23:33.518 [2024-10-14 14:37:14.089570] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.518 [2024-10-14 
14:37:14.089580] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f1e900, cid 3, qid 0 00:23:33.518 [2024-10-14 14:37:14.089766] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:33.518 [2024-10-14 14:37:14.089772] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:33.518 [2024-10-14 14:37:14.089775] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:33.518 [2024-10-14 14:37:14.089779] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f1e900) on tqpair=0x1ebe620 00:23:33.518 [2024-10-14 14:37:14.089789] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:33.518 [2024-10-14 14:37:14.089793] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:33.518 [2024-10-14 14:37:14.089798] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ebe620) 00:23:33.518 [2024-10-14 14:37:14.089805] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.518 [2024-10-14 14:37:14.089815] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f1e900, cid 3, qid 0 00:23:33.518 [2024-10-14 14:37:14.090038] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:33.518 [2024-10-14 14:37:14.090045] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:33.518 [2024-10-14 14:37:14.090048] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:33.518 [2024-10-14 14:37:14.090052] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f1e900) on tqpair=0x1ebe620 00:23:33.518 [2024-10-14 14:37:14.090064] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:33.518 [2024-10-14 14:37:14.090069] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:33.518 [2024-10-14 14:37:14.090072] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=3 on tqpair(0x1ebe620) 00:23:33.518 [2024-10-14 14:37:14.090079] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.518 [2024-10-14 14:37:14.090089] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f1e900, cid 3, qid 0 00:23:33.518 [2024-10-14 14:37:14.090310] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:33.518 [2024-10-14 14:37:14.090316] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:33.518 [2024-10-14 14:37:14.090320] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:33.518 [2024-10-14 14:37:14.090323] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f1e900) on tqpair=0x1ebe620 00:23:33.518 [2024-10-14 14:37:14.090333] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:33.518 [2024-10-14 14:37:14.090337] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:33.518 [2024-10-14 14:37:14.090340] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ebe620) 00:23:33.518 [2024-10-14 14:37:14.090347] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.518 [2024-10-14 14:37:14.090357] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f1e900, cid 3, qid 0 00:23:33.518 [2024-10-14 14:37:14.090579] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:33.518 [2024-10-14 14:37:14.090586] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:33.518 [2024-10-14 14:37:14.090589] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:33.518 [2024-10-14 14:37:14.090593] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f1e900) on tqpair=0x1ebe620 00:23:33.518 [2024-10-14 14:37:14.090602] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:33.518 [2024-10-14 14:37:14.090606] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:33.518 [2024-10-14 14:37:14.090610] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ebe620) 00:23:33.518 [2024-10-14 14:37:14.090617] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.518 [2024-10-14 14:37:14.090626] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f1e900, cid 3, qid 0 00:23:33.518 [2024-10-14 14:37:14.090859] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:33.518 [2024-10-14 14:37:14.090865] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:33.518 [2024-10-14 14:37:14.090869] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:33.518 [2024-10-14 14:37:14.090873] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f1e900) on tqpair=0x1ebe620 00:23:33.518 [2024-10-14 14:37:14.090882] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:33.518 [2024-10-14 14:37:14.090886] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:33.518 [2024-10-14 14:37:14.090889] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ebe620) 00:23:33.518 [2024-10-14 14:37:14.090898] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.518 [2024-10-14 14:37:14.090908] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f1e900, cid 3, qid 0 00:23:33.518 [2024-10-14 14:37:14.095071] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:33.518 [2024-10-14 14:37:14.095079] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:33.518 [2024-10-14 14:37:14.095083] 
nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:33.518 [2024-10-14 14:37:14.095087] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f1e900) on tqpair=0x1ebe620 00:23:33.518 [2024-10-14 14:37:14.095097] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:33.518 [2024-10-14 14:37:14.095100] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:33.518 [2024-10-14 14:37:14.095104] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ebe620) 00:23:33.518 [2024-10-14 14:37:14.095111] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.518 [2024-10-14 14:37:14.095122] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f1e900, cid 3, qid 0 00:23:33.518 [2024-10-14 14:37:14.095301] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:33.518 [2024-10-14 14:37:14.095307] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:33.518 [2024-10-14 14:37:14.095311] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:33.518 [2024-10-14 14:37:14.095315] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f1e900) on tqpair=0x1ebe620 00:23:33.518 [2024-10-14 14:37:14.095322] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:23:33.518 0% 00:23:33.518 Data Units Read: 0 00:23:33.518 Data Units Written: 0 00:23:33.518 Host Read Commands: 0 00:23:33.518 Host Write Commands: 0 00:23:33.518 Controller Busy Time: 0 minutes 00:23:33.518 Power Cycles: 0 00:23:33.518 Power On Hours: 0 hours 00:23:33.518 Unsafe Shutdowns: 0 00:23:33.518 Unrecoverable Media Errors: 0 00:23:33.518 Lifetime Error Log Entries: 0 00:23:33.518 Warning Temperature Time: 0 minutes 00:23:33.518 Critical Temperature Time: 0 minutes 00:23:33.518 00:23:33.518 Number 
of Queues 00:23:33.518 ================ 00:23:33.518 Number of I/O Submission Queues: 127 00:23:33.518 Number of I/O Completion Queues: 127 00:23:33.518 00:23:33.518 Active Namespaces 00:23:33.518 ================= 00:23:33.518 Namespace ID:1 00:23:33.518 Error Recovery Timeout: Unlimited 00:23:33.518 Command Set Identifier: NVM (00h) 00:23:33.518 Deallocate: Supported 00:23:33.518 Deallocated/Unwritten Error: Not Supported 00:23:33.518 Deallocated Read Value: Unknown 00:23:33.518 Deallocate in Write Zeroes: Not Supported 00:23:33.518 Deallocated Guard Field: 0xFFFF 00:23:33.518 Flush: Supported 00:23:33.518 Reservation: Supported 00:23:33.518 Namespace Sharing Capabilities: Multiple Controllers 00:23:33.518 Size (in LBAs): 131072 (0GiB) 00:23:33.518 Capacity (in LBAs): 131072 (0GiB) 00:23:33.518 Utilization (in LBAs): 131072 (0GiB) 00:23:33.518 NGUID: ABCDEF0123456789ABCDEF0123456789 00:23:33.518 EUI64: ABCDEF0123456789 00:23:33.518 UUID: 8a1aa7e5-9834-45fd-8a67-fe1bc98f83a0 00:23:33.518 Thin Provisioning: Not Supported 00:23:33.518 Per-NS Atomic Units: Yes 00:23:33.518 Atomic Boundary Size (Normal): 0 00:23:33.518 Atomic Boundary Size (PFail): 0 00:23:33.518 Atomic Boundary Offset: 0 00:23:33.518 Maximum Single Source Range Length: 65535 00:23:33.518 Maximum Copy Length: 65535 00:23:33.518 Maximum Source Range Count: 1 00:23:33.518 NGUID/EUI64 Never Reused: No 00:23:33.518 Namespace Write Protected: No 00:23:33.518 Number of LBA Formats: 1 00:23:33.518 Current LBA Format: LBA Format #00 00:23:33.518 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:33.518 00:23:33.518 14:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:23:33.518 14:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:33.518 14:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.518 14:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@10 -- # set +x 00:23:33.518 14:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.518 14:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:23:33.518 14:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:23:33.518 14:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:33.518 14:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:23:33.518 14:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:33.518 14:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:23:33.518 14:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:33.518 14:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:33.518 rmmod nvme_tcp 00:23:33.518 rmmod nvme_fabrics 00:23:33.518 rmmod nvme_keyring 00:23:33.518 14:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:33.518 14:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:23:33.518 14:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:23:33.518 14:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@515 -- # '[' -n 3481294 ']' 00:23:33.518 14:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # killprocess 3481294 00:23:33.519 14:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 3481294 ']' 00:23:33.519 14:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 3481294 00:23:33.519 14:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:23:33.519 14:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:33.519 14:37:14 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3481294 00:23:33.779 14:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:33.779 14:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:33.779 14:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3481294' 00:23:33.779 killing process with pid 3481294 00:23:33.779 14:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 3481294 00:23:33.779 14:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 3481294 00:23:33.779 14:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:33.779 14:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:33.779 14:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:33.779 14:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:23:33.779 14:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # iptables-save 00:23:33.779 14:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:33.779 14:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # iptables-restore 00:23:33.779 14:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:33.779 14:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:33.779 14:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:33.779 14:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:33.779 14:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:23:35.904 14:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:35.904 00:23:35.904 real 0m11.356s 00:23:35.904 user 0m8.209s 00:23:35.904 sys 0m6.010s 00:23:35.904 14:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:35.904 14:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:35.904 ************************************ 00:23:35.904 END TEST nvmf_identify 00:23:35.904 ************************************ 00:23:35.904 14:37:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:35.904 14:37:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:35.904 14:37:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:35.904 14:37:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.904 ************************************ 00:23:35.904 START TEST nvmf_perf 00:23:35.904 ************************************ 00:23:35.904 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:36.165 * Looking for test storage... 
00:23:36.165 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:36.165 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:36.165 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lcov --version 00:23:36.165 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:36.165 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:36.165 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:36.165 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:36.165 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:36.165 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:23:36.165 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:23:36.165 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:23:36.165 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:23:36.165 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:23:36.165 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:23:36.165 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:23:36.165 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:36.165 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:23:36.165 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:23:36.165 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:36.165 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:36.165 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:23:36.165 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:23:36.165 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:36.165 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:23:36.165 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:23:36.165 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:23:36.165 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:23:36.165 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:36.165 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:23:36.165 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:23:36.165 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:36.165 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:36.165 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:23:36.165 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:36.165 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:36.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:36.165 --rc genhtml_branch_coverage=1 00:23:36.165 --rc genhtml_function_coverage=1 00:23:36.165 --rc genhtml_legend=1 00:23:36.166 --rc geninfo_all_blocks=1 00:23:36.166 --rc geninfo_unexecuted_blocks=1 00:23:36.166 00:23:36.166 ' 00:23:36.166 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:36.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:23:36.166 --rc genhtml_branch_coverage=1 00:23:36.166 --rc genhtml_function_coverage=1 00:23:36.166 --rc genhtml_legend=1 00:23:36.166 --rc geninfo_all_blocks=1 00:23:36.166 --rc geninfo_unexecuted_blocks=1 00:23:36.166 00:23:36.166 ' 00:23:36.166 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:36.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:36.166 --rc genhtml_branch_coverage=1 00:23:36.166 --rc genhtml_function_coverage=1 00:23:36.166 --rc genhtml_legend=1 00:23:36.166 --rc geninfo_all_blocks=1 00:23:36.166 --rc geninfo_unexecuted_blocks=1 00:23:36.166 00:23:36.166 ' 00:23:36.166 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:36.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:36.166 --rc genhtml_branch_coverage=1 00:23:36.166 --rc genhtml_function_coverage=1 00:23:36.166 --rc genhtml_legend=1 00:23:36.166 --rc geninfo_all_blocks=1 00:23:36.166 --rc geninfo_unexecuted_blocks=1 00:23:36.166 00:23:36.166 ' 00:23:36.166 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:36.166 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:23:36.166 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:36.166 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:36.166 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:36.166 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:36.166 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:36.166 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:36.166 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:23:36.166 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:36.166 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:36.166 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:36.166 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:36.166 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:36.166 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:36.166 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:36.166 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:36.166 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:36.166 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:36.166 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:23:36.166 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:36.166 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:36.166 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:36.166 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.166 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.166 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.166 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export 
PATH 00:23:36.166 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.166 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:23:36.166 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:36.166 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:36.166 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:36.166 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:36.166 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:36.166 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:36.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:36.166 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:36.166 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:36.166 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:36.166 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:36.166 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:36.166 14:37:16 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:36.166 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:23:36.166 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:36.166 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:36.166 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:36.166 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:36.166 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:36.166 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:36.166 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:36.166 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:36.166 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:36.166 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:36.166 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:23:36.166 14:37:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:44.308 14:37:24 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:44.308 
14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:44.308 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:44.308 Found 0000:31:00.1 (0x8086 - 
0x159b) 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:44.308 Found net devices under 0000:31:00.0: cvl_0_0 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:44.308 14:37:24 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:44.308 Found net devices under 0000:31:00.1: cvl_0_1 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # is_hw=yes 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:44.308 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:44.309 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:44.309 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:44.309 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:44.309 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT' 00:23:44.309 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:44.309 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:44.309 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.696 ms 00:23:44.309 00:23:44.309 --- 10.0.0.2 ping statistics --- 00:23:44.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:44.309 rtt min/avg/max/mdev = 0.696/0.696/0.696/0.000 ms 00:23:44.309 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:44.309 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:44.309 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:23:44.309 00:23:44.309 --- 10.0.0.1 ping statistics --- 00:23:44.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:44.309 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:23:44.309 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:44.309 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # return 0 00:23:44.309 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:44.309 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:44.309 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:44.309 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:44.309 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:44.309 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:44.309 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:44.309 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:23:44.309 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@505 -- # timing_enter 
start_nvmf_tgt 00:23:44.309 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:44.309 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:44.309 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # nvmfpid=3485995 00:23:44.309 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # waitforlisten 3485995 00:23:44.309 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:44.309 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 3485995 ']' 00:23:44.309 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:44.309 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:44.309 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:44.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:44.309 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:44.309 14:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:44.309 [2024-10-14 14:37:24.442744] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 
00:23:44.309 [2024-10-14 14:37:24.442795] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:44.309 [2024-10-14 14:37:24.511544] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:44.309 [2024-10-14 14:37:24.547199] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:44.309 [2024-10-14 14:37:24.547229] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:44.309 [2024-10-14 14:37:24.547237] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:44.309 [2024-10-14 14:37:24.547243] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:44.309 [2024-10-14 14:37:24.547249] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:44.309 [2024-10-14 14:37:24.548921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:44.309 [2024-10-14 14:37:24.549034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:44.309 [2024-10-14 14:37:24.549190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:44.309 [2024-10-14 14:37:24.549330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:44.570 14:37:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:44.570 14:37:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:23:44.570 14:37:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:44.570 14:37:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:44.570 14:37:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:44.570 14:37:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:44.570 14:37:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:23:44.570 14:37:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:23:45.142 14:37:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:23:45.142 14:37:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:23:45.402 14:37:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:23:45.402 14:37:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:45.663 14:37:26 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:23:45.663 14:37:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:23:45.663 14:37:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:23:45.663 14:37:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:23:45.663 14:37:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:45.663 [2024-10-14 14:37:26.316129] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:45.663 14:37:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:45.924 14:37:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:45.924 14:37:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:46.185 14:37:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:46.185 14:37:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:23:46.185 14:37:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:46.445 [2024-10-14 14:37:27.038879] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:46.445 14:37:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:23:46.705 14:37:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:23:46.705 14:37:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:23:46.705 14:37:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:23:46.705 14:37:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:23:48.087 Initializing NVMe Controllers 00:23:48.087 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:23:48.087 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:23:48.087 Initialization complete. Launching workers. 00:23:48.087 ======================================================== 00:23:48.087 Latency(us) 00:23:48.087 Device Information : IOPS MiB/s Average min max 00:23:48.087 PCIE (0000:65:00.0) NSID 1 from core 0: 78838.54 307.96 405.47 13.26 4718.94 00:23:48.087 ======================================================== 00:23:48.087 Total : 78838.54 307.96 405.47 13.26 4718.94 00:23:48.087 00:23:48.087 14:37:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:49.029 Initializing NVMe Controllers 00:23:49.029 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:49.029 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:49.029 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:49.029 Initialization complete. Launching workers. 
00:23:49.029 ======================================================== 00:23:49.029 Latency(us) 00:23:49.029 Device Information : IOPS MiB/s Average min max 00:23:49.029 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 65.00 0.25 15388.55 224.77 45630.57 00:23:49.029 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 78.00 0.30 13096.83 7950.20 47889.21 00:23:49.029 ======================================================== 00:23:49.029 Total : 143.00 0.56 14138.52 224.77 47889.21 00:23:49.029 00:23:49.029 14:37:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:50.414 Initializing NVMe Controllers 00:23:50.414 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:50.414 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:50.414 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:50.414 Initialization complete. Launching workers. 
00:23:50.414 ======================================================== 00:23:50.414 Latency(us) 00:23:50.414 Device Information : IOPS MiB/s Average min max 00:23:50.414 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10450.89 40.82 3061.86 508.70 7321.03 00:23:50.414 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3791.08 14.81 8496.42 5679.34 47801.05 00:23:50.414 ======================================================== 00:23:50.414 Total : 14241.97 55.63 4508.49 508.70 47801.05 00:23:50.414 00:23:50.414 14:37:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:23:50.414 14:37:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:23:50.414 14:37:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:52.962 Initializing NVMe Controllers 00:23:52.962 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:52.962 Controller IO queue size 128, less than required. 00:23:52.962 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:52.962 Controller IO queue size 128, less than required. 00:23:52.962 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:52.962 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:52.962 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:52.962 Initialization complete. Launching workers. 
00:23:52.962 ======================================================== 00:23:52.962 Latency(us) 00:23:52.962 Device Information : IOPS MiB/s Average min max 00:23:52.962 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1762.58 440.64 73512.35 49686.91 135745.86 00:23:52.962 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 605.67 151.42 219417.68 69196.02 325906.05 00:23:52.962 ======================================================== 00:23:52.962 Total : 2368.24 592.06 110826.95 49686.91 325906.05 00:23:52.962 00:23:52.962 14:37:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:23:53.224 No valid NVMe controllers or AIO or URING devices found 00:23:53.224 Initializing NVMe Controllers 00:23:53.224 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:53.224 Controller IO queue size 128, less than required. 00:23:53.224 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:53.224 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:23:53.224 Controller IO queue size 128, less than required. 00:23:53.224 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:53.224 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:23:53.224 WARNING: Some requested NVMe devices were skipped 00:23:53.224 14:37:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:23:55.773 Initializing NVMe Controllers 00:23:55.773 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:55.773 Controller IO queue size 128, less than required. 00:23:55.773 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:55.773 Controller IO queue size 128, less than required. 00:23:55.773 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:55.773 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:55.773 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:55.773 Initialization complete. Launching workers. 
00:23:55.773 00:23:55.773 ==================== 00:23:55.773 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:23:55.773 TCP transport: 00:23:55.773 polls: 22131 00:23:55.773 idle_polls: 12071 00:23:55.773 sock_completions: 10060 00:23:55.773 nvme_completions: 7693 00:23:55.773 submitted_requests: 11532 00:23:55.773 queued_requests: 1 00:23:55.773 00:23:55.773 ==================== 00:23:55.773 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:23:55.773 TCP transport: 00:23:55.773 polls: 20467 00:23:55.773 idle_polls: 10476 00:23:55.773 sock_completions: 9991 00:23:55.773 nvme_completions: 6415 00:23:55.773 submitted_requests: 9662 00:23:55.773 queued_requests: 1 00:23:55.773 ======================================================== 00:23:55.773 Latency(us) 00:23:55.773 Device Information : IOPS MiB/s Average min max 00:23:55.773 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1919.37 479.84 67335.58 36354.60 121563.12 00:23:55.773 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1600.48 400.12 81182.27 37204.38 129286.54 00:23:55.773 ======================================================== 00:23:55.773 Total : 3519.85 879.96 73631.67 36354.60 129286.54 00:23:55.773 00:23:55.773 14:37:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:23:55.773 14:37:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:56.033 14:37:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:23:56.033 14:37:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:23:56.033 14:37:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:23:56.033 14:37:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:56.033 14:37:36 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:23:56.033 14:37:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:56.033 14:37:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:23:56.033 14:37:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:56.033 14:37:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:56.033 rmmod nvme_tcp 00:23:56.033 rmmod nvme_fabrics 00:23:56.033 rmmod nvme_keyring 00:23:56.033 14:37:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:56.033 14:37:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:23:56.033 14:37:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:23:56.033 14:37:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@515 -- # '[' -n 3485995 ']' 00:23:56.033 14:37:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # killprocess 3485995 00:23:56.033 14:37:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 3485995 ']' 00:23:56.033 14:37:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 3485995 00:23:56.033 14:37:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:23:56.033 14:37:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:56.033 14:37:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3485995 00:23:56.293 14:37:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:56.293 14:37:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:56.293 14:37:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3485995' 00:23:56.293 killing process with pid 3485995 00:23:56.293 14:37:36 nvmf_tcp.nvmf_host.nvmf_perf 
-- common/autotest_common.sh@969 -- # kill 3485995 00:23:56.293 14:37:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 3485995 00:23:58.202 14:37:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:58.202 14:37:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:58.202 14:37:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:58.202 14:37:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:23:58.202 14:37:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # iptables-save 00:23:58.202 14:37:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # iptables-restore 00:23:58.202 14:37:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:58.202 14:37:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:58.202 14:37:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:58.202 14:37:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:58.202 14:37:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:58.202 14:37:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:00.113 14:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:00.375 00:24:00.375 real 0m24.284s 00:24:00.375 user 0m58.464s 00:24:00.375 sys 0m8.378s 00:24:00.375 14:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:00.375 14:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:00.375 ************************************ 00:24:00.375 END TEST nvmf_perf 00:24:00.375 ************************************ 00:24:00.375 14:37:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test 
nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:00.375 14:37:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:00.375 14:37:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:00.375 14:37:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.375 ************************************ 00:24:00.375 START TEST nvmf_fio_host 00:24:00.375 ************************************ 00:24:00.375 14:37:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:00.375 * Looking for test storage... 00:24:00.375 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:00.375 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:00.375 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lcov --version 00:24:00.375 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:00.637 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:00.638 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:00.638 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:00.638 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:00.638 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:00.638 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:00.638 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:00.638 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:00.638 14:37:41 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:00.638 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:00.638 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:00.638 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:00.638 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:24:00.638 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:24:00.638 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:00.638 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:00.638 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:24:00.638 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:24:00.638 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:00.638 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:24:00.638 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:00.638 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:24:00.638 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:24:00.638 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:00.638 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:24:00.638 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:00.638 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:00.638 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:00.638 14:37:41 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:24:00.638 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:00.638 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:00.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:00.638 --rc genhtml_branch_coverage=1 00:24:00.638 --rc genhtml_function_coverage=1 00:24:00.638 --rc genhtml_legend=1 00:24:00.638 --rc geninfo_all_blocks=1 00:24:00.638 --rc geninfo_unexecuted_blocks=1 00:24:00.638 00:24:00.638 ' 00:24:00.638 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:00.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:00.638 --rc genhtml_branch_coverage=1 00:24:00.638 --rc genhtml_function_coverage=1 00:24:00.638 --rc genhtml_legend=1 00:24:00.638 --rc geninfo_all_blocks=1 00:24:00.638 --rc geninfo_unexecuted_blocks=1 00:24:00.638 00:24:00.638 ' 00:24:00.638 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:00.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:00.638 --rc genhtml_branch_coverage=1 00:24:00.638 --rc genhtml_function_coverage=1 00:24:00.638 --rc genhtml_legend=1 00:24:00.638 --rc geninfo_all_blocks=1 00:24:00.638 --rc geninfo_unexecuted_blocks=1 00:24:00.638 00:24:00.638 ' 00:24:00.638 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:00.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:00.638 --rc genhtml_branch_coverage=1 00:24:00.638 --rc genhtml_function_coverage=1 00:24:00.638 --rc genhtml_legend=1 00:24:00.638 --rc geninfo_all_blocks=1 00:24:00.638 --rc geninfo_unexecuted_blocks=1 00:24:00.638 00:24:00.638 ' 00:24:00.638 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:00.638 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:00.638 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:00.638 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:00.638 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:00.638 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.638 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.638 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.638 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:00.638 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.638 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:00.638 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:24:00.638 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:00.638 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:00.638 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:00.638 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:00.638 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:00.638 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:00.638 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:00.638 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:00.638 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:00.638 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:00.638 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:00.638 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:00.638 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:00.638 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:00.638 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:00.638 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:00.638 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:00.638 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:00.638 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:00.638 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:00.638 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:00.639 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.639 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.639 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.639 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:00.639 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.639 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:24:00.639 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:00.639 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:00.639 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:00.639 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:00.639 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:00.639 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:00.639 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:00.639 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:00.639 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:00.639 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:00.639 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:00.639 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:24:00.639 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:24:00.639 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:00.639 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:00.639 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:00.639 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:00.639 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:00.639 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:00.639 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:00.639 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:24:00.639 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:24:00.639 14:37:41 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:24:00.639 14:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.775 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:08.775 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:24:08.775 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:08.775 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:08.775 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:08.775 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:08.775 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:08.775 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:24:08.775 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:08.775 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:24:08.775 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:24:08.775 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:24:08.775 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:24:08.775 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:24:08.775 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:24:08.775 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:08.775 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:08.775 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:08.775 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:08.775 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:08.775 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:08.775 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:08.775 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:08.775 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:08.775 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:08.775 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:08.775 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:08.775 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:08.775 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:08.775 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:08.775 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:08.775 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:08.775 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:08.775 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:08.775 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:31:00.0 (0x8086 - 0x159b)' 00:24:08.775 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:08.775 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:08.775 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:08.775 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:08.775 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:08.776 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:08.776 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:08.776 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:08.776 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:08.776 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:08.776 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:08.776 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:08.776 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:08.776 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:08.776 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:08.776 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:08.776 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:08.776 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:08.776 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:08.776 14:37:48 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:08.776 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:08.776 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:08.776 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:08.776 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:08.776 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:08.776 Found net devices under 0000:31:00.0: cvl_0_0 00:24:08.776 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:08.776 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:08.776 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:08.776 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:08.776 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:08.776 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:08.776 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:08.776 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:08.776 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:08.776 Found net devices under 0000:31:00.1: cvl_0_1 00:24:08.776 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:08.776 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@430 -- # (( 2 == 0 )) 
00:24:08.776 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # is_hw=yes 00:24:08.776 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:24:08.776 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:24:08.776 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:24:08.776 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:08.776 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:08.776 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:08.776 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:08.776 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:08.776 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:08.776 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:08.776 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:08.776 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:08.776 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:08.776 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:08.776 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:08.776 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:08.776 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:08.776 14:37:48 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:08.776 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:08.776 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:08.776 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:08.776 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:08.776 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:08.776 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:08.776 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:08.776 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:08.776 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:08.776 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.693 ms 00:24:08.776 00:24:08.776 --- 10.0.0.2 ping statistics --- 00:24:08.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:08.776 rtt min/avg/max/mdev = 0.693/0.693/0.693/0.000 ms 00:24:08.776 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:08.776 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:08.776 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:24:08.776 00:24:08.776 --- 10.0.0.1 ping statistics --- 00:24:08.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:08.776 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:24:08.776 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:08.776 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # return 0 00:24:08.776 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:24:08.776 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:08.776 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:24:08.776 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:24:08.776 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:08.776 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:24:08.776 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:24:08.776 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:24:08.776 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:24:08.776 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:08.776 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.776 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3493091 00:24:08.776 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:08.776 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:08.776 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3493091 00:24:08.776 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 3493091 ']' 00:24:08.776 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:08.776 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:08.776 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:08.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:08.776 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:08.776 14:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.776 [2024-10-14 14:37:48.629307] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:24:08.776 [2024-10-14 14:37:48.629373] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:08.776 [2024-10-14 14:37:48.702822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:08.776 [2024-10-14 14:37:48.745883] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:08.776 [2024-10-14 14:37:48.745922] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:08.776 [2024-10-14 14:37:48.745930] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:08.777 [2024-10-14 14:37:48.745937] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:08.777 [2024-10-14 14:37:48.745943] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:08.777 [2024-10-14 14:37:48.747894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:08.777 [2024-10-14 14:37:48.748026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:08.777 [2024-10-14 14:37:48.748188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:08.777 [2024-10-14 14:37:48.748189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:08.777 14:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:08.777 14:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:24:08.777 14:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:09.036 [2024-10-14 14:37:49.587718] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:09.036 14:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:24:09.036 14:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:09.036 14:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.036 14:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:24:09.295 Malloc1 00:24:09.295 14:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:09.554 14:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:09.555 14:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:09.815 [2024-10-14 14:37:50.400767] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:09.815 14:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:10.075 14:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:10.075 14:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:10.075 14:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:10.075 14:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:10.075 14:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:10.075 14:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:10.075 14:37:50 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:10.075 14:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:24:10.075 14:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:10.075 14:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:10.075 14:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:24:10.075 14:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:10.075 14:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:10.075 14:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:10.075 14:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:10.075 14:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:10.075 14:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:10.075 14:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:10.075 14:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:10.075 14:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:10.075 14:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:10.075 14:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:10.075 14:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:10.336 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:10.336 fio-3.35 00:24:10.336 Starting 1 thread 00:24:12.875 00:24:12.875 test: (groupid=0, jobs=1): err= 0: pid=3493655: Mon Oct 14 14:37:53 2024 00:24:12.875 read: IOPS=13.3k, BW=52.0MiB/s (54.5MB/s)(104MiB/2004msec) 00:24:12.875 slat (usec): min=2, max=277, avg= 2.15, stdev= 2.40 00:24:12.875 clat (usec): min=3561, max=8959, avg=5284.52, stdev=748.56 00:24:12.875 lat (usec): min=3563, max=8961, avg=5286.67, stdev=748.58 00:24:12.875 clat percentiles (usec): 00:24:12.875 | 1.00th=[ 4293], 5.00th=[ 4555], 10.00th=[ 4686], 20.00th=[ 4817], 00:24:12.875 | 30.00th=[ 4948], 40.00th=[ 5014], 50.00th=[ 5080], 60.00th=[ 5211], 00:24:12.875 | 70.00th=[ 5276], 80.00th=[ 5473], 90.00th=[ 5932], 95.00th=[ 7242], 00:24:12.875 | 99.00th=[ 7898], 99.50th=[ 8094], 99.90th=[ 8455], 99.95th=[ 8586], 00:24:12.875 | 99.99th=[ 8979] 00:24:12.875 bw ( KiB/s): min=44976, max=56160, per=99.90%, avg=53212.00, stdev=5492.92, samples=4 00:24:12.875 iops : min=11244, max=14040, avg=13303.00, stdev=1373.23, samples=4 00:24:12.875 write: IOPS=13.3k, BW=52.0MiB/s (54.5MB/s)(104MiB/2004msec); 0 zone resets 00:24:12.875 slat (usec): min=2, max=203, avg= 2.21, stdev= 1.45 00:24:12.875 clat (usec): min=2710, max=7736, avg=4267.96, stdev=617.17 00:24:12.875 lat (usec): min=2728, max=7738, avg=4270.17, stdev=617.21 00:24:12.875 clat percentiles (usec): 00:24:12.875 | 1.00th=[ 3458], 5.00th=[ 3654], 10.00th=[ 3785], 20.00th=[ 3884], 00:24:12.875 | 30.00th=[ 3982], 40.00th=[ 4047], 50.00th=[ 4113], 60.00th=[ 4228], 00:24:12.875 | 70.00th=[ 
4293], 80.00th=[ 4424], 90.00th=[ 4948], 95.00th=[ 5932], 00:24:12.875 | 99.00th=[ 6390], 99.50th=[ 6521], 99.90th=[ 6915], 99.95th=[ 6980], 00:24:12.875 | 99.99th=[ 7570] 00:24:12.875 bw ( KiB/s): min=45384, max=56128, per=100.00%, avg=53218.00, stdev=5227.20, samples=4 00:24:12.875 iops : min=11346, max=14032, avg=13304.50, stdev=1306.80, samples=4 00:24:12.875 lat (msec) : 4=16.81%, 10=83.19% 00:24:12.875 cpu : usr=74.09%, sys=24.61%, ctx=32, majf=0, minf=17 00:24:12.875 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:24:12.875 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:12.875 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:12.875 issued rwts: total=26686,26656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:12.875 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:12.875 00:24:12.875 Run status group 0 (all jobs): 00:24:12.875 READ: bw=52.0MiB/s (54.5MB/s), 52.0MiB/s-52.0MiB/s (54.5MB/s-54.5MB/s), io=104MiB (109MB), run=2004-2004msec 00:24:12.875 WRITE: bw=52.0MiB/s (54.5MB/s), 52.0MiB/s-52.0MiB/s (54.5MB/s-54.5MB/s), io=104MiB (109MB), run=2004-2004msec 00:24:12.875 14:37:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:12.875 14:37:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:12.875 14:37:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:12.875 14:37:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:24:12.875 14:37:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:12.875 14:37:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:12.875 14:37:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:24:12.875 14:37:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:12.875 14:37:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:12.875 14:37:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:12.875 14:37:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:24:12.875 14:37:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:12.875 14:37:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:12.875 14:37:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:12.875 14:37:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:12.875 14:37:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:12.875 14:37:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:12.875 14:37:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:12.875 14:37:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:12.875 14:37:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:12.875 
14:37:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:12.875 14:37:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:13.135 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:13.135 fio-3.35 00:24:13.135 Starting 1 thread 00:24:15.676 00:24:15.676 test: (groupid=0, jobs=1): err= 0: pid=3494483: Mon Oct 14 14:37:56 2024 00:24:15.676 read: IOPS=9357, BW=146MiB/s (153MB/s)(294MiB/2008msec) 00:24:15.676 slat (usec): min=3, max=110, avg= 3.60, stdev= 1.63 00:24:15.676 clat (usec): min=1484, max=17628, avg=8261.95, stdev=1972.81 00:24:15.676 lat (usec): min=1487, max=17631, avg=8265.55, stdev=1972.98 00:24:15.676 clat percentiles (usec): 00:24:15.676 | 1.00th=[ 4293], 5.00th=[ 5211], 10.00th=[ 5735], 20.00th=[ 6456], 00:24:15.676 | 30.00th=[ 7111], 40.00th=[ 7701], 50.00th=[ 8160], 60.00th=[ 8717], 00:24:15.676 | 70.00th=[ 9372], 80.00th=[10159], 90.00th=[10814], 95.00th=[11338], 00:24:15.676 | 99.00th=[12911], 99.50th=[13698], 99.90th=[14484], 99.95th=[14746], 00:24:15.676 | 99.99th=[16581] 00:24:15.676 bw ( KiB/s): min=69664, max=82618, per=49.27%, avg=73758.50, stdev=5961.57, samples=4 00:24:15.676 iops : min= 4354, max= 5163, avg=4609.75, stdev=372.29, samples=4 00:24:15.676 write: IOPS=5370, BW=83.9MiB/s (88.0MB/s)(151MiB/1799msec); 0 zone resets 00:24:15.676 slat (usec): min=39, max=450, avg=41.05, stdev= 8.60 00:24:15.676 clat (usec): min=2677, max=16417, avg=9556.67, stdev=1632.08 00:24:15.676 lat (usec): min=2717, max=16554, avg=9597.71, stdev=1634.15 00:24:15.676 clat percentiles (usec): 00:24:15.676 | 1.00th=[ 6456], 5.00th=[ 7177], 10.00th=[ 7570], 20.00th=[ 8160], 
00:24:15.676 | 30.00th=[ 8586], 40.00th=[ 9110], 50.00th=[ 9503], 60.00th=[ 9896], 00:24:15.676 | 70.00th=[10290], 80.00th=[10814], 90.00th=[11731], 95.00th=[12387], 00:24:15.676 | 99.00th=[13960], 99.50th=[14484], 99.90th=[15795], 99.95th=[16057], 00:24:15.676 | 99.99th=[16450] 00:24:15.676 bw ( KiB/s): min=72864, max=86099, per=89.42%, avg=76836.75, stdev=6251.00, samples=4 00:24:15.676 iops : min= 4554, max= 5381, avg=4802.25, stdev=390.59, samples=4 00:24:15.676 lat (msec) : 2=0.04%, 4=0.42%, 10=73.51%, 20=26.04% 00:24:15.676 cpu : usr=84.75%, sys=13.75%, ctx=17, majf=0, minf=25 00:24:15.676 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:24:15.676 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:15.676 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:15.676 issued rwts: total=18789,9661,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:15.676 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:15.676 00:24:15.676 Run status group 0 (all jobs): 00:24:15.676 READ: bw=146MiB/s (153MB/s), 146MiB/s-146MiB/s (153MB/s-153MB/s), io=294MiB (308MB), run=2008-2008msec 00:24:15.676 WRITE: bw=83.9MiB/s (88.0MB/s), 83.9MiB/s-83.9MiB/s (88.0MB/s-88.0MB/s), io=151MiB (158MB), run=1799-1799msec 00:24:15.676 14:37:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:15.937 14:37:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:24:15.937 14:37:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:15.937 14:37:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:24:15.937 14:37:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:24:15.937 14:37:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@514 -- # nvmfcleanup 00:24:15.937 
14:37:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:24:15.937 14:37:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:15.937 14:37:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:24:15.937 14:37:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:15.937 14:37:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:15.937 rmmod nvme_tcp 00:24:15.937 rmmod nvme_fabrics 00:24:15.937 rmmod nvme_keyring 00:24:15.937 14:37:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:15.937 14:37:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:24:15.937 14:37:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:24:15.937 14:37:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@515 -- # '[' -n 3493091 ']' 00:24:15.937 14:37:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # killprocess 3493091 00:24:15.937 14:37:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 3493091 ']' 00:24:15.937 14:37:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 3493091 00:24:15.937 14:37:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:24:15.937 14:37:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:15.937 14:37:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3493091 00:24:15.937 14:37:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:15.937 14:37:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:15.937 14:37:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3493091' 00:24:15.937 
killing process with pid 3493091 00:24:15.937 14:37:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 3493091 00:24:15.937 14:37:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 3493091 00:24:16.198 14:37:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:16.198 14:37:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:24:16.198 14:37:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:24:16.198 14:37:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:24:16.198 14:37:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # iptables-save 00:24:16.198 14:37:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:24:16.198 14:37:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # iptables-restore 00:24:16.198 14:37:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:16.198 14:37:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:16.198 14:37:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:16.198 14:37:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:16.198 14:37:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:18.108 14:37:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:18.108 00:24:18.108 real 0m17.854s 00:24:18.108 user 1m6.069s 00:24:18.108 sys 0m7.648s 00:24:18.108 14:37:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:18.108 14:37:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.108 ************************************ 00:24:18.108 END 
TEST nvmf_fio_host 00:24:18.108 ************************************ 00:24:18.108 14:37:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:18.108 14:37:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:18.108 14:37:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:18.108 14:37:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.367 ************************************ 00:24:18.367 START TEST nvmf_failover 00:24:18.367 ************************************ 00:24:18.367 14:37:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:18.367 * Looking for test storage... 00:24:18.367 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:18.367 14:37:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:18.367 14:37:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lcov --version 00:24:18.367 14:37:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:18.367 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:18.367 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:18.367 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:18.368 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:18.368 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:24:18.368 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:24:18.368 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- 
scripts/common.sh@337 -- # IFS=.-: 00:24:18.368 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:24:18.368 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:24:18.368 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:24:18.368 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:24:18.368 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:18.368 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:24:18.368 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:24:18.368 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:18.368 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:18.368 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:24:18.368 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:24:18.368 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:18.368 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:24:18.368 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:24:18.368 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:24:18.368 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:24:18.368 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:18.368 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:24:18.368 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:24:18.368 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( 
ver1[v] > ver2[v] )) 00:24:18.368 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:18.368 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:24:18.368 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:18.368 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:18.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:18.368 --rc genhtml_branch_coverage=1 00:24:18.368 --rc genhtml_function_coverage=1 00:24:18.368 --rc genhtml_legend=1 00:24:18.368 --rc geninfo_all_blocks=1 00:24:18.368 --rc geninfo_unexecuted_blocks=1 00:24:18.368 00:24:18.368 ' 00:24:18.368 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:18.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:18.368 --rc genhtml_branch_coverage=1 00:24:18.368 --rc genhtml_function_coverage=1 00:24:18.368 --rc genhtml_legend=1 00:24:18.368 --rc geninfo_all_blocks=1 00:24:18.368 --rc geninfo_unexecuted_blocks=1 00:24:18.368 00:24:18.368 ' 00:24:18.368 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:18.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:18.368 --rc genhtml_branch_coverage=1 00:24:18.368 --rc genhtml_function_coverage=1 00:24:18.368 --rc genhtml_legend=1 00:24:18.368 --rc geninfo_all_blocks=1 00:24:18.368 --rc geninfo_unexecuted_blocks=1 00:24:18.368 00:24:18.368 ' 00:24:18.368 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:18.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:18.368 --rc genhtml_branch_coverage=1 00:24:18.368 --rc genhtml_function_coverage=1 00:24:18.368 --rc genhtml_legend=1 00:24:18.368 --rc geninfo_all_blocks=1 
00:24:18.368 --rc geninfo_unexecuted_blocks=1 00:24:18.368 00:24:18.368 ' 00:24:18.368 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:18.368 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:24:18.368 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:18.368 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:18.368 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:18.368 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:18.368 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:18.368 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:18.368 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:18.368 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:18.368 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:18.368 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:18.368 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:18.368 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:18.368 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:18.368 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:18.368 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:24:18.368 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:18.368 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:18.368 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:24:18.368 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:18.368 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:18.368 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:18.368 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.368 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.368 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.368 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:24:18.368 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.368 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:24:18.368 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:18.368 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:18.368 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:18.368 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:18.368 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:18.368 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:18.368 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:18.368 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:18.368 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:18.368 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:18.368 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:18.368 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:18.368 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover 
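The PATH dumps above keep growing because `paths/export.sh` prepends the same directories every time it is sourced. A small dedup pass (illustrative only, not part of SPDK) that keeps the first occurrence of each entry in order:

```shell
#!/usr/bin/env bash
# Collapse a repeated PATH like the ones echoed above: keep each
# directory the first time it appears, drop later duplicates.
dedup_path() {
    local out= dir seen=:
    local IFS=:                      # split the input on colons
    for dir in $1; do
        case "$seen" in
            *":$dir:"*) ;;           # already kept earlier, skip
            *) out=${out:+$out:}$dir
               seen=$seen$dir:
               ;;
        esac
    done
    printf '%s\n' "$out"
}

dedup_path "/opt/go/bin:/usr/bin:/opt/go/bin:/usr/bin:/sbin"
# -> /opt/go/bin:/usr/bin:/sbin
```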
-- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:18.368 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:18.368 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:24:18.368 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:24:18.368 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:18.368 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:18.368 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:18.368 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:18.368 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:18.368 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:18.368 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:18.628 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:24:18.628 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:24:18.628 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:24:18.628 14:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 
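The `[: : integer expression expected` error recorded in the log above comes from `'[' '' -eq 1 ']'`: the `-eq` operator requires numeric operands, so an empty or unset variable makes the test error out (exit status 2) rather than evaluate to false. A short reproduction and the usual guard:

```shell
#!/usr/bin/env bash
# Reproduce the "[: : integer expression expected" failure seen in the log:
# test's -eq needs integers, and an empty string is not one.

FLAG=""                                # e.g. an option that was never set

if [ "$FLAG" -eq 1 ] 2>/dev/null; then
    echo "flag set"
else
    echo "flag unset or non-numeric"   # this branch runs: the test errored
fi

# Common guard: default the value before the numeric comparison.
if [ "${FLAG:-0}" -eq 1 ]; then
    echo "flag set"
else
    echo "flag clear"                  # this branch runs, cleanly
fi
```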
-- # pci_net_devs=() 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:26.767 14:38:06 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:26.767 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:26.767 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:26.767 Found net devices under 0000:31:00.0: cvl_0_0 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:26.767 Found net devices under 0000:31:00.1: cvl_0_1 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # is_hw=yes 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- 
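The device-discovery loop above finds the net interfaces behind each PCI address by globbing sysfs: the kernel exposes them as `/sys/bus/pci/devices/<addr>/net/<ifname>`, which is how `0000:31:00.0` maps to `cvl_0_0`. A small sketch of that lookup; the sysfs root is parameterized here (my addition, for testability on a fake tree):

```shell
#!/usr/bin/env bash
# Map a PCI address to its network interface names via the sysfs layout
# used by the pci_net_devs glob in the trace above.
net_devs_for_pci() {
    local sysfs_root=$1 pci=$2
    local -a devs=("$sysfs_root/$pci/net/"*)
    [ -e "${devs[0]}" ] || return 1      # glob matched nothing: no net devs
    printf '%s\n' "${devs[@]##*/}"       # strip the path, keep iface names
}

# On a real system the root would be /sys/bus/pci/devices, e.g.:
#   net_devs_for_pci /sys/bus/pci/devices 0000:31:00.0
```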
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:26.767 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:26.767 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.664 ms 00:24:26.767 00:24:26.767 --- 10.0.0.2 ping statistics --- 00:24:26.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:26.767 rtt min/avg/max/mdev = 0.664/0.664/0.664/0.000 ms 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:26.767 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:26.767 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:24:26.767 00:24:26.767 --- 10.0.0.1 ping statistics --- 00:24:26.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:26.767 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # return 0 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:24:26.767 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:26.768 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:24:26.768 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:24:26.768 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:24:26.768 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:26.768 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:26.768 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:26.768 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # nvmfpid=3499207 00:24:26.768 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # waitforlisten 3499207 00:24:26.768 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk 
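The `nvmf_tcp_init` sequence above builds the test topology on a single host: one physical port (`cvl_0_0`) is moved into a fresh network namespace and becomes the target side at 10.0.0.2, while its peer (`cvl_0_1`) stays in the root namespace as the initiator at 10.0.0.1, with an iptables accept rule for port 4420 and a ping in each direction to verify the path. Since this needs root and real NICs, the sketch below only prints the commands it would run; the names are copied from the log but stand in for whatever hardware is present:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace-based TCP topology from the log: the
# target NIC lives in its own netns so target/initiator traffic crosses
# the wire even though both ends are on one machine.
NS=cvl_0_0_ns_spdk
TARGET_IF=cvl_0_0        # becomes 10.0.0.2 inside the namespace
INIT_IF=cvl_0_1          # stays in the root namespace as 10.0.0.1
PORT=4420

run() { echo "+ $*"; }   # replace the echo with "$@" (as root) to apply

run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INIT_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INIT_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INIT_IF" -p tcp --dport "$PORT" -j ACCEPT
run ping -c 1 10.0.0.2                       # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1   # target ns -> initiator
```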
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:26.768 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 3499207 ']' 00:24:26.768 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:26.768 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:26.768 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:26.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:26.768 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:26.768 14:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:26.768 [2024-10-14 14:38:06.657820] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:24:26.768 [2024-10-14 14:38:06.657887] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:26.768 [2024-10-14 14:38:06.747815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:26.768 [2024-10-14 14:38:06.800382] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:26.768 [2024-10-14 14:38:06.800432] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:26.768 [2024-10-14 14:38:06.800440] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:26.768 [2024-10-14 14:38:06.800447] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:24:26.768 [2024-10-14 14:38:06.800453] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:26.768 [2024-10-14 14:38:06.802562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:26.768 [2024-10-14 14:38:06.802729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:26.768 [2024-10-14 14:38:06.802730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:26.768 14:38:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:26.768 14:38:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:24:26.768 14:38:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:26.768 14:38:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:26.768 14:38:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:26.768 14:38:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:26.768 14:38:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:27.027 [2024-10-14 14:38:07.633812] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:27.027 14:38:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:27.288 Malloc0 00:24:27.288 14:38:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:27.549 14:38:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
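The `waitforlisten` step above blocks until the freshly launched `nvmf_tgt` is accepting RPCs on `/var/tmp/spdk.sock` (note the `max_retries=100` local in the trace). The core of that idea, sketched as a standalone helper (`wait_for_socket` is my name, and this version only polls for the socket path; judging from the log, the real helper also tracks the target pid):

```shell
#!/usr/bin/env bash
# Poll until a UNIX-domain socket path appears, with a bounded retry
# count, in the spirit of waitforlisten from autotest_common.sh.
wait_for_socket() {
    local path=$1 max_retries=${2:-100}
    local i
    for (( i = 0; i < max_retries; i++ )); do
        [ -S "$path" ] && return 0   # -S: path exists and is a socket
        sleep 0.1
    done
    echo "timed out waiting for $path" >&2
    return 1
}

# e.g.: wait_for_socket /var/tmp/spdk.sock && rpc.py framework_get_config
```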
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:27.549 14:38:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:27.809 [2024-10-14 14:38:08.392040] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:27.809 14:38:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:28.069 [2024-10-14 14:38:08.568484] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:28.069 14:38:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:28.069 [2024-10-14 14:38:08.745025] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:28.069 14:38:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:24:28.069 14:38:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3499570 00:24:28.069 14:38:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:28.069 14:38:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3499570 /var/tmp/bdevperf.sock 00:24:28.069 14:38:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 
-- # '[' -z 3499570 ']' 00:24:28.070 14:38:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:28.070 14:38:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:28.070 14:38:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:28.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:28.070 14:38:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:28.070 14:38:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:28.331 14:38:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:28.331 14:38:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:24:28.331 14:38:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:28.904 NVMe0n1 00:24:28.904 14:38:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:29.166 00:24:29.166 14:38:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3499901 00:24:29.166 14:38:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:29.166 14:38:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 
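The failover exercise itself can be reconstructed from the log: bdevperf attaches the same subsystem as `NVMe0` over two TCP paths (ports 4420 and 4421) with `-x failover`, then, while I/O runs, the test removes the active listener so the bdev fails over to the surviving path. A dry-run condensation of that RPC sequence (paths and NQN copied from the log; `run` only prints):

```shell
#!/usr/bin/env bash
# Sketch of the multipath-failover flow driven by host/failover.sh:
# two paths to one subsystem, then drop the first listener mid-I/O.
RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
SOCK=/var/tmp/bdevperf.sock
NQN=nqn.2016-06.io.spdk:cnode1

run() { echo "+ $*"; }   # replace the echo with "$@" to issue the RPCs

# two paths to the same subsystem, multipath policy 'failover'
run "$RPC" -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n "$NQN" -x failover
run "$RPC" -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4421 -f ipv4 -n "$NQN" -x failover

# while bdevperf I/O is in flight, drop the first listener to force failover
run "$RPC" nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
```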
00:24:30.108 14:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:30.368 [2024-10-14 14:38:10.843775] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1634410 is same with the state(6) to be set 00:24:30.368 [previous tcp.c:1773 message repeated with successive timestamps from 14:38:10.843817 through 14:38:10.844030] 00:24:30.369 [2024-10-14
14:38:10.844035] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1634410 is same with the state(6) to be set 00:24:30.369 [2024-10-14 14:38:10.844039] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1634410 is same with the state(6) to be set 00:24:30.369 [2024-10-14 14:38:10.844043] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1634410 is same with the state(6) to be set 00:24:30.369 [2024-10-14 14:38:10.844048] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1634410 is same with the state(6) to be set 00:24:30.369 [2024-10-14 14:38:10.844053] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1634410 is same with the state(6) to be set 00:24:30.369 [2024-10-14 14:38:10.844058] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1634410 is same with the state(6) to be set 00:24:30.369 [2024-10-14 14:38:10.844067] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1634410 is same with the state(6) to be set 00:24:30.369 [2024-10-14 14:38:10.844072] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1634410 is same with the state(6) to be set 00:24:30.369 [2024-10-14 14:38:10.844076] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1634410 is same with the state(6) to be set 00:24:30.369 [2024-10-14 14:38:10.844081] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1634410 is same with the state(6) to be set 00:24:30.369 [2024-10-14 14:38:10.844085] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1634410 is same with the state(6) to be set 00:24:30.369 [2024-10-14 14:38:10.844089] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1634410 is same with the state(6) to be set 00:24:30.369 [2024-10-14 14:38:10.844094] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1634410 is same with the state(6) to be set 00:24:30.369 [2024-10-14 14:38:10.844099] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1634410 is same with the state(6) to be set 00:24:30.369 [2024-10-14 14:38:10.844104] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1634410 is same with the state(6) to be set 00:24:30.369 [2024-10-14 14:38:10.844108] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1634410 is same with the state(6) to be set 00:24:30.369 [2024-10-14 14:38:10.844112] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1634410 is same with the state(6) to be set 00:24:30.369 14:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:24:33.665 14:38:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:33.665 00:24:33.665 14:38:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:33.925 [2024-10-14 14:38:14.490877] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16351c0 is same with the state(6) to be set 00:24:33.925 [2024-10-14 14:38:14.490912] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16351c0 is same with the state(6) to be set 00:24:33.925 [2024-10-14 14:38:14.490918] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16351c0 is same with the state(6) to be set 00:24:33.925 [2024-10-14 14:38:14.490923] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x16351c0 is same with the state(6) to be set 00:24:33.926 [2024-10-14 14:38:14.490928] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16351c0 is same with the state(6) to be set 00:24:33.926 [2024-10-14 14:38:14.490932] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16351c0 is same with the state(6) to be set 00:24:33.926 [2024-10-14 14:38:14.490937] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16351c0 is same with the state(6) to be set 00:24:33.926 [2024-10-14 14:38:14.490942] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16351c0 is same with the state(6) to be set 00:24:33.926 [2024-10-14 14:38:14.490947] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16351c0 is same with the state(6) to be set 00:24:33.926 [2024-10-14 14:38:14.490951] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16351c0 is same with the state(6) to be set 00:24:33.926 [2024-10-14 14:38:14.490956] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16351c0 is same with the state(6) to be set 00:24:33.926 [2024-10-14 14:38:14.490960] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16351c0 is same with the state(6) to be set 00:24:33.926 [2024-10-14 14:38:14.490965] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16351c0 is same with the state(6) to be set 00:24:33.926 [2024-10-14 14:38:14.490969] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16351c0 is same with the state(6) to be set 00:24:33.926 [2024-10-14 14:38:14.490974] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16351c0 is same with the state(6) to be set 00:24:33.926 [2024-10-14 14:38:14.490978] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16351c0 is 
same with the state(6) to be set 00:24:33.926 [2024-10-14 14:38:14.490983] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16351c0 is same with the state(6) to be set 00:24:33.926 [2024-10-14 14:38:14.490987] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16351c0 is same with the state(6) to be set 00:24:33.926 [2024-10-14 14:38:14.490992] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16351c0 is same with the state(6) to be set 00:24:33.926 [2024-10-14 14:38:14.490996] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16351c0 is same with the state(6) to be set 00:24:33.926 [2024-10-14 14:38:14.491006] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16351c0 is same with the state(6) to be set 00:24:33.926 [2024-10-14 14:38:14.491011] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16351c0 is same with the state(6) to be set 00:24:33.926 [2024-10-14 14:38:14.491016] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16351c0 is same with the state(6) to be set 00:24:33.926 [2024-10-14 14:38:14.491021] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16351c0 is same with the state(6) to be set 00:24:33.926 [2024-10-14 14:38:14.491025] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16351c0 is same with the state(6) to be set 00:24:33.926 [2024-10-14 14:38:14.491030] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16351c0 is same with the state(6) to be set 00:24:33.926 [2024-10-14 14:38:14.491034] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16351c0 is same with the state(6) to be set 00:24:33.926 [2024-10-14 14:38:14.491039] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16351c0 is same with the state(6) to be 
set 00:24:33.926 [2024-10-14 14:38:14.491044] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16351c0 is same with the state(6) to be set 00:24:33.926 [2024-10-14 14:38:14.491048] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16351c0 is same with the state(6) to be set 00:24:33.926 [2024-10-14 14:38:14.491053] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16351c0 is same with the state(6) to be set 00:24:33.926 [2024-10-14 14:38:14.491058] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16351c0 is same with the state(6) to be set 00:24:33.926 [2024-10-14 14:38:14.491067] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16351c0 is same with the state(6) to be set 00:24:33.926 [2024-10-14 14:38:14.491072] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16351c0 is same with the state(6) to be set 00:24:33.926 [2024-10-14 14:38:14.491076] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16351c0 is same with the state(6) to be set 00:24:33.926 [2024-10-14 14:38:14.491081] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16351c0 is same with the state(6) to be set 00:24:33.926 [2024-10-14 14:38:14.491085] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16351c0 is same with the state(6) to be set 00:24:33.926 [2024-10-14 14:38:14.491090] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16351c0 is same with the state(6) to be set 00:24:33.926 [2024-10-14 14:38:14.491094] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16351c0 is same with the state(6) to be set 00:24:33.926 [2024-10-14 14:38:14.491099] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16351c0 is same with the state(6) to be set 00:24:33.926 [2024-10-14 
14:38:14.491103] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16351c0 is same with the state(6) to be set 00:24:33.926 [2024-10-14 14:38:14.491108] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16351c0 is same with the state(6) to be set 00:24:33.926 [2024-10-14 14:38:14.491113] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16351c0 is same with the state(6) to be set 00:24:33.926 [2024-10-14 14:38:14.491117] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16351c0 is same with the state(6) to be set 00:24:33.926 [2024-10-14 14:38:14.491121] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16351c0 is same with the state(6) to be set 00:24:33.926 [2024-10-14 14:38:14.491126] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16351c0 is same with the state(6) to be set 00:24:33.926 [2024-10-14 14:38:14.491130] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16351c0 is same with the state(6) to be set 00:24:33.926 [2024-10-14 14:38:14.491139] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16351c0 is same with the state(6) to be set 00:24:33.926 [2024-10-14 14:38:14.491144] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16351c0 is same with the state(6) to be set 00:24:33.926 [2024-10-14 14:38:14.491148] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16351c0 is same with the state(6) to be set 00:24:33.926 [2024-10-14 14:38:14.491153] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16351c0 is same with the state(6) to be set 00:24:33.926 [2024-10-14 14:38:14.491159] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16351c0 is same with the state(6) to be set 00:24:33.926 [2024-10-14 14:38:14.491164] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16351c0 is same with the state(6) to be set 00:24:33.926 [2024-10-14 14:38:14.491169] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16351c0 is same with the state(6) to be set 00:24:33.926 [2024-10-14 14:38:14.491173] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16351c0 is same with the state(6) to be set 00:24:33.926 [2024-10-14 14:38:14.491177] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16351c0 is same with the state(6) to be set 00:24:33.926 [2024-10-14 14:38:14.491182] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16351c0 is same with the state(6) to be set 00:24:33.926 [2024-10-14 14:38:14.491187] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16351c0 is same with the state(6) to be set 00:24:33.926 [2024-10-14 14:38:14.491191] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16351c0 is same with the state(6) to be set 00:24:33.926 [2024-10-14 14:38:14.491195] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16351c0 is same with the state(6) to be set 00:24:33.926 [2024-10-14 14:38:14.491200] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16351c0 is same with the state(6) to be set 00:24:33.926 [2024-10-14 14:38:14.491204] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16351c0 is same with the state(6) to be set 00:24:33.926 [2024-10-14 14:38:14.491209] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16351c0 is same with the state(6) to be set 00:24:33.926 [2024-10-14 14:38:14.491214] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16351c0 is same with the state(6) to be set 00:24:33.926 [2024-10-14 14:38:14.491220] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16351c0 is same with the state(6) to be set 00:24:33.926 [2024-10-14 14:38:14.491225] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16351c0 is same with the state(6) to be set 00:24:33.926 [2024-10-14 14:38:14.491230] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16351c0 is same with the state(6) to be set 00:24:33.926 [2024-10-14 14:38:14.491237] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16351c0 is same with the state(6) to be set 00:24:33.926 [2024-10-14 14:38:14.491242] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16351c0 is same with the state(6) to be set 00:24:33.926 [2024-10-14 14:38:14.491247] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16351c0 is same with the state(6) to be set 00:24:33.926 [2024-10-14 14:38:14.491251] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16351c0 is same with the state(6) to be set 00:24:33.926 [2024-10-14 14:38:14.491256] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16351c0 is same with the state(6) to be set 00:24:33.926 [2024-10-14 14:38:14.491261] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16351c0 is same with the state(6) to be set 00:24:33.926 [2024-10-14 14:38:14.491265] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16351c0 is same with the state(6) to be set 00:24:33.926 [2024-10-14 14:38:14.491271] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16351c0 is same with the state(6) to be set 00:24:33.926 [2024-10-14 14:38:14.491276] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16351c0 is same with the state(6) to be set 00:24:33.926 [2024-10-14 14:38:14.491281] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16351c0 is same with the state(6) to be set 00:24:33.926 [2024-10-14 14:38:14.491286] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16351c0 is same with the state(6) to be set 00:24:33.926 [2024-10-14 14:38:14.491292] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16351c0 is same with the state(6) to be set 00:24:33.926 [2024-10-14 14:38:14.491298] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16351c0 is same with the state(6) to be set 00:24:33.926 14:38:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:24:37.226 14:38:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:37.226 [2024-10-14 14:38:17.683241] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:37.226 14:38:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:24:38.166 14:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:38.166 [2024-10-14 14:38:18.873263] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1636130 is same with the state(6) to be set 00:24:38.166 [2024-10-14 14:38:18.873298] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1636130 is same with the state(6) to be set 00:24:38.166 [2024-10-14 14:38:18.873304] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1636130 is same with the state(6) to be set 00:24:38.166 [2024-10-14 14:38:18.873309] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1636130 is same with the state(6) to be set 00:24:38.166 [2024-10-14 14:38:18.873313] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1636130 is same with the state(6) to be set 00:24:38.166 [2024-10-14 14:38:18.873318] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1636130 is same with the state(6) to be set 00:24:38.166 [2024-10-14 14:38:18.873323] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1636130 is same with the state(6) to be set 00:24:38.426 14:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 3499901 00:24:45.017 { 00:24:45.017 "results": [ 00:24:45.017 { 00:24:45.017 "job": "NVMe0n1", 00:24:45.017 "core_mask": "0x1", 00:24:45.017 "workload": "verify", 00:24:45.017 "status": "finished", 00:24:45.017 "verify_range": { 00:24:45.017 "start": 0, 00:24:45.017 "length": 16384 00:24:45.017 }, 00:24:45.017 "queue_depth": 128, 00:24:45.017 "io_size": 4096, 00:24:45.017 "runtime": 15.046017, 00:24:45.017 "iops": 11141.021574015236, 00:24:45.017 "mibps": 43.519615523497016, 00:24:45.017 "io_failed": 3989, 00:24:45.017 "io_timeout": 0, 00:24:45.017 "avg_latency_us": 11164.925462628995, 00:24:45.017 "min_latency_us": 781.6533333333333, 00:24:45.017 "max_latency_us": 44346.026666666665 00:24:45.017 } 00:24:45.017 ], 00:24:45.017 "core_count": 1 00:24:45.017 } 00:24:45.017 14:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 3499570 00:24:45.017 14:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 3499570 ']' 00:24:45.017 14:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 3499570 00:24:45.017 14:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:24:45.017 14:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:45.017 14:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3499570 00:24:45.017 14:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:45.017 14:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:45.017 14:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3499570' 00:24:45.017 killing process with pid 3499570 00:24:45.017 14:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 3499570 00:24:45.017 14:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 3499570 00:24:45.017 14:38:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:45.017 [2024-10-14 14:38:08.824609] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:24:45.017 [2024-10-14 14:38:08.824690] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3499570 ] 00:24:45.017 [2024-10-14 14:38:08.886824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:45.017 [2024-10-14 14:38:08.922824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:45.017 Running I/O for 15 seconds... 
00:24:45.017 10939.00 IOPS, 42.73 MiB/s [2024-10-14T12:38:25.744Z] [2024-10-14 14:38:10.844588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.017 [2024-10-14 14:38:10.844624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.017 [2024-10-14 14:38:10.844635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.017 [2024-10-14 14:38:10.844643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.017 [2024-10-14 14:38:10.844652] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.017 [2024-10-14 14:38:10.844659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.017 [2024-10-14 14:38:10.844668] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.017 [2024-10-14 14:38:10.844676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.017 [2024-10-14 14:38:10.844684] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10bde40 is same with the state(6) to be set 00:24:45.017 [2024-10-14 14:38:10.844745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:94696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.017 [2024-10-14 14:38:10.844755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.017 [2024-10-14 
14:38:10.844769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:94704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.017 [2024-10-14 14:38:10.844777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.017 [2024-10-14 14:38:10.844787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:94712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.017 [2024-10-14 14:38:10.844795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.017 [2024-10-14 14:38:10.844804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:94720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.017 [2024-10-14 14:38:10.844811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.017 [2024-10-14 14:38:10.844821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:94728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.017 [2024-10-14 14:38:10.844828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.017 [2024-10-14 14:38:10.844837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:94736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.017 [2024-10-14 14:38:10.844844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.017 [2024-10-14 14:38:10.844853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:94744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.017 [2024-10-14 14:38:10.844866] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.017 [2024-10-14 14:38:10.844875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:94752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.017 [2024-10-14 14:38:10.844883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.017 [2024-10-14 14:38:10.844892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:94760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.017 [2024-10-14 14:38:10.844899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.017 [2024-10-14 14:38:10.844908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:94768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.017 [2024-10-14 14:38:10.844916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.017 [2024-10-14 14:38:10.844925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:94776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.017 [2024-10-14 14:38:10.844932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.017 [2024-10-14 14:38:10.844941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:94784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.017 [2024-10-14 14:38:10.844949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.017 [2024-10-14 14:38:10.844958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:94792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.017 [2024-10-14 14:38:10.844965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.017 [2024-10-14 14:38:10.844974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:94800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.017 [2024-10-14 14:38:10.844982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.017 [2024-10-14 14:38:10.844992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:94808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.017 [2024-10-14 14:38:10.845000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.018 [2024-10-14 14:38:10.845009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:94816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.018 [2024-10-14 14:38:10.845017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.018 [2024-10-14 14:38:10.845026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:94824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.018 [2024-10-14 14:38:10.845034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.018 [2024-10-14 14:38:10.845043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:94832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.018 [2024-10-14 14:38:10.845051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.018 [2024-10-14 14:38:10.845060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:94840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.018 [2024-10-14 14:38:10.845073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.018 [2024-10-14 14:38:10.845088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:94848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.018 [2024-10-14 14:38:10.845096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.018 [2024-10-14 14:38:10.845105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:94856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.018 [2024-10-14 14:38:10.845112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.018 [2024-10-14 14:38:10.845122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:94864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.018 [2024-10-14 14:38:10.845129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.018 [2024-10-14 14:38:10.845139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:94872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.018 [2024-10-14 14:38:10.845146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.018 [2024-10-14 14:38:10.845155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:94880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.018 [2024-10-14 14:38:10.845162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.018 [2024-10-14 14:38:10.845172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:94888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.018 [2024-10-14 14:38:10.845180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.018 [2024-10-14 14:38:10.845189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:94896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.018 [2024-10-14 14:38:10.845196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.018 [2024-10-14 14:38:10.845205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:94904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.018 [2024-10-14 14:38:10.845213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.018 [2024-10-14 14:38:10.845223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:94912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.018 [2024-10-14 14:38:10.845230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.018 [2024-10-14 14:38:10.845240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:94920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.018 [2024-10-14 14:38:10.845247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.018 [2024-10-14 14:38:10.845256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:94928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.018 [2024-10-14 14:38:10.845263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.018 [2024-10-14 14:38:10.845273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:94936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.018 [2024-10-14 14:38:10.845281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.018 [2024-10-14 14:38:10.845290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:94944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.018 [2024-10-14 14:38:10.845299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.018 [2024-10-14 14:38:10.845308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:94952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.018 [2024-10-14 14:38:10.845315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.018 [2024-10-14 14:38:10.845325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:94960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.018 [2024-10-14 14:38:10.845333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.018 [2024-10-14 14:38:10.845342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:94968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.018 [2024-10-14 14:38:10.845349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.018 [2024-10-14 14:38:10.845358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:94976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.018 [2024-10-14 14:38:10.845366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.018 [2024-10-14 14:38:10.845375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:94984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.018 [2024-10-14 14:38:10.845382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.018 [2024-10-14 14:38:10.845392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:94992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.018 [2024-10-14 14:38:10.845399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.018 [2024-10-14 14:38:10.845408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:95000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.018 [2024-10-14 14:38:10.845415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.018 [2024-10-14 14:38:10.845425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:95008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.018 [2024-10-14 14:38:10.845432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.018 [2024-10-14 14:38:10.845442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:95016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.018 [2024-10-14 14:38:10.845449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.018 [2024-10-14 14:38:10.845458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:95024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.018 [2024-10-14 14:38:10.845465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.018 [2024-10-14 14:38:10.845475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:95032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.018 [2024-10-14 14:38:10.845482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.018 [2024-10-14 14:38:10.845492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:95040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.018 [2024-10-14 14:38:10.845499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.018 [2024-10-14 14:38:10.845509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:95048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.018 [2024-10-14 14:38:10.845519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.018 [2024-10-14 14:38:10.845528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:95056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.018 [2024-10-14 14:38:10.845536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.018 [2024-10-14 14:38:10.845545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:95064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.018 [2024-10-14 14:38:10.845552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.018 [2024-10-14 14:38:10.845562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:95072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.018 [2024-10-14 14:38:10.845569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.018 [2024-10-14 14:38:10.845579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:95080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.018 [2024-10-14 14:38:10.845586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.018 [2024-10-14 14:38:10.845596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:95088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.018 [2024-10-14 14:38:10.845603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.018 [2024-10-14 14:38:10.845612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:95096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.018 [2024-10-14 14:38:10.845619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.018 [2024-10-14 14:38:10.845629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:95104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.018 [2024-10-14 14:38:10.845636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.018 [2024-10-14 14:38:10.845646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:95112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.018 [2024-10-14 14:38:10.845653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.019 [2024-10-14 14:38:10.845662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:95120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.019 [2024-10-14 14:38:10.845669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.019 [2024-10-14 14:38:10.845678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:95128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.019 [2024-10-14 14:38:10.845686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.019 [2024-10-14 14:38:10.845695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:95136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.019 [2024-10-14 14:38:10.845702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.019 [2024-10-14 14:38:10.845711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:95144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.019 [2024-10-14 14:38:10.845718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.019 [2024-10-14 14:38:10.845730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:95152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.019 [2024-10-14 14:38:10.845738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.019 [2024-10-14 14:38:10.845747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:95160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.019 [2024-10-14 14:38:10.845754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.019 [2024-10-14 14:38:10.845764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:95168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.019 [2024-10-14 14:38:10.845771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.019 [2024-10-14 14:38:10.845782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:95176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.019 [2024-10-14 14:38:10.845789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.019 [2024-10-14 14:38:10.845798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:95184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.019 [2024-10-14 14:38:10.845805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.019 [2024-10-14 14:38:10.845814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:95192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.019 [2024-10-14 14:38:10.845822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.019 [2024-10-14 14:38:10.845832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:95200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.019 [2024-10-14 14:38:10.845839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.019 [2024-10-14 14:38:10.845849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:95208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.019 [2024-10-14 14:38:10.845856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.019 [2024-10-14 14:38:10.845865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:95216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.019 [2024-10-14 14:38:10.845872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.019 [2024-10-14 14:38:10.845881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:95224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.019 [2024-10-14 14:38:10.845888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.019 [2024-10-14 14:38:10.845898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:95232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.019 [2024-10-14 14:38:10.845905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.019 [2024-10-14 14:38:10.845914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:95240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.019 [2024-10-14 14:38:10.845921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.019 [2024-10-14 14:38:10.845931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:95248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.019 [2024-10-14 14:38:10.845939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.019 [2024-10-14 14:38:10.845949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:95256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.019 [2024-10-14 14:38:10.845956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.019 [2024-10-14 14:38:10.845965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:95264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.019 [2024-10-14 14:38:10.845973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.019 [2024-10-14 14:38:10.845982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:95272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.019 [2024-10-14 14:38:10.845990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.019 [2024-10-14 14:38:10.845999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:95280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.019 [2024-10-14 14:38:10.846007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.019 [2024-10-14 14:38:10.846016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:95288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.019 [2024-10-14 14:38:10.846025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.019 [2024-10-14 14:38:10.846034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:95296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.019 [2024-10-14 14:38:10.846041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.019 [2024-10-14 14:38:10.846051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:95304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.019 [2024-10-14 14:38:10.846058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.019 [2024-10-14 14:38:10.846072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:95312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.019 [2024-10-14 14:38:10.846080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.019 [2024-10-14 14:38:10.846089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:95320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.019 [2024-10-14 14:38:10.846096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.019 [2024-10-14 14:38:10.846106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:95328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.019 [2024-10-14 14:38:10.846113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.019 [2024-10-14 14:38:10.846123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:95336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.019 [2024-10-14 14:38:10.846130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.019 [2024-10-14 14:38:10.846139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:95344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.019 [2024-10-14 14:38:10.846146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.019 [2024-10-14 14:38:10.846155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:95352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.019 [2024-10-14 14:38:10.846164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.019 [2024-10-14 14:38:10.846174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:94680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:45.019 [2024-10-14 14:38:10.846181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.019 [2024-10-14 14:38:10.846190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:94688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:45.019 [2024-10-14 14:38:10.846197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.019 [2024-10-14 14:38:10.846207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:95360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.019 [2024-10-14 14:38:10.846214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.019 [2024-10-14 14:38:10.846224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:95368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.019 [2024-10-14 14:38:10.846231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.019 [2024-10-14 14:38:10.846240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:95376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.019 [2024-10-14 14:38:10.846247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.019 [2024-10-14 14:38:10.846256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:95384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.019 [2024-10-14 14:38:10.846264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.019 [2024-10-14 14:38:10.846273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:95392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.019 [2024-10-14 14:38:10.846280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.019 [2024-10-14 14:38:10.846290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:95400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.019 [2024-10-14 14:38:10.846297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.019 [2024-10-14 14:38:10.846306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:95408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.020 [2024-10-14 14:38:10.846313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.020 [2024-10-14 14:38:10.846323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:95416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.020 [2024-10-14 14:38:10.846331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.020 [2024-10-14 14:38:10.846340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:95424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.020 [2024-10-14 14:38:10.846347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.020 [2024-10-14 14:38:10.846357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:95432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.020 [2024-10-14 14:38:10.846364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.020 [2024-10-14 14:38:10.846376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:95440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.020 [2024-10-14 14:38:10.846383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.020 [2024-10-14 14:38:10.846392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:95448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.020 [2024-10-14 14:38:10.846399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.020 [2024-10-14 14:38:10.846410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:95456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.020 [2024-10-14 14:38:10.846417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.020 [2024-10-14 14:38:10.846427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:95464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.020 [2024-10-14 14:38:10.846434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.020 [2024-10-14 14:38:10.846443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:95472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.020 [2024-10-14 14:38:10.846450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.020 [2024-10-14 14:38:10.846461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:95480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.020 [2024-10-14 14:38:10.846468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.020 [2024-10-14 14:38:10.846478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:95488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.020 [2024-10-14 14:38:10.846485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.020 [2024-10-14 14:38:10.846494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:95496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.020 [2024-10-14 14:38:10.846501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.020 [2024-10-14 14:38:10.846510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:95504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.020 [2024-10-14 14:38:10.846518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.020 [2024-10-14 14:38:10.846528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:95512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.020 [2024-10-14 14:38:10.846536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.020 [2024-10-14 14:38:10.846545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.020 [2024-10-14 14:38:10.846552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.020 [2024-10-14 14:38:10.846562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:95528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.020 [2024-10-14 14:38:10.846569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.020 [2024-10-14 14:38:10.846579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:95536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.020 [2024-10-14 14:38:10.846588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.020 [2024-10-14 14:38:10.846597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:95544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.020 [2024-10-14 14:38:10.846604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.020 [2024-10-14 14:38:10.846614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:95552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.020 [2024-10-14 14:38:10.846621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.020 [2024-10-14 14:38:10.846630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:95560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.020 [2024-10-14 14:38:10.846637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.020 [2024-10-14 14:38:10.846646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:95568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.020 [2024-10-14 14:38:10.846653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.020 [2024-10-14 14:38:10.846663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:95576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.020 [2024-10-14 14:38:10.846670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.020 [2024-10-14 14:38:10.846680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:95584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.020 [2024-10-14 14:38:10.846687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.020 [2024-10-14 14:38:10.846696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:95592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.020 [2024-10-14 14:38:10.846702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.020 [2024-10-14 14:38:10.846712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:95600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.020 [2024-10-14 14:38:10.846719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.020 [2024-10-14 14:38:10.846729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:95608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.020 [2024-10-14 14:38:10.846736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.020 [2024-10-14 14:38:10.846745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:95616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.020 [2024-10-14 14:38:10.846752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.020 [2024-10-14 14:38:10.846763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:95624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.020 [2024-10-14 14:38:10.846770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.020 [2024-10-14 14:38:10.846780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:95632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.020 [2024-10-14 14:38:10.846787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.020 [2024-10-14 14:38:10.846797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:95640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.020 [2024-10-14 14:38:10.846805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.020 [2024-10-14 14:38:10.846814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:95648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.020 [2024-10-14 14:38:10.846821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.020 [2024-10-14 14:38:10.846830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:95656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.020 [2024-10-14 14:38:10.846838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.020 [2024-10-14 14:38:10.846847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:95664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.020 [2024-10-14 14:38:10.846854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.020 [2024-10-14 14:38:10.846865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:95672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.020 [2024-10-14 14:38:10.846873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.020 [2024-10-14 14:38:10.846885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:95680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:45.020 [2024-10-14 14:38:10.846892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:45.020 [2024-10-14 14:38:10.846901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:95688 len:8 SGL DATA BLOCK OFFSET
0x0 len:0x1000 00:24:45.020 [2024-10-14 14:38:10.846908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.020 [2024-10-14 14:38:10.846928] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:45.020 [2024-10-14 14:38:10.846935] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:45.020 [2024-10-14 14:38:10.846941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95696 len:8 PRP1 0x0 PRP2 0x0 00:24:45.020 [2024-10-14 14:38:10.846949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.020 [2024-10-14 14:38:10.846988] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x10deeb0 was disconnected and freed. reset controller. 00:24:45.020 [2024-10-14 14:38:10.846998] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:45.020 [2024-10-14 14:38:10.847007] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.020 [2024-10-14 14:38:10.850550] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.021 [2024-10-14 14:38:10.850574] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10bde40 (9): Bad file descriptor 00:24:45.021 [2024-10-14 14:38:10.890252] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:24:45.021 11107.00 IOPS, 43.39 MiB/s [2024-10-14T12:38:25.748Z] 11250.00 IOPS, 43.95 MiB/s [2024-10-14T12:38:25.748Z] 11214.50 IOPS, 43.81 MiB/s [2024-10-14T12:38:25.748Z] [2024-10-14 14:38:14.492186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:38608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.021 [2024-10-14 14:38:14.492223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION pairs repeated for lba 38616 through 39256 (len:8 each, cid varies) elided ...]
00:24:45.023 [2024-10-14 14:38:14.493642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:39320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.023 [2024-10-14 14:38:14.493650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical WRITE / ABORTED - SQ DELETION pairs repeated for lba 39328 through 39376 (len:8 each, cid varies) elided ...]
00:24:45.023 [2024-10-14 14:38:14.493776] nvme_qpair.c: 243:nvme_io_qpair_print_command:
*NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:39384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.023 [2024-10-14 14:38:14.493784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.023 [2024-10-14 14:38:14.493794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:39392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.023 [2024-10-14 14:38:14.493801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.023 [2024-10-14 14:38:14.493811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:39400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.023 [2024-10-14 14:38:14.493818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.023 [2024-10-14 14:38:14.493827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:39408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.023 [2024-10-14 14:38:14.493834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.023 [2024-10-14 14:38:14.493844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:39416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.023 [2024-10-14 14:38:14.493851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.023 [2024-10-14 14:38:14.493860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:39424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.023 [2024-10-14 14:38:14.493867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:45.023 [2024-10-14 14:38:14.493876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:39432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.023 [2024-10-14 14:38:14.493884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.023 [2024-10-14 14:38:14.493893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:39440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.023 [2024-10-14 14:38:14.493900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.023 [2024-10-14 14:38:14.493910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:39448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.023 [2024-10-14 14:38:14.493917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.023 [2024-10-14 14:38:14.493926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:39456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.023 [2024-10-14 14:38:14.493933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.023 [2024-10-14 14:38:14.493944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:39464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.023 [2024-10-14 14:38:14.493951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.023 [2024-10-14 14:38:14.493961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:39472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.023 [2024-10-14 14:38:14.493968] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.023 [2024-10-14 14:38:14.493980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:39480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.023 [2024-10-14 14:38:14.493987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.023 [2024-10-14 14:38:14.493997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:39488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.023 [2024-10-14 14:38:14.494004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.023 [2024-10-14 14:38:14.494013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.023 [2024-10-14 14:38:14.494020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.023 [2024-10-14 14:38:14.494030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:39504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.023 [2024-10-14 14:38:14.494037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.023 [2024-10-14 14:38:14.494047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:39512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.023 [2024-10-14 14:38:14.494054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.024 [2024-10-14 14:38:14.494066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 
nsid:1 lba:39520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.024 [2024-10-14 14:38:14.494075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.024 [2024-10-14 14:38:14.494084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:39528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.024 [2024-10-14 14:38:14.494091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.024 [2024-10-14 14:38:14.494100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:39536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.024 [2024-10-14 14:38:14.494108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.024 [2024-10-14 14:38:14.494118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.024 [2024-10-14 14:38:14.494126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.024 [2024-10-14 14:38:14.494135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.024 [2024-10-14 14:38:14.494142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.024 [2024-10-14 14:38:14.494151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:39560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.024 [2024-10-14 14:38:14.494159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.024 
[2024-10-14 14:38:14.494169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:39568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.024 [2024-10-14 14:38:14.494176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.024 [2024-10-14 14:38:14.494185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:39576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.024 [2024-10-14 14:38:14.494192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.024 [2024-10-14 14:38:14.494203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:39584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.024 [2024-10-14 14:38:14.494211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.024 [2024-10-14 14:38:14.494221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:39592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.024 [2024-10-14 14:38:14.494228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.024 [2024-10-14 14:38:14.494237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:39600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.024 [2024-10-14 14:38:14.494244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.024 [2024-10-14 14:38:14.494253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:39608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.024 [2024-10-14 14:38:14.494260] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.024 [2024-10-14 14:38:14.494271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:39616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.024 [2024-10-14 14:38:14.494278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.024 [2024-10-14 14:38:14.494287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:39624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.024 [2024-10-14 14:38:14.494294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.024 [2024-10-14 14:38:14.494303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:39264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.024 [2024-10-14 14:38:14.494311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.024 [2024-10-14 14:38:14.494321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:39272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.024 [2024-10-14 14:38:14.494328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.024 [2024-10-14 14:38:14.494338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:39280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.024 [2024-10-14 14:38:14.494345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.024 [2024-10-14 14:38:14.494355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 
lba:39288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.024 [2024-10-14 14:38:14.494362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.024 [2024-10-14 14:38:14.494373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:39296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.024 [2024-10-14 14:38:14.494380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.024 [2024-10-14 14:38:14.494390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:39304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.024 [2024-10-14 14:38:14.494396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.024 [2024-10-14 14:38:14.494420] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:45.024 [2024-10-14 14:38:14.494429] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:45.024 [2024-10-14 14:38:14.494437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:39312 len:8 PRP1 0x0 PRP2 0x0 00:24:45.024 [2024-10-14 14:38:14.494447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.024 [2024-10-14 14:38:14.494485] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x10e0f40 was disconnected and freed. reset controller. 
00:24:45.024 [2024-10-14 14:38:14.494495] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:24:45.024 [2024-10-14 14:38:14.494515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.024 [2024-10-14 14:38:14.494524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.024 [2024-10-14 14:38:14.494532] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.024 [2024-10-14 14:38:14.494539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.024 [2024-10-14 14:38:14.494547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.024 [2024-10-14 14:38:14.494554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.024 [2024-10-14 14:38:14.494562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.024 [2024-10-14 14:38:14.494571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.024 [2024-10-14 14:38:14.494579] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:45.024 [2024-10-14 14:38:14.494603] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10bde40 (9): Bad file descriptor 00:24:45.024 [2024-10-14 14:38:14.498170] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.024 [2024-10-14 14:38:14.542617] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:45.024 11169.80 IOPS, 43.63 MiB/s [2024-10-14T12:38:25.751Z] 11172.50 IOPS, 43.64 MiB/s [2024-10-14T12:38:25.751Z] 11146.00 IOPS, 43.54 MiB/s [2024-10-14T12:38:25.751Z] 11170.88 IOPS, 43.64 MiB/s [2024-10-14T12:38:25.751Z] 11153.22 IOPS, 43.57 MiB/s [2024-10-14T12:38:25.751Z] [2024-10-14 14:38:18.873472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:45688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.024 [2024-10-14 14:38:18.873507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.024 [2024-10-14 14:38:18.873524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:45696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.024 [2024-10-14 14:38:18.873532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.024 [2024-10-14 14:38:18.873542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:45704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.024 [2024-10-14 14:38:18.873550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.024 [2024-10-14 14:38:18.873560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:45712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.024 [2024-10-14 14:38:18.873568] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.024 [2024-10-14 14:38:18.873578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:45720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.024 [2024-10-14 14:38:18.873591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.024 [2024-10-14 14:38:18.873600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:45728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.024 [2024-10-14 14:38:18.873608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.024 [2024-10-14 14:38:18.873617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:45736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.024 [2024-10-14 14:38:18.873624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.024 [2024-10-14 14:38:18.873634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:45744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.024 [2024-10-14 14:38:18.873641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.024 [2024-10-14 14:38:18.873650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:46480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.024 [2024-10-14 14:38:18.873658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.025 [2024-10-14 14:38:18.873668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 
lba:46488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.025 [2024-10-14 14:38:18.873675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.025 [2024-10-14 14:38:18.873685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:46496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.025 [2024-10-14 14:38:18.873692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.025 [2024-10-14 14:38:18.873702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:46504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.025 [2024-10-14 14:38:18.873710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.025 [2024-10-14 14:38:18.873719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:46512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.025 [2024-10-14 14:38:18.873726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.025 [2024-10-14 14:38:18.873735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:46520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.025 [2024-10-14 14:38:18.873743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.025 [2024-10-14 14:38:18.873753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:46528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.025 [2024-10-14 14:38:18.873761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.025 [2024-10-14 
14:38:18.873771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:46536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.025 [2024-10-14 14:38:18.873778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.025 [2024-10-14 14:38:18.873788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:46544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.025 [2024-10-14 14:38:18.873798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.025 [2024-10-14 14:38:18.873814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:46552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.025 [2024-10-14 14:38:18.873823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.025 [2024-10-14 14:38:18.873834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:46560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.025 [2024-10-14 14:38:18.873844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.025 [2024-10-14 14:38:18.873854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:46568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.025 [2024-10-14 14:38:18.873862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.025 [2024-10-14 14:38:18.873872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:46576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.025 [2024-10-14 14:38:18.873880] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.025 [2024-10-14 14:38:18.873889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:46584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.025 [2024-10-14 14:38:18.873896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.025 [2024-10-14 14:38:18.873907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:46592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.025 [2024-10-14 14:38:18.873914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.025 [2024-10-14 14:38:18.873923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:46600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.025 [2024-10-14 14:38:18.873931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.025 [2024-10-14 14:38:18.873941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:46608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.025 [2024-10-14 14:38:18.873949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.025 [2024-10-14 14:38:18.873959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:45752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.025 [2024-10-14 14:38:18.873967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.025 [2024-10-14 14:38:18.873978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:45760 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:45.025 [2024-10-14 14:38:18.873985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.025 [2024-10-14 14:38:18.873995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:45768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.025 [2024-10-14 14:38:18.874003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.025 [2024-10-14 14:38:18.874013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:45776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.025 [2024-10-14 14:38:18.874020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.025 [2024-10-14 14:38:18.874030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:45784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.025 [2024-10-14 14:38:18.874040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.025 [2024-10-14 14:38:18.874050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:45792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.025 [2024-10-14 14:38:18.874058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.025 [2024-10-14 14:38:18.874072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:45800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.025 [2024-10-14 14:38:18.874079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.025 [2024-10-14 14:38:18.874088] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:45808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.025 [2024-10-14 14:38:18.874096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.025 [2024-10-14 14:38:18.874106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:46616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.025 [2024-10-14 14:38:18.874114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.025 [2024-10-14 14:38:18.874123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:46624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.025 [2024-10-14 14:38:18.874131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.025 [2024-10-14 14:38:18.874141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:46632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.025 [2024-10-14 14:38:18.874149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.025 [2024-10-14 14:38:18.874158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:46640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.025 [2024-10-14 14:38:18.874166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.025 [2024-10-14 14:38:18.874176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:45816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.025 [2024-10-14 14:38:18.874183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.025 [2024-10-14 14:38:18.874194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:45824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.025 [2024-10-14 14:38:18.874201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.025 [2024-10-14 14:38:18.874212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:45832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.025 [2024-10-14 14:38:18.874219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.025 [2024-10-14 14:38:18.874229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:45840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.025 [2024-10-14 14:38:18.874237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.025 [2024-10-14 14:38:18.874249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:45848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.025 [2024-10-14 14:38:18.874257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.025 [2024-10-14 14:38:18.874267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:45856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.025 [2024-10-14 14:38:18.874277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.025 [2024-10-14 14:38:18.874287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:45864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:45.025 [2024-10-14 14:38:18.874296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.025 [2024-10-14 14:38:18.874305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:45872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.025 [2024-10-14 14:38:18.874314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.025 [2024-10-14 14:38:18.874323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:45880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.025 [2024-10-14 14:38:18.874331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.025 [2024-10-14 14:38:18.874341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:45888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.025 [2024-10-14 14:38:18.874349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.025 [2024-10-14 14:38:18.874359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:45896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.026 [2024-10-14 14:38:18.874366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.026 [2024-10-14 14:38:18.874376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:45904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.026 [2024-10-14 14:38:18.874384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.026 [2024-10-14 14:38:18.874394] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:45912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.026 [2024-10-14 14:38:18.874401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.026 [2024-10-14 14:38:18.874411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:45920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.026 [2024-10-14 14:38:18.874419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.026 [2024-10-14 14:38:18.874429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:45928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.026 [2024-10-14 14:38:18.874437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.026 [2024-10-14 14:38:18.874448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:46648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.026 [2024-10-14 14:38:18.874456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.026 [2024-10-14 14:38:18.874466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:45936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.026 [2024-10-14 14:38:18.874474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.026 [2024-10-14 14:38:18.874484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:45944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.026 [2024-10-14 14:38:18.874492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.026 [2024-10-14 14:38:18.874504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:45952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.026 [2024-10-14 14:38:18.874512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.026 [2024-10-14 14:38:18.874521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:45960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.026 [2024-10-14 14:38:18.874529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.026 [2024-10-14 14:38:18.874538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:45968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.026 [2024-10-14 14:38:18.874546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.026 [2024-10-14 14:38:18.874556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:45976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.026 [2024-10-14 14:38:18.874564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.026 [2024-10-14 14:38:18.874574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:45984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.026 [2024-10-14 14:38:18.874581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.026 [2024-10-14 14:38:18.874591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:45992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:45.026 [2024-10-14 14:38:18.874600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.026 [2024-10-14 14:38:18.874610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:46000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.026 [2024-10-14 14:38:18.874617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.026 [2024-10-14 14:38:18.874627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:46008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.026 [2024-10-14 14:38:18.874635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.026 [2024-10-14 14:38:18.874644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:46016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.026 [2024-10-14 14:38:18.874652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.026 [2024-10-14 14:38:18.874662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:46024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.026 [2024-10-14 14:38:18.874670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.026 [2024-10-14 14:38:18.874679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:46032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.026 [2024-10-14 14:38:18.874686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.026 [2024-10-14 14:38:18.874696] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:46040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.026 [2024-10-14 14:38:18.874704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.026 [2024-10-14 14:38:18.874713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:46048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.026 [2024-10-14 14:38:18.874723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.026 [2024-10-14 14:38:18.874732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:46056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.026 [2024-10-14 14:38:18.874740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.026 [2024-10-14 14:38:18.874750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:46064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.026 [2024-10-14 14:38:18.874758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.026 [2024-10-14 14:38:18.874767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:46072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.026 [2024-10-14 14:38:18.874775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.026 [2024-10-14 14:38:18.874784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:46080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.026 [2024-10-14 14:38:18.874791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.026 [2024-10-14 14:38:18.874801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:46088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.026 [2024-10-14 14:38:18.874809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.026 [2024-10-14 14:38:18.874819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:46096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.026 [2024-10-14 14:38:18.874826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.026 [2024-10-14 14:38:18.874836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:46104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.026 [2024-10-14 14:38:18.874844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.026 [2024-10-14 14:38:18.874854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:46112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.026 [2024-10-14 14:38:18.874862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.026 [2024-10-14 14:38:18.874871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.026 [2024-10-14 14:38:18.874879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.026 [2024-10-14 14:38:18.874889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:46128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.026 
[2024-10-14 14:38:18.874897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.026 [2024-10-14 14:38:18.874907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:46136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.026 [2024-10-14 14:38:18.874915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.026 [2024-10-14 14:38:18.874925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:46144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.026 [2024-10-14 14:38:18.874932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.026 [2024-10-14 14:38:18.874944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:46152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.027 [2024-10-14 14:38:18.874952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.027 [2024-10-14 14:38:18.874961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:46656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.027 [2024-10-14 14:38:18.874969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.027 [2024-10-14 14:38:18.874979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:46664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.027 [2024-10-14 14:38:18.874986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.027 [2024-10-14 14:38:18.874996] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:46672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.027 [2024-10-14 14:38:18.875003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.027 [2024-10-14 14:38:18.875013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:46680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.027 [2024-10-14 14:38:18.875021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.027 [2024-10-14 14:38:18.875030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:46688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.027 [2024-10-14 14:38:18.875038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.027 [2024-10-14 14:38:18.875047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:46696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.027 [2024-10-14 14:38:18.875055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.027 [2024-10-14 14:38:18.875068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:46704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.027 [2024-10-14 14:38:18.875077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.027 [2024-10-14 14:38:18.875086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:46160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.027 [2024-10-14 14:38:18.875094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:45.027 [2024-10-14 14:38:18.875104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:46168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.027 [2024-10-14 14:38:18.875111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.027 [2024-10-14 14:38:18.875121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:46176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.027 [2024-10-14 14:38:18.875128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.027 [2024-10-14 14:38:18.875138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:46184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.027 [2024-10-14 14:38:18.875145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.027 [2024-10-14 14:38:18.875155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:46192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.027 [2024-10-14 14:38:18.875162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.027 [2024-10-14 14:38:18.875174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:46200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.027 [2024-10-14 14:38:18.875182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.027 [2024-10-14 14:38:18.875191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:46208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.027 [2024-10-14 
14:38:18.875199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.027 [2024-10-14 14:38:18.875209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:46216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.027 [2024-10-14 14:38:18.875216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.027 [2024-10-14 14:38:18.875226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:46224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.027 [2024-10-14 14:38:18.875234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.027 [2024-10-14 14:38:18.875243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:46232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.027 [2024-10-14 14:38:18.875251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.027 [2024-10-14 14:38:18.875261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:46240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.027 [2024-10-14 14:38:18.875268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.027 [2024-10-14 14:38:18.875277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:46248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.027 [2024-10-14 14:38:18.875285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.027 [2024-10-14 14:38:18.875295] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:25 nsid:1 lba:46256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.027 [2024-10-14 14:38:18.875302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.027 [2024-10-14 14:38:18.875312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:46264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.027 [2024-10-14 14:38:18.875319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.027 [2024-10-14 14:38:18.875329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:46272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.027 [2024-10-14 14:38:18.875336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.027 [2024-10-14 14:38:18.875346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:46280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.027 [2024-10-14 14:38:18.875354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.027 [2024-10-14 14:38:18.875363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:46288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.027 [2024-10-14 14:38:18.875371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.027 [2024-10-14 14:38:18.875381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:46296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.027 [2024-10-14 14:38:18.875391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:45.027 [2024-10-14 14:38:18.875401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:46304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.027 [2024-10-14 14:38:18.875409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.027 [2024-10-14 14:38:18.875418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:46312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.027 [2024-10-14 14:38:18.875426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.027 [2024-10-14 14:38:18.875436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:46320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.027 [2024-10-14 14:38:18.875444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.027 [2024-10-14 14:38:18.875454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:46328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.027 [2024-10-14 14:38:18.875461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.027 [2024-10-14 14:38:18.875471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:46336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.027 [2024-10-14 14:38:18.875478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.027 [2024-10-14 14:38:18.875488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:46344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.027 [2024-10-14 
14:38:18.875496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.027 [2024-10-14 14:38:18.875506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:46352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.027 [2024-10-14 14:38:18.875514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.027 [2024-10-14 14:38:18.875523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:46360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.027 [2024-10-14 14:38:18.875531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.027 [2024-10-14 14:38:18.875540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:46368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.027 [2024-10-14 14:38:18.875548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.027 [2024-10-14 14:38:18.875558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:46376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.027 [2024-10-14 14:38:18.875565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.027 [2024-10-14 14:38:18.875575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:46384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.027 [2024-10-14 14:38:18.875583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.027 [2024-10-14 14:38:18.875593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:49 nsid:1 lba:46392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.027 [2024-10-14 14:38:18.875600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.027 [2024-10-14 14:38:18.875612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:46400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.027 [2024-10-14 14:38:18.875620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.027 [2024-10-14 14:38:18.875630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:46408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.028 [2024-10-14 14:38:18.875637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.028 [2024-10-14 14:38:18.875647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:46416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.028 [2024-10-14 14:38:18.875654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.028 [2024-10-14 14:38:18.875664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:46424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.028 [2024-10-14 14:38:18.875672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.028 [2024-10-14 14:38:18.875681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:46432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.028 [2024-10-14 14:38:18.875689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:24:45.028 [2024-10-14 14:38:18.875698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:46440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.028 [2024-10-14 14:38:18.875706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.028 [2024-10-14 14:38:18.875716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:46448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.028 [2024-10-14 14:38:18.875724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.028 [2024-10-14 14:38:18.875734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:46456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.028 [2024-10-14 14:38:18.875741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.028 [2024-10-14 14:38:18.875751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:46464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.028 [2024-10-14 14:38:18.875760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.028 [2024-10-14 14:38:18.875782] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:45.028 [2024-10-14 14:38:18.875789] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:45.028 [2024-10-14 14:38:18.875796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46472 len:8 PRP1 0x0 PRP2 0x0 00:24:45.028 [2024-10-14 14:38:18.875805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:45.028 [2024-10-14 14:38:18.875842] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x10e0da0 was disconnected and freed. reset controller. 00:24:45.028 [2024-10-14 14:38:18.875852] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:24:45.028 [2024-10-14 14:38:18.875873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.028 [2024-10-14 14:38:18.875882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.028 [2024-10-14 14:38:18.875894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.028 [2024-10-14 14:38:18.875901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.028 [2024-10-14 14:38:18.875910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.028 [2024-10-14 14:38:18.875918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.028 [2024-10-14 14:38:18.875926] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.028 [2024-10-14 14:38:18.875934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.028 [2024-10-14 14:38:18.875943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:45.028 [2024-10-14 14:38:18.879531] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.028 [2024-10-14 14:38:18.879559] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10bde40 (9): Bad file descriptor 00:24:45.028 [2024-10-14 14:38:18.918514] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:45.028 11145.10 IOPS, 43.54 MiB/s [2024-10-14T12:38:25.755Z] 11145.27 IOPS, 43.54 MiB/s [2024-10-14T12:38:25.755Z] 11151.00 IOPS, 43.56 MiB/s [2024-10-14T12:38:25.755Z] 11153.38 IOPS, 43.57 MiB/s [2024-10-14T12:38:25.755Z] 11166.86 IOPS, 43.62 MiB/s [2024-10-14T12:38:25.755Z] 11175.13 IOPS, 43.65 MiB/s 00:24:45.028 Latency(us) 00:24:45.028 [2024-10-14T12:38:25.755Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:45.028 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:45.028 Verification LBA range: start 0x0 length 0x4000 00:24:45.028 NVMe0n1 : 15.05 11141.02 43.52 265.12 0.00 11164.93 781.65 44346.03 00:24:45.028 [2024-10-14T12:38:25.755Z] =================================================================================================================== 00:24:45.028 [2024-10-14T12:38:25.755Z] Total : 11141.02 43.52 265.12 0.00 11164.93 781.65 44346.03 00:24:45.028 Received shutdown signal, test time was about 15.000000 seconds 00:24:45.028 00:24:45.028 Latency(us) 00:24:45.028 [2024-10-14T12:38:25.755Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:45.028 [2024-10-14T12:38:25.755Z] =================================================================================================================== 00:24:45.028 [2024-10-14T12:38:25.755Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:45.028 14:38:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:24:45.028 14:38:25 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:24:45.028 14:38:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:24:45.028 14:38:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3502841 00:24:45.028 14:38:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:24:45.028 14:38:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3502841 /var/tmp/bdevperf.sock 00:24:45.028 14:38:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 3502841 ']' 00:24:45.028 14:38:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:45.028 14:38:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:45.028 14:38:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:45.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:24:45.028 14:38:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:45.028 14:38:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:45.028 14:38:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:45.028 14:38:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:24:45.028 14:38:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:45.028 [2024-10-14 14:38:25.455225] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:45.028 14:38:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:45.028 [2024-10-14 14:38:25.631650] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:45.028 14:38:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:45.289 NVMe0n1 00:24:45.289 14:38:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:45.858 00:24:45.858 14:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f 
ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:46.119 00:24:46.119 14:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:46.119 14:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:24:46.380 14:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:46.380 14:38:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:24:49.678 14:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:49.678 14:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:24:49.678 14:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3503835 00:24:49.678 14:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:49.678 14:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 3503835 00:24:50.621 { 00:24:50.621 "results": [ 00:24:50.621 { 00:24:50.621 "job": "NVMe0n1", 00:24:50.621 "core_mask": "0x1", 00:24:50.621 "workload": "verify", 00:24:50.621 "status": "finished", 00:24:50.621 "verify_range": { 00:24:50.621 "start": 0, 00:24:50.621 "length": 16384 00:24:50.621 }, 00:24:50.621 "queue_depth": 128, 00:24:50.621 "io_size": 4096, 00:24:50.621 "runtime": 1.006866, 00:24:50.621 "iops": 11059.068436117617, 00:24:50.621 "mibps": 43.19948607858444, 00:24:50.621 "io_failed": 0, 00:24:50.621 "io_timeout": 0, 00:24:50.621 "avg_latency_us": 
11507.753383026493, 00:24:50.621 "min_latency_us": 1617.92, 00:24:50.621 "max_latency_us": 11687.253333333334 00:24:50.621 } 00:24:50.621 ], 00:24:50.621 "core_count": 1 00:24:50.621 } 00:24:50.883 14:38:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:50.883 [2024-10-14 14:38:25.119643] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:24:50.883 [2024-10-14 14:38:25.119702] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3502841 ] 00:24:50.883 [2024-10-14 14:38:25.181260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:50.883 [2024-10-14 14:38:25.216912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:50.883 [2024-10-14 14:38:27.012829] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:50.883 [2024-10-14 14:38:27.012877] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:50.883 [2024-10-14 14:38:27.012890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.883 [2024-10-14 14:38:27.012900] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:50.883 [2024-10-14 14:38:27.012908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.883 [2024-10-14 14:38:27.012917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:24:50.883 [2024-10-14 14:38:27.012924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.883 [2024-10-14 14:38:27.012933] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:50.883 [2024-10-14 14:38:27.012940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.883 [2024-10-14 14:38:27.012948] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.883 [2024-10-14 14:38:27.012978] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.883 [2024-10-14 14:38:27.012994] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x107be40 (9): Bad file descriptor 00:24:50.883 [2024-10-14 14:38:27.021491] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:50.883 Running I/O for 1 seconds... 
00:24:50.883 10976.00 IOPS, 42.88 MiB/s 00:24:50.883 Latency(us) 00:24:50.883 [2024-10-14T12:38:31.610Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:50.883 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:50.883 Verification LBA range: start 0x0 length 0x4000 00:24:50.883 NVMe0n1 : 1.01 11059.07 43.20 0.00 0.00 11507.75 1617.92 11687.25 00:24:50.883 [2024-10-14T12:38:31.610Z] =================================================================================================================== 00:24:50.883 [2024-10-14T12:38:31.610Z] Total : 11059.07 43.20 0.00 0.00 11507.75 1617.92 11687.25 00:24:50.883 14:38:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:50.883 14:38:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:24:50.883 14:38:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:51.145 14:38:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:24:51.145 14:38:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:51.407 14:38:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:51.407 14:38:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:24:54.712 14:38:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:54.712 14:38:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:24:54.712 14:38:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 3502841 00:24:54.712 14:38:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 3502841 ']' 00:24:54.712 14:38:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 3502841 00:24:54.712 14:38:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:24:54.712 14:38:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:54.712 14:38:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3502841 00:24:54.712 14:38:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:54.712 14:38:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:54.712 14:38:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3502841' 00:24:54.712 killing process with pid 3502841 00:24:54.712 14:38:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 3502841 00:24:54.712 14:38:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 3502841 00:24:54.974 14:38:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:24:54.974 14:38:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:54.974 14:38:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:24:54.974 14:38:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:54.974 14:38:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:24:54.974 14:38:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@514 -- # nvmfcleanup 00:24:54.974 14:38:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:24:54.974 14:38:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:54.974 14:38:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:24:54.974 14:38:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:54.974 14:38:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:54.974 rmmod nvme_tcp 00:24:55.236 rmmod nvme_fabrics 00:24:55.236 rmmod nvme_keyring 00:24:55.236 14:38:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:55.236 14:38:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:24:55.236 14:38:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:24:55.236 14:38:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@515 -- # '[' -n 3499207 ']' 00:24:55.236 14:38:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # killprocess 3499207 00:24:55.236 14:38:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 3499207 ']' 00:24:55.236 14:38:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 3499207 00:24:55.236 14:38:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:24:55.236 14:38:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:55.236 14:38:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3499207 00:24:55.236 14:38:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # 
process_name=reactor_1 00:24:55.236 14:38:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:55.236 14:38:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3499207' 00:24:55.236 killing process with pid 3499207 00:24:55.236 14:38:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 3499207 00:24:55.236 14:38:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 3499207 00:24:55.236 14:38:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:55.236 14:38:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:24:55.236 14:38:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:24:55.236 14:38:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:24:55.236 14:38:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:24:55.236 14:38:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # iptables-save 00:24:55.236 14:38:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # iptables-restore 00:24:55.236 14:38:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:55.236 14:38:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:55.236 14:38:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:55.236 14:38:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:55.236 14:38:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:57.786 14:38:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:57.786 00:24:57.786 real 0m39.159s 00:24:57.786 user 1m59.339s 00:24:57.786 sys 
0m8.498s 00:24:57.786 14:38:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:57.786 14:38:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:57.786 ************************************ 00:24:57.786 END TEST nvmf_failover 00:24:57.786 ************************************ 00:24:57.786 14:38:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:57.786 14:38:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:57.786 14:38:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:57.786 14:38:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.787 ************************************ 00:24:57.787 START TEST nvmf_host_discovery 00:24:57.787 ************************************ 00:24:57.787 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:57.787 * Looking for test storage... 
00:24:57.787 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:57.787 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:57.787 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:57.787 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:24:57.787 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:57.787 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:57.787 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:57.787 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:57.787 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:24:57.787 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:24:57.787 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:24:57.787 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:24:57.787 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:24:57.787 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:24:57.787 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:24:57.787 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:57.787 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:24:57.787 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:24:57.787 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:24:57.787 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:57.787 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:24:57.787 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:24:57.787 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:57.787 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:24:57.787 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:24:57.787 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:24:57.787 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:24:57.787 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:57.787 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:24:57.787 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:24:57.787 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:57.787 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:57.787 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:24:57.787 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:57.787 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:57.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.787 --rc genhtml_branch_coverage=1 00:24:57.787 --rc genhtml_function_coverage=1 00:24:57.787 --rc 
genhtml_legend=1 00:24:57.787 --rc geninfo_all_blocks=1 00:24:57.787 --rc geninfo_unexecuted_blocks=1 00:24:57.787 00:24:57.787 ' 00:24:57.787 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:57.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.787 --rc genhtml_branch_coverage=1 00:24:57.787 --rc genhtml_function_coverage=1 00:24:57.787 --rc genhtml_legend=1 00:24:57.787 --rc geninfo_all_blocks=1 00:24:57.787 --rc geninfo_unexecuted_blocks=1 00:24:57.787 00:24:57.787 ' 00:24:57.787 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:57.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.787 --rc genhtml_branch_coverage=1 00:24:57.787 --rc genhtml_function_coverage=1 00:24:57.787 --rc genhtml_legend=1 00:24:57.787 --rc geninfo_all_blocks=1 00:24:57.787 --rc geninfo_unexecuted_blocks=1 00:24:57.787 00:24:57.787 ' 00:24:57.787 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:57.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.787 --rc genhtml_branch_coverage=1 00:24:57.787 --rc genhtml_function_coverage=1 00:24:57.787 --rc genhtml_legend=1 00:24:57.787 --rc geninfo_all_blocks=1 00:24:57.787 --rc geninfo_unexecuted_blocks=1 00:24:57.787 00:24:57.787 ' 00:24:57.787 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:57.787 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:24:57.787 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:57.787 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:57.787 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:57.787 14:38:38 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:57.787 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:57.787 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:57.787 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:57.787 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:57.787 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:57.787 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:57.787 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:57.787 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:57.787 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:57.787 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:57.787 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:57.787 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:57.787 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:57.787 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:24:57.787 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:57.787 14:38:38 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:57.787 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:57.787 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.787 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.787 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.787 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:24:57.788 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.788 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:24:57.788 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:57.788 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:57.788 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:57.788 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:57.788 14:38:38 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:57.788 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:57.788 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:57.788 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:57.788 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:57.788 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:57.788 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:24:57.788 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:24:57.788 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:24:57.788 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:24:57.788 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:24:57.788 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:24:57.788 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:24:57.788 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:24:57.788 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:57.788 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:57.788 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:57.788 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # remove_spdk_ns 
00:24:57.788 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:57.788 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:57.788 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:57.788 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:24:57.788 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:24:57.788 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:24:57.788 14:38:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:05.937 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:05.937 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:25:05.937 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:05.937 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:05.937 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:05.937 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:05.937 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:05.937 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:25:05.937 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:05.937 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:25:05.937 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:25:05.937 
14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:25:05.937 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:25:05.937 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:25:05.937 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:25:05.937 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:05.937 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:05.937 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:05.937 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:05.937 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:05.937 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:05.937 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:05.937 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:05.937 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:05.937 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:05.938 14:38:45 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:05.938 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:05.938 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:05.938 Found net devices under 0000:31:00.0: cvl_0_0 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:05.938 Found net devices under 0000:31:00.1: cvl_0_1 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # is_hw=yes 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:05.938 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:05.938 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.664 ms 00:25:05.938 00:25:05.938 --- 10.0.0.2 ping statistics --- 00:25:05.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:05.938 rtt min/avg/max/mdev = 0.664/0.664/0.664/0.000 ms 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:05.938 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:05.938 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:25:05.938 00:25:05.938 --- 10.0.0.1 ping statistics --- 00:25:05.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:05.938 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # return 0 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:05.938 
14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # nvmfpid=3509117 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # waitforlisten 3509117 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 3509117 ']' 00:25:05.938 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:05.939 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:05.939 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:05.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
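The `nvmf_tcp_init` steps traced above (nvmf/common.sh@250–291) set up a two-endpoint TCP topology on one host: `cvl_0_0` is moved into a fresh network namespace as the target at 10.0.0.2, `cvl_0_1` stays in the root namespace as the initiator at 10.0.0.1, an iptables rule admits the NVMe/TCP port, and a ping in each direction verifies connectivity. A dry-run sketch, with interface names and addresses taken from this log (commands are collected and printed rather than executed, since the real ones need root):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the nvmf_tcp_init sequence from this trace.
TARGET_IF=cvl_0_0        # moves into the namespace, serves 10.0.0.2
INITIATOR_IF=cvl_0_1     # stays in the root namespace as 10.0.0.1
NS=cvl_0_0_ns_spdk

cmds=()
run() { cmds+=("$*"); printf '+ %s\n' "$*"; }   # record instead of execute

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                       # root ns -> target ns
run ip netns exec "$NS" ping -c 1 10.0.0.1   # target ns -> root ns
```

The namespace boundary is what lets a single machine exercise a real TCP path: traffic between 10.0.0.1 and 10.0.0.2 traverses the NIC pair rather than loopback.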
00:25:05.939 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:05.939 14:38:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:05.939 [2024-10-14 14:38:45.946167] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:25:05.939 [2024-10-14 14:38:45.946263] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:05.939 [2024-10-14 14:38:46.039195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:05.939 [2024-10-14 14:38:46.089061] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:05.939 [2024-10-14 14:38:46.089123] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:05.939 [2024-10-14 14:38:46.089131] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:05.939 [2024-10-14 14:38:46.089138] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:05.939 [2024-10-14 14:38:46.089145] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
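Pieced together from the `nvmf/common.sh` fragments in this trace, the target command line is built up as a bash array: @29 appends the SHM instance id and tracepoint mask, @266 prepares the namespace wrapper, and @293 prepends it, yielding the `ip netns exec ... nvmf_tgt -i 0 -e 0xFFFF -m 0x2` invocation shown at @506. A sketch (the initial array contents, i.e. the bare `nvmf_tgt` path, are inferred from that final command; `-m 0x2` is the mask passed to `nvmfappstart`):

```shell
# Sketch of how the trace assembles the nvmf_tgt invocation.
NVMF_APP_SHM_ID=0
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
# Base binary path inferred from the command actually run (@506).
NVMF_APP=(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt)

# nvmf/common.sh@29: instance id + enable all tracepoint groups
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)

# nvmf/common.sh@266: run target commands inside the test namespace
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")

# nvmf/common.sh@293: prepend the namespace wrapper to the app command
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")

echo "${NVMF_APP[*]} -m 0x2"
```

Keeping the command as an array (rather than a flat string) preserves word boundaries when the wrapper is prepended, which is why the trace shows `"${NVMF_APP[@]}"` expansions throughout.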
00:25:05.939 [2024-10-14 14:38:46.089987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:06.200 14:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:06.200 14:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:25:06.200 14:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:25:06.200 14:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:06.200 14:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:06.200 14:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:06.200 14:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:06.200 14:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.200 14:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:06.200 [2024-10-14 14:38:46.798926] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:06.200 14:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.200 14:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:25:06.200 14:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.200 14:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:06.200 [2024-10-14 14:38:46.811159] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:06.200 14:38:46 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.200 14:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:25:06.200 14:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.200 14:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:06.200 null0 00:25:06.200 14:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.200 14:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:06.200 14:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.200 14:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:06.200 null1 00:25:06.200 14:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.200 14:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:25:06.200 14:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.200 14:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:06.200 14:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.200 14:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3509353 00:25:06.200 14:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:06.200 14:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 3509353 /tmp/host.sock 00:25:06.200 14:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@831 -- # '[' -z 3509353 ']' 00:25:06.200 14:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:25:06.200 14:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:06.201 14:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:06.201 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:06.201 14:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:06.201 14:38:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:06.201 [2024-10-14 14:38:46.907655] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:25:06.201 [2024-10-14 14:38:46.907716] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3509353 ] 00:25:06.462 [2024-10-14 14:38:46.973842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:06.462 [2024-10-14 14:38:47.017083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:06.462 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:06.462 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:25:06.462 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:06.462 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:06.462 
14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.462 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:06.462 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.462 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:25:06.462 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.462 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:06.462 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.462 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:25:06.462 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:25:06.462 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:06.462 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:06.462 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.462 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:06.462 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:06.462 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:06.462 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.462 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:25:06.462 14:38:47 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:25:06.462 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:06.462 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:06.462 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.462 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:06.462 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:06.462 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:06.724 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.724 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:25:06.724 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:25:06.724 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.724 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:06.724 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.724 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:25:06.724 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:06.724 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:06.724 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.724 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 
00:25:06.724 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:06.724 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:06.724 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.724 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:25:06.724 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:25:06.724 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:06.724 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:06.724 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.724 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:06.724 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:06.724 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:06.724 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.724 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:25:06.724 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:25:06.724 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.724 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:06.724 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.724 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:25:06.724 
14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:25:06.724 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:25:06.724 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:06.724 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:25:06.724 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:06.724 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:25:06.724 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:06.724 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]]
00:25:06.724 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list
00:25:06.724 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:06.724 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:25:06.724 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:06.725 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:25:06.725 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:06.725 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:25:06.725 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:06.986 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]]
00:25:06.986 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:25:06.986 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:06.986 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:06.986 [2024-10-14 14:38:47.460747] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:06.986 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:06.986 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names
00:25:06.986 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:25:06.986 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:06.986 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:25:06.986 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:06.986 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:25:06.986 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:25:06.986 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:06.986 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]]
00:25:06.987 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list
00:25:06.987 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:06.987 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:25:06.987 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:06.987 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:25:06.987 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:06.987 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:25:06.987 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:06.987 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]]
00:25:06.987 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0
00:25:06.987 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:25:06.987 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:25:06.987 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:25:06.987 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:25:06.987 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:25:06.987 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:25:06.987 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count
00:25:06.987 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:25:06.987 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:25:06.987 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:06.987 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:06.987 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:06.987 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:25:06.987 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0
00:25:06.987 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count ))
00:25:06.987 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:25:06.987 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
00:25:06.987 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:06.987 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:06.987 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:06.987 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:25:06.987 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:25:06.987 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:25:06.987 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:25:06.987 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:25:06.987 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names
00:25:06.987 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:25:06.987 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:25:06.987 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:25:06.987 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:06.987 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:06.987 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:25:06.987 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:06.987 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]]
00:25:06.987 14:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1
00:25:07.559 [2024-10-14 14:38:48.177245] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:25:07.559 [2024-10-14 14:38:48.177268] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:25:07.559 [2024-10-14 14:38:48.177282] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:25:07.559 [2024-10-14 14:38:48.265549] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0
00:25:07.819 [2024-10-14 14:38:48.368823] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:25:07.819 [2024-10-14 14:38:48.368845] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:25:08.080 14:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:25:08.080 14:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:25:08.080 14:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names
00:25:08.080 14:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:25:08.080 14:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:25:08.080 14:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:08.080 14:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:25:08.080 14:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:08.080 14:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:25:08.080 14:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:08.080 14:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:08.080 14:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:25:08.080 14:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]'
00:25:08.080 14:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]'
00:25:08.080 14:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:25:08.080 14:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:25:08.080 14:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]'
00:25:08.080 14:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list
00:25:08.080 14:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:08.080 14:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:25:08.080 14:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:08.080 14:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:25:08.080 14:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:08.080 14:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:25:08.080 14:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:08.080 14:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]]
00:25:08.080 14:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:25:08.080 14:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]'
00:25:08.080 14:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]'
00:25:08.080 14:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:25:08.080 14:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:25:08.080 14:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]'
00:25:08.080 14:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0
00:25:08.080 14:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:25:08.080 14:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:08.080 14:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:08.080 14:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:25:08.080 14:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:25:08.080 14:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:25:08.080 14:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:08.341 14:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]]
00:25:08.341 14:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:25:08.341 14:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1
00:25:08.341 14:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1
00:25:08.341 14:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:25:08.341 14:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:25:08.341 14:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:25:08.341 14:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:25:08.341 14:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:25:08.341 14:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count
00:25:08.341 14:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:25:08.341 14:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:25:08.341 14:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:08.341 14:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:08.341 14:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:08.341 14:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1
00:25:08.341 14:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1
00:25:08.341 14:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count ))
00:25:08.341 14:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:25:08.341 14:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1
00:25:08.341 14:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:08.341 14:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:08.341 14:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:08.341 14:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:25:08.341 14:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:25:08.341 14:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:25:08.341 14:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:25:08.341 14:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:25:08.341 14:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list
00:25:08.341 14:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:25:08.341 14:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:08.341 14:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:08.341 14:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:25:08.341 14:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:08.341 14:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:25:08.602 14:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:08.602 14:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:25:08.602 14:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:25:08.602 14:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1
00:25:08.602 14:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1
00:25:08.602 14:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:25:08.602 14:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:25:08.602 14:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:25:08.602 14:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:25:08.602 14:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:25:08.602 14:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count
00:25:08.602 14:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1
00:25:08.602 14:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:25:08.602 14:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:08.602 14:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:08.602 14:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:08.602 14:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1
00:25:08.603 14:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:25:08.603 14:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count ))
00:25:08.603 14:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:25:08.603 14:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
00:25:08.603 14:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:08.603 14:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:08.603 [2024-10-14 14:38:49.197396] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:25:08.603 [2024-10-14 14:38:49.197582] bdev_nvme.c:7135:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer
00:25:08.603 [2024-10-14 14:38:49.197609] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:25:08.603 14:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:08.603 14:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:25:08.603 14:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:25:08.603 14:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:25:08.603 14:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:25:08.603 14:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:25:08.603 14:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names
00:25:08.603 14:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:25:08.603 14:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:25:08.603 14:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:08.603 14:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:25:08.603 14:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:08.603 14:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:25:08.603 14:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:08.603 14:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:08.603 14:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:25:08.603 14:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:25:08.603 14:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:25:08.603 14:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:25:08.603 14:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:25:08.603 14:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:25:08.603 14:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list
00:25:08.603 14:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:08.603 14:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:25:08.603 14:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:08.603 14:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:25:08.603 14:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:08.603 14:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:25:08.603 14:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:08.603 14:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:25:08.603 14:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:25:08.603 14:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'
00:25:08.603 14:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'
00:25:08.603 14:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:25:08.603 14:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:25:08.603 14:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]'
00:25:08.603 14:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0
00:25:08.603 14:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:25:08.603 14:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:25:08.603 14:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:08.603 14:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:08.603 14:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:25:08.603 14:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:25:08.603 [2024-10-14 14:38:49.324434] bdev_nvme.c:7077:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0
00:25:08.603 14:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:08.864 14:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]]
00:25:08.864 14:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1
00:25:08.864 [2024-10-14 14:38:49.586854] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:25:08.864 [2024-10-14 14:38:49.586873] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:25:08.864 [2024-10-14 14:38:49.586878] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
00:25:09.813 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:25:09.813 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]'
00:25:09.813 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0
00:25:09.813 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:25:09.813 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:25:09.813 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:09.813 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:25:09.813 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:09.813 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:25:09.813 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:09.813 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]]
00:25:09.813 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:25:09.813 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0
00:25:09.813 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:25:09.813 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:25:09.813 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:25:09.813 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:25:09.813 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:25:09.813 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:25:09.813 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count
00:25:09.814 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:25:09.814 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:25:09.814 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:09.814 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:09.814 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:09.814 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:25:09.814 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:25:09.814 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count ))
00:25:09.814 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:25:09.814 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:25:09.814 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:09.814 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:09.814 [2024-10-14 14:38:50.445353] bdev_nvme.c:7135:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer
00:25:09.814 [2024-10-14 14:38:50.445379] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:25:09.814 [2024-10-14 14:38:50.448498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:09.814 [2024-10-14 14:38:50.448518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:09.814 [2024-10-14 14:38:50.448529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:25:09.814 [2024-10-14 14:38:50.448536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:09.814 [2024-10-14 14:38:50.448545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:25:09.814 [2024-10-14 14:38:50.448552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:09.814 [2024-10-14 14:38:50.448560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:25:09.814 [2024-10-14 14:38:50.448567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:09.814 [2024-10-14 14:38:50.448575] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2454e50 is same with the state(6) to be set
00:25:09.814 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:09.814 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:25:09.814 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:25:09.814 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:25:09.814 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:25:09.814 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:25:09.814 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names
00:25:09.814 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:25:09.814 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:25:09.814 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:09.814 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:25:09.814 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:09.814 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:25:09.814 [2024-10-14 14:38:50.458501] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2454e50 (9): Bad file descriptor
00:25:09.814 [2024-10-14 14:38:50.468542] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:25:09.814 [2024-10-14 14:38:50.468793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.814 [2024-10-14 14:38:50.468808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2454e50 with addr=10.0.0.2, port=4420
00:25:09.814 [2024-10-14 14:38:50.468817] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2454e50 is same with the state(6) to be set
00:25:09.814 [2024-10-14 14:38:50.468829] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2454e50 (9): Bad file descriptor
00:25:09.814 [2024-10-14 14:38:50.468841] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:25:09.814 [2024-10-14 14:38:50.468849] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:25:09.814 [2024-10-14 14:38:50.468858] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:25:09.814 [2024-10-14 14:38:50.468870] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:09.814 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:09.814 [2024-10-14 14:38:50.478602] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:25:09.814 [2024-10-14 14:38:50.478956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.814 [2024-10-14 14:38:50.478969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2454e50 with addr=10.0.0.2, port=4420
00:25:09.814 [2024-10-14 14:38:50.478976] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2454e50 is same with the state(6) to be set
00:25:09.814 [2024-10-14 14:38:50.478987] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2454e50 (9): Bad file descriptor
00:25:09.814 [2024-10-14 14:38:50.478998] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:25:09.814 [2024-10-14 14:38:50.479004] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:25:09.814 [2024-10-14 14:38:50.479011] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:25:09.814 [2024-10-14 14:38:50.479022] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:09.814 [2024-10-14 14:38:50.488654] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:25:09.814 [2024-10-14 14:38:50.488978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:09.814 [2024-10-14 14:38:50.488991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2454e50 with addr=10.0.0.2, port=4420
00:25:09.814 [2024-10-14 14:38:50.488999] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2454e50 is same with the state(6) to be set
00:25:09.814 [2024-10-14 14:38:50.489010] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2454e50 (9): Bad file descriptor
00:25:09.814 [2024-10-14 14:38:50.489026] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:25:09.814 [2024-10-14 14:38:50.489032] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:25:09.814 [2024-10-14 14:38:50.489040] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:25:09.814 [2024-10-14 14:38:50.489055] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:09.814 [2024-10-14 14:38:50.498708] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:09.814 [2024-10-14 14:38:50.499067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.814 [2024-10-14 14:38:50.499081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2454e50 with addr=10.0.0.2, port=4420 00:25:09.814 [2024-10-14 14:38:50.499088] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2454e50 is same with the state(6) to be set 00:25:09.814 [2024-10-14 14:38:50.499099] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2454e50 (9): Bad file descriptor 00:25:09.814 [2024-10-14 14:38:50.499117] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:09.814 [2024-10-14 14:38:50.499124] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:09.814 [2024-10-14 14:38:50.499131] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:09.814 [2024-10-14 14:38:50.499141] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:09.814 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:09.814 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:09.814 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:09.814 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:09.814 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:09.814 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:09.814 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:09.814 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:09.814 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:09.814 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:09.814 [2024-10-14 14:38:50.508764] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:09.814 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.814 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:09.814 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:09.814 [2024-10-14 14:38:50.509299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.814 [2024-10-14 14:38:50.509339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2454e50 with 
addr=10.0.0.2, port=4420 00:25:09.814 [2024-10-14 14:38:50.509349] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2454e50 is same with the state(6) to be set 00:25:09.814 [2024-10-14 14:38:50.509368] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2454e50 (9): Bad file descriptor 00:25:09.814 [2024-10-14 14:38:50.509405] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:09.814 state 00:25:09.814 [2024-10-14 14:38:50.509428] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:09.814 [2024-10-14 14:38:50.509437] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:09.814 [2024-10-14 14:38:50.509452] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:09.814 [2024-10-14 14:38:50.518821] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:09.814 [2024-10-14 14:38:50.519295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.814 [2024-10-14 14:38:50.519334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2454e50 with addr=10.0.0.2, port=4420 00:25:09.814 [2024-10-14 14:38:50.519345] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2454e50 is same with the state(6) to be set 00:25:09.814 [2024-10-14 14:38:50.519364] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2454e50 (9): Bad file descriptor 00:25:09.814 [2024-10-14 14:38:50.519391] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:09.814 [2024-10-14 14:38:50.519400] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:09.814 [2024-10-14 14:38:50.519408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:09.814 [2024-10-14 14:38:50.519431] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:09.814 [2024-10-14 14:38:50.528881] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:09.814 [2024-10-14 14:38:50.529333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.814 [2024-10-14 14:38:50.529372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2454e50 with addr=10.0.0.2, port=4420 00:25:09.814 [2024-10-14 14:38:50.529384] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2454e50 is same with the state(6) to be set 00:25:09.814 [2024-10-14 14:38:50.529402] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2454e50 (9): Bad file descriptor 00:25:09.814 [2024-10-14 14:38:50.529429] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:09.814 [2024-10-14 14:38:50.529437] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:09.814 [2024-10-14 14:38:50.529446] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:09.814 [2024-10-14 14:38:50.529479] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:09.814 [2024-10-14 14:38:50.532436] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:25:09.814 [2024-10-14 14:38:50.532455] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:10.075 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.075 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:10.075 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:10.075 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:10.075 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:10.075 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:10.075 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:10.075 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:10.075 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:25:10.075 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:10.075 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:10.075 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.075 
14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:10.075 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:10.075 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:10.075 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.075 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:25:10.075 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:10.075 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:25:10.075 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:10.075 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:10.075 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:10.075 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:10.075 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:10.075 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:10.075 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:10.075 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:10.075 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:10.075 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.075 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:10.075 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.075 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:10.075 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:10.075 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:10.075 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:10.075 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:25:10.075 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.075 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:10.075 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.075 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:25:10.075 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:25:10.075 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:10.075 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:10.075 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:25:10.075 14:38:50 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:25:10.075 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:10.075 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:10.075 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.075 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:10.075 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:10.075 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:10.075 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.075 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:25:10.075 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:10.075 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:25:10.075 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:25:10.075 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:10.075 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:10.075 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:25:10.075 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:10.075 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:10.075 
14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:10.075 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.075 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:10.075 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:10.075 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:10.075 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.075 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:25:10.075 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:10.075 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:25:10.075 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:25:10.075 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:10.076 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:10.076 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:10.076 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:10.076 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:10.076 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:10.076 14:38:50 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:10.076 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.076 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:10.076 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:10.076 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.336 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:25:10.336 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:25:10.336 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:10.336 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:10.336 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:10.336 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.336 14:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:11.339 [2024-10-14 14:38:51.886011] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:11.339 [2024-10-14 14:38:51.886032] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:11.339 [2024-10-14 14:38:51.886045] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:11.339 [2024-10-14 14:38:51.973315] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] 
NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:25:11.655 [2024-10-14 14:38:52.283536] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:11.655 [2024-10-14 14:38:52.283571] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:11.655 14:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.655 14:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:11.655 14:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:25:11.655 14:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:11.655 14:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:11.655 14:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:11.655 14:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:11.655 14:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:11.655 14:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:11.655 14:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.655 14:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set 
+x 00:25:11.655 request: 00:25:11.655 { 00:25:11.655 "name": "nvme", 00:25:11.655 "trtype": "tcp", 00:25:11.655 "traddr": "10.0.0.2", 00:25:11.655 "adrfam": "ipv4", 00:25:11.655 "trsvcid": "8009", 00:25:11.655 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:11.655 "wait_for_attach": true, 00:25:11.655 "method": "bdev_nvme_start_discovery", 00:25:11.655 "req_id": 1 00:25:11.655 } 00:25:11.655 Got JSON-RPC error response 00:25:11.655 response: 00:25:11.655 { 00:25:11.655 "code": -17, 00:25:11.655 "message": "File exists" 00:25:11.655 } 00:25:11.655 14:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:11.655 14:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:25:11.655 14:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:11.655 14:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:11.655 14:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:11.655 14:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:25:11.655 14:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:11.655 14:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:11.655 14:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.655 14:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:11.655 14:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:11.655 14:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:11.655 14:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.655 14:38:52 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:25:11.655 14:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:25:11.655 14:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:11.655 14:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:11.655 14:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:11.655 14:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.655 14:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:11.655 14:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:11.924 14:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.924 14:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:11.924 14:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:11.924 14:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:25:11.924 14:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:11.924 14:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:11.924 14:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:11.924 14:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:11.924 14:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:11.924 14:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:11.924 14:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.924 14:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:11.924 request: 00:25:11.924 { 00:25:11.924 "name": "nvme_second", 00:25:11.924 "trtype": "tcp", 00:25:11.924 "traddr": "10.0.0.2", 00:25:11.924 "adrfam": "ipv4", 00:25:11.924 "trsvcid": "8009", 00:25:11.924 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:11.924 "wait_for_attach": true, 00:25:11.924 "method": "bdev_nvme_start_discovery", 00:25:11.924 "req_id": 1 00:25:11.924 } 00:25:11.924 Got JSON-RPC error response 00:25:11.924 response: 00:25:11.924 { 00:25:11.924 "code": -17, 00:25:11.924 "message": "File exists" 00:25:11.924 } 00:25:11.924 14:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:11.924 14:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:25:11.924 14:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:11.924 14:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:11.924 14:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:11.924 14:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:25:11.924 14:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:11.924 
14:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.924 14:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:11.924 14:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:11.924 14:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:11.924 14:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:11.924 14:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.924 14:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:25:11.924 14:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:25:11.924 14:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:11.924 14:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:11.924 14:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.924 14:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:11.924 14:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:11.924 14:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:11.924 14:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.924 14:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:11.924 14:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:11.924 14:38:52 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:25:11.924 14:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:11.924 14:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:11.924 14:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:11.924 14:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:11.924 14:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:11.924 14:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:11.924 14:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.924 14:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:12.866 [2024-10-14 14:38:53.539042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.866 [2024-10-14 14:38:53.539078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2454b50 with addr=10.0.0.2, port=8010 00:25:12.866 [2024-10-14 14:38:53.539094] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:12.866 [2024-10-14 14:38:53.539102] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:12.866 [2024-10-14 14:38:53.539110] bdev_nvme.c:7221:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:14.250 [2024-10-14 14:38:54.541447] 
posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.250 [2024-10-14 14:38:54.541489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2454b50 with addr=10.0.0.2, port=8010 00:25:14.250 [2024-10-14 14:38:54.541505] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:14.250 [2024-10-14 14:38:54.541513] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:14.250 [2024-10-14 14:38:54.541521] bdev_nvme.c:7221:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:14.822 [2024-10-14 14:38:55.543364] bdev_nvme.c:7196:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:25:14.822 request: 00:25:14.822 { 00:25:14.822 "name": "nvme_second", 00:25:14.822 "trtype": "tcp", 00:25:14.822 "traddr": "10.0.0.2", 00:25:14.822 "adrfam": "ipv4", 00:25:14.822 "trsvcid": "8010", 00:25:14.822 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:14.822 "wait_for_attach": false, 00:25:14.822 "attach_timeout_ms": 3000, 00:25:14.822 "method": "bdev_nvme_start_discovery", 00:25:14.822 "req_id": 1 00:25:14.822 } 00:25:14.822 Got JSON-RPC error response 00:25:14.822 response: 00:25:14.822 { 00:25:14.822 "code": -110, 00:25:14.822 "message": "Connection timed out" 00:25:14.822 } 00:25:14.822 14:38:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:14.822 14:38:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:25:14.822 14:38:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:14.822 14:38:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:14.822 14:38:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:15.082 14:38:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 
-- # get_discovery_ctrlrs 00:25:15.082 14:38:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:15.082 14:38:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:15.082 14:38:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.082 14:38:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:15.082 14:38:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:15.082 14:38:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:15.082 14:38:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.082 14:38:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:25:15.082 14:38:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:25:15.082 14:38:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3509353 00:25:15.082 14:38:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:25:15.082 14:38:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@514 -- # nvmfcleanup 00:25:15.082 14:38:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:25:15.082 14:38:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:15.082 14:38:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:25:15.082 14:38:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:15.082 14:38:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:15.082 rmmod nvme_tcp 00:25:15.082 rmmod nvme_fabrics 00:25:15.082 rmmod nvme_keyring 00:25:15.082 14:38:55 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:15.082 14:38:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:25:15.082 14:38:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:25:15.082 14:38:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@515 -- # '[' -n 3509117 ']' 00:25:15.082 14:38:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # killprocess 3509117 00:25:15.082 14:38:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 3509117 ']' 00:25:15.082 14:38:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 3509117 00:25:15.082 14:38:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:25:15.082 14:38:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:15.082 14:38:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3509117 00:25:15.082 14:38:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:15.082 14:38:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:15.082 14:38:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3509117' 00:25:15.082 killing process with pid 3509117 00:25:15.082 14:38:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 3509117 00:25:15.082 14:38:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 3509117 00:25:15.343 14:38:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:25:15.343 14:38:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:25:15.343 14:38:55 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:25:15.343 14:38:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:25:15.343 14:38:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # iptables-save 00:25:15.343 14:38:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:25:15.343 14:38:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # iptables-restore 00:25:15.343 14:38:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:15.343 14:38:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:15.343 14:38:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:15.343 14:38:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:15.343 14:38:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:17.255 14:38:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:17.255 00:25:17.255 real 0m19.830s 00:25:17.255 user 0m22.211s 00:25:17.255 sys 0m7.336s 00:25:17.255 14:38:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:17.255 14:38:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:17.255 ************************************ 00:25:17.255 END TEST nvmf_host_discovery 00:25:17.255 ************************************ 00:25:17.255 14:38:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:17.255 14:38:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:17.255 
14:38:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:17.255 14:38:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.516 ************************************ 00:25:17.516 START TEST nvmf_host_multipath_status 00:25:17.516 ************************************ 00:25:17.516 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:17.516 * Looking for test storage... 00:25:17.516 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:17.516 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:17.516 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lcov --version 00:25:17.516 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:17.516 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:17.516 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:17.516 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:17.516 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:17.516 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:25:17.516 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:25:17.516 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:25:17.516 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:25:17.516 14:38:58 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:25:17.516 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:25:17.516 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:25:17.516 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:17.516 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:25:17.516 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:25:17.516 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:17.516 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:17.516 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:25:17.516 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:25:17.516 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:17.516 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:25:17.516 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:25:17.516 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:25:17.516 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:25:17.516 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:17.516 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:25:17.516 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:25:17.516 
14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:17.516 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:17.516 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:25:17.516 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:17.516 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:17.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:17.516 --rc genhtml_branch_coverage=1 00:25:17.516 --rc genhtml_function_coverage=1 00:25:17.516 --rc genhtml_legend=1 00:25:17.516 --rc geninfo_all_blocks=1 00:25:17.516 --rc geninfo_unexecuted_blocks=1 00:25:17.516 00:25:17.516 ' 00:25:17.516 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:17.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:17.516 --rc genhtml_branch_coverage=1 00:25:17.516 --rc genhtml_function_coverage=1 00:25:17.516 --rc genhtml_legend=1 00:25:17.516 --rc geninfo_all_blocks=1 00:25:17.516 --rc geninfo_unexecuted_blocks=1 00:25:17.516 00:25:17.516 ' 00:25:17.516 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:17.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:17.516 --rc genhtml_branch_coverage=1 00:25:17.516 --rc genhtml_function_coverage=1 00:25:17.516 --rc genhtml_legend=1 00:25:17.516 --rc geninfo_all_blocks=1 00:25:17.516 --rc geninfo_unexecuted_blocks=1 00:25:17.516 00:25:17.516 ' 00:25:17.516 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:17.516 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:25:17.516 --rc genhtml_branch_coverage=1 00:25:17.516 --rc genhtml_function_coverage=1 00:25:17.516 --rc genhtml_legend=1 00:25:17.516 --rc geninfo_all_blocks=1 00:25:17.516 --rc geninfo_unexecuted_blocks=1 00:25:17.516 00:25:17.516 ' 00:25:17.516 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:17.516 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:25:17.516 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:17.516 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:17.516 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:17.516 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:17.516 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:17.516 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:17.516 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:17.516 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:17.516 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:17.516 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:17.516 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:17.516 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:17.516 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:17.516 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:17.516 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:17.516 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:17.516 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:17.516 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:25:17.516 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:17.516 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:17.516 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:17.516 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:17.516 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:17.516 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:17.516 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:25:17.516 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:17.516 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:25:17.516 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:17.516 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:17.516 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:17.516 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:17.516 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:17.516 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:17.516 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:17.517 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:17.517 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:17.517 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:17.517 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:25:17.517 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:17.517 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:17.777 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:25:17.777 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:17.777 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:25:17.777 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:25:17.777 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:25:17.777 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:17.777 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # prepare_net_devs 00:25:17.777 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@436 -- # local -g is_hw=no 00:25:17.777 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # remove_spdk_ns 00:25:17.777 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:17.777 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:17.777 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:17.777 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:25:17.777 14:38:58 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:25:17.777 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:25:17.777 14:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:25.923 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:25.923 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:25:25.923 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:25.923 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:25.923 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:25.923 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:25.923 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:25.923 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:25:25.923 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:25.923 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:25:25.923 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:25:25.923 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:25:25.923 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:25:25.923 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:25:25.923 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:25:25.923 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:25.923 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:25.923 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:25.923 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:25.923 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:25.923 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:25.923 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:25.923 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:25.923 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:25.923 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:25.923 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:25.923 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:25.923 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:25.923 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:25.923 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:25:25.923 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:25.923 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:25.923 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:25.923 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:25.923 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:25.923 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:25.923 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:25.923 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:25.923 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:25.923 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:25.923 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:25.923 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:25.923 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:25.923 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:25.923 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:25.923 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:25.923 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:25.923 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:25.923 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:25.923 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:25.923 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:25.923 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:25.923 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:25.923 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:25.923 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:25.923 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:25.923 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:25.923 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:25.923 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:25.923 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:25.923 Found net devices under 0000:31:00.0: cvl_0_0 00:25:25.923 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:25.923 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:25.923 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:25.923 14:39:05 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:25.923 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:25.923 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:25.923 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:25.923 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:25.923 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:25.923 Found net devices under 0000:31:00.1: cvl_0_1 00:25:25.923 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:25.923 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:25:25.923 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # is_hw=yes 00:25:25.923 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:25:25.923 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:25:25.923 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:25:25.923 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:25.923 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:25.923 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:25.923 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:25.923 14:39:05 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:25.923 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:25.923 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:25.923 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:25.923 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:25.924 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:25.924 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:25.924 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:25.924 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:25.924 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:25.924 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:25.924 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:25.924 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:25.924 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:25.924 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:25.924 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:25.924 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:25.924 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:25.924 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:25.924 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:25.924 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.551 ms 00:25:25.924 00:25:25.924 --- 10.0.0.2 ping statistics --- 00:25:25.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:25.924 rtt min/avg/max/mdev = 0.551/0.551/0.551/0.000 ms 00:25:25.924 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:25.924 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:25.924 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:25:25.924 00:25:25.924 --- 10.0.0.1 ping statistics --- 00:25:25.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:25.924 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:25:25.924 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:25.924 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # return 0 00:25:25.924 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:25:25.924 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:25.924 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:25:25.924 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:25:25.924 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:25.924 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:25:25.924 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:25:25.924 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:25:25.924 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:25:25.924 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:25.924 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:25.924 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # nvmfpid=3515705 00:25:25.924 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
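For readers skimming the xtrace above: the `nvmf_tcp_init` sequence (flush addresses, create a network namespace, move the target interface into it, assign the 10.0.0.x addresses, open port 4420 in the firewall, verify with ping in both directions) can be sketched as a dry-run script. The interface names, namespace name, and addresses are copied from this log; the `run` wrapper only prints each command instead of executing it, so the sketch is side-effect free (a real run needs root).

```shell
#!/bin/sh
# Dry-run sketch of the nvmf_tcp_init steps seen in this log.
# `run` only prints each command rather than executing it.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk   # target network namespace (from the log)
TGT_IF=cvl_0_0       # target-side interface, moved into the netns
INI_IF=cvl_0_1       # initiator-side interface, stays in the root ns

run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$INI_IF"
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                       # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1   # target -> initiator
```

With the interfaces split across namespaces this way, traffic between 10.0.0.1 and 10.0.0.2 traverses the physical E810 ports rather than the loopback device, which is why the log then starts `nvmf_tgt` under `ip netns exec cvl_0_0_ns_spdk`.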
-- nvmf/common.sh@508 -- # waitforlisten 3515705 00:25:25.924 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:25.924 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 3515705 ']' 00:25:25.924 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:25.924 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:25.924 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:25.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:25.924 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:25.924 14:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:25.924 [2024-10-14 14:39:06.012606] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:25:25.924 [2024-10-14 14:39:06.012692] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:25.924 [2024-10-14 14:39:06.085780] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:25.924 [2024-10-14 14:39:06.128427] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:25.924 [2024-10-14 14:39:06.128464] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:25.924 [2024-10-14 14:39:06.128472] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:25.924 [2024-10-14 14:39:06.128479] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:25.924 [2024-10-14 14:39:06.128485] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:25.924 [2024-10-14 14:39:06.129877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:25.924 [2024-10-14 14:39:06.129880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:26.185 14:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:26.185 14:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:25:26.185 14:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:25:26.185 14:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:26.185 14:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:26.185 14:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:26.185 14:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3515705 00:25:26.185 14:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:26.446 [2024-10-14 14:39:06.988551] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:26.447 14:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:25:26.709 Malloc0 00:25:26.709 14:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:25:26.709 14:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:26.970 14:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:26.970 [2024-10-14 14:39:07.681377] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:26.970 14:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:27.232 [2024-10-14 14:39:07.845756] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:27.232 14:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3516062 00:25:27.232 14:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:27.232 14:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:25:27.232 14:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3516062 /var/tmp/bdevperf.sock 00:25:27.232 14:39:07 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 3516062 ']' 00:25:27.232 14:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:27.232 14:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:27.232 14:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:27.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:27.232 14:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:27.232 14:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:27.493 14:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:27.493 14:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:25:27.493 14:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:27.753 14:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:28.014 Nvme0n1 00:25:28.014 14:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:28.276 Nvme0n1 00:25:28.538 14:39:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:25:28.538 14:39:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:25:30.457 14:39:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:25:30.457 14:39:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:30.719 14:39:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:30.719 14:39:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:25:32.106 14:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:25:32.106 14:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:32.106 14:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.106 14:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:32.106 14:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:32.106 14:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:32.106 14:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.106 14:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:32.106 14:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:32.106 14:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:32.106 14:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.106 14:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:32.368 14:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:32.368 14:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:32.368 14:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:32.368 14:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.629 14:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:32.629 14:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:32.629 14:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.629 14:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:32.890 14:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:32.890 14:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:32.890 14:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.890 14:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:32.890 14:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:32.890 14:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:25:32.890 14:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:33.151 14:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:33.412 14:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:25:34.355 14:39:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:25:34.355 14:39:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:34.355 14:39:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:34.355 14:39:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:34.616 14:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:34.616 14:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:34.616 14:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:34.616 14:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:34.616 14:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:34.616 14:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:34.616 14:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:34.616 14:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:34.877 14:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:34.877 14:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:34.877 14:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:34.877 14:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:35.138 14:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:35.138 14:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:35.138 14:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:35.138 14:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:35.138 14:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:35.138 14:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:35.138 14:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:35.138 14:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:35.400 14:39:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:35.400 14:39:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:25:35.400 14:39:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:35.661 14:39:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:35.661 14:39:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:25:37.047 14:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:25:37.047 14:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:37.047 14:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:37.047 14:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:37.047 14:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:37.047 14:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:37.047 14:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:37.047 14:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:37.047 14:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:37.047 14:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:37.047 14:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:37.047 14:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:37.309 14:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:37.309 14:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:37.309 14:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:37.309 14:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:37.570 14:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:37.570 14:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:37.570 14:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:37.570 14:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:37.570 14:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:37.570 14:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:37.570 14:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:37.570 14:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:37.831 14:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:37.831 14:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:25:37.831 14:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:38.092 14:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:38.352 14:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:25:39.297 14:39:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:25:39.297 14:39:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:39.297 14:39:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:39.297 14:39:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:39.559 14:39:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:39.559 14:39:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:39.560 14:39:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:39.560 14:39:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:39.560 14:39:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:39.560 14:39:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:39.560 14:39:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:39.560 14:39:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:39.822 14:39:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:39.822 14:39:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:39.822 14:39:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:39.822 14:39:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:40.084 14:39:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:40.084 14:39:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:40.084 14:39:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:40.084 14:39:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:40.084 14:39:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:40.084 14:39:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:40.084 14:39:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:40.084 14:39:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:40.344 14:39:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:40.344 14:39:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:25:40.344 14:39:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:40.604 14:39:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:40.604 14:39:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:25:41.989 14:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:25:41.989 14:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:41.990 14:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:41.990 14:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:41.990 14:39:22 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:41.990 14:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:41.990 14:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:41.990 14:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:41.990 14:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:41.990 14:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:41.990 14:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:41.990 14:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:42.250 14:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:42.250 14:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:42.250 14:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:42.250 14:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:42.511 
14:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:42.511 14:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:42.511 14:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:42.511 14:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:42.511 14:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:42.511 14:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:42.511 14:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:42.511 14:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:42.771 14:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:42.771 14:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:25:42.771 14:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:43.032 14:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:43.294 14:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:25:44.238 14:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:25:44.238 14:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:44.238 14:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:44.238 14:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:44.499 14:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:44.499 14:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:44.500 14:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:44.500 14:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:44.500 14:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:44.500 14:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:44.500 14:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:44.500 14:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:44.760 14:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:44.760 14:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:44.760 14:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:44.760 14:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:45.020 14:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:45.020 14:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:45.021 14:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:45.021 14:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:45.021 14:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:45.021 14:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:45.021 14:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:45.021 14:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:45.282 14:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:45.282 14:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:25:45.543 14:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:25:45.543 14:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:45.543 14:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:45.803 14:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:25:46.748 14:39:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:25:46.748 14:39:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:46.748 14:39:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:25:46.748 14:39:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:47.009 14:39:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:47.009 14:39:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:47.009 14:39:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:47.009 14:39:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:47.269 14:39:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:47.269 14:39:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:47.269 14:39:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:47.269 14:39:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:47.528 14:39:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:47.528 14:39:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:47.528 14:39:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:25:47.528 14:39:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:47.528 14:39:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:47.528 14:39:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:47.528 14:39:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:47.528 14:39:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:47.788 14:39:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:47.788 14:39:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:47.788 14:39:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:47.788 14:39:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:48.047 14:39:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:48.047 14:39:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:25:48.047 14:39:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:48.047 14:39:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:48.308 14:39:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:25:49.252 14:39:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:25:49.252 14:39:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:49.252 14:39:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:49.252 14:39:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:49.513 14:39:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:49.513 14:39:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:49.513 14:39:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:49.513 14:39:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:49.774 14:39:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:49.774 14:39:30 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:49.774 14:39:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:49.774 14:39:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:49.774 14:39:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:49.774 14:39:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:49.774 14:39:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:49.774 14:39:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:50.035 14:39:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:50.035 14:39:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:50.035 14:39:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:50.035 14:39:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:50.297 14:39:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:50.297 
14:39:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:50.297 14:39:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:50.297 14:39:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:50.558 14:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:50.558 14:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:25:50.558 14:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:50.558 14:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:50.818 14:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:25:51.762 14:39:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:25:51.762 14:39:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:51.762 14:39:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:51.762 14:39:32 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:52.023 14:39:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:52.023 14:39:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:52.023 14:39:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:52.023 14:39:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:52.285 14:39:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:52.285 14:39:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:52.285 14:39:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:52.285 14:39:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:52.285 14:39:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:52.285 14:39:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:52.285 14:39:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:52.285 14:39:33 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:52.547 14:39:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:52.547 14:39:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:52.547 14:39:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:52.547 14:39:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:52.809 14:39:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:52.809 14:39:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:52.809 14:39:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:52.809 14:39:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:53.071 14:39:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:53.071 14:39:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:25:53.071 14:39:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:53.071 14:39:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:53.332 14:39:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:25:54.277 14:39:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:25:54.277 14:39:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:54.277 14:39:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:54.277 14:39:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:54.539 14:39:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:54.539 14:39:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:54.539 14:39:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:54.539 14:39:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:54.801 14:39:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:54.801 14:39:35 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:54.801 14:39:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:54.801 14:39:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:54.801 14:39:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:54.801 14:39:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:54.801 14:39:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:54.801 14:39:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:55.062 14:39:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:55.062 14:39:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:55.062 14:39:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:55.062 14:39:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:55.324 14:39:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:55.324 
14:39:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:55.324 14:39:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:55.324 14:39:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:55.324 14:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:55.324 14:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3516062 00:25:55.324 14:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 3516062 ']' 00:25:55.324 14:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 3516062 00:25:55.324 14:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:25:55.324 14:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:55.324 14:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3516062 00:25:55.589 14:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:25:55.589 14:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:25:55.589 14:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3516062' 00:25:55.589 killing process with pid 3516062 00:25:55.589 14:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 3516062 00:25:55.589 
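The `port_status` checks repeated throughout the log above all follow one pattern: call `rpc.py ... bdev_nvme_get_io_paths`, select the io_path whose `transport.trsvcid` matches the listener port, extract one boolean field (`current`, `connected`, or `accessible`), and compare it to the expected value. A minimal self-contained sketch of that pattern follows; `mock_rpc` is a hypothetical stand-in for the real RPC call, and the JSON shape is inferred from the jq filters in the log:

```shell
#!/usr/bin/env bash
# Hypothetical stand-in for:
#   rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
# The reply shape mirrors what the test's jq filters expect.
mock_rpc() {
  cat <<'JSON'
{"poll_groups":[{"io_paths":[
  {"transport":{"trsvcid":"4420"},"current":true,"connected":true,"accessible":true},
  {"transport":{"trsvcid":"4421"},"current":false,"connected":true,"accessible":false}
]}]}
JSON
}

# Same shape as the log's port_status helper: extract one field for one
# listener port and compare it against the expected boolean.
port_status() {
  local port=$1 field=$2 expected=$3
  local actual
  actual=$(mock_rpc | jq -r \
    ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
  [[ $actual == "$expected" ]]
}

port_status 4420 current true && echo "4420 current ok"
port_status 4421 accessible false && echo "4421 accessible ok"
```

A `check_status` as seen in the log is then just six of these calls in a row, one per port/field pair, after each `set_ANA_state` transition has had a second to propagate.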
14:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 3516062 00:25:55.589 { 00:25:55.589 "results": [ 00:25:55.589 { 00:25:55.589 "job": "Nvme0n1", 00:25:55.589 "core_mask": "0x4", 00:25:55.589 "workload": "verify", 00:25:55.589 "status": "terminated", 00:25:55.589 "verify_range": { 00:25:55.589 "start": 0, 00:25:55.589 "length": 16384 00:25:55.589 }, 00:25:55.589 "queue_depth": 128, 00:25:55.589 "io_size": 4096, 00:25:55.589 "runtime": 26.915542, 00:25:55.589 "iops": 10617.508649835103, 00:25:55.589 "mibps": 41.47464316341837, 00:25:55.589 "io_failed": 0, 00:25:55.589 "io_timeout": 0, 00:25:55.589 "avg_latency_us": 12038.000132877965, 00:25:55.589 "min_latency_us": 291.84, 00:25:55.589 "max_latency_us": 3019898.88 00:25:55.589 } 00:25:55.589 ], 00:25:55.589 "core_count": 1 00:25:55.589 } 00:25:55.589 14:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3516062 00:25:55.589 14:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:55.589 [2024-10-14 14:39:07.909214] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:25:55.589 [2024-10-14 14:39:07.909272] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3516062 ] 00:25:55.589 [2024-10-14 14:39:07.961648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:55.589 [2024-10-14 14:39:07.990212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:55.589 Running I/O for 90 seconds... 
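The terminated-job summary interleaved with timestamps above is plain JSON once the log prefixes are stripped. As a small sketch, the headline numbers can be pulled out with jq; the sample below copies values from the log, and the exact field set is assumed stable across bdevperf versions:

```shell
# Result summary shaped like the bdevperf output in the log above
# (values copied from the log; structure is an assumption).
results='{"results":[{"job":"Nvme0n1","runtime":26.915542,"iops":10617.508649835103,"io_failed":0}],"core_count":1}'

# One line per job: name, whole-number IOPS, runtime, failed I/O count.
echo "$results" | jq -r \
  '.results[] | "\(.job): \(.iops|floor) IOPS over \(.runtime|floor)s, \(.io_failed) failed"'
```

This is handy when grepping many autotest logs for regressions, since the JSON block is the only machine-readable performance record the run leaves behind.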
00:25:55.589 9281.00 IOPS, 36.25 MiB/s [2024-10-14T12:39:36.316Z] 9340.50 IOPS, 36.49 MiB/s [2024-10-14T12:39:36.316Z] 9388.00 IOPS, 36.67 MiB/s [2024-10-14T12:39:36.316Z] 9401.00 IOPS, 36.72 MiB/s [2024-10-14T12:39:36.316Z] 9637.20 IOPS, 37.65 MiB/s [2024-10-14T12:39:36.316Z] 10176.83 IOPS, 39.75 MiB/s [2024-10-14T12:39:36.316Z] 10553.43 IOPS, 41.22 MiB/s [2024-10-14T12:39:36.316Z] 10542.25 IOPS, 41.18 MiB/s [2024-10-14T12:39:36.316Z] 10422.67 IOPS, 40.71 MiB/s [2024-10-14T12:39:36.316Z] 10323.90 IOPS, 40.33 MiB/s [2024-10-14T12:39:36.316Z] 10242.45 IOPS, 40.01 MiB/s [2024-10-14T12:39:36.316Z] [2024-10-14 14:39:21.134602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:60240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.589 [2024-10-14 14:39:21.134636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:55.589 [2024-10-14 14:39:21.134667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:60248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.589 [2024-10-14 14:39:21.134674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:55.589 [2024-10-14 14:39:21.134685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:60256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.589 [2024-10-14 14:39:21.134690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:55.589 [2024-10-14 14:39:21.134701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:60264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.589 [2024-10-14 14:39:21.134706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:118 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:55.589 [2024-10-14 14:39:21.134716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:60272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.589 [2024-10-14 14:39:21.134721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:55.589 [2024-10-14 14:39:21.134732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:60280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.589 [2024-10-14 14:39:21.134737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:55.589 [2024-10-14 14:39:21.134747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:60288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.589 [2024-10-14 14:39:21.134752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:55.589 [2024-10-14 14:39:21.134762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:60296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.589 [2024-10-14 14:39:21.134769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:55.589 [2024-10-14 14:39:21.134779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:59992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.589 [2024-10-14 14:39:21.134784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:55.589 [2024-10-14 14:39:21.134795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:60000 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:25:55.589 [2024-10-14 14:39:21.134807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:55.589 [2024-10-14 14:39:21.134818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:60008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.589 [2024-10-14 14:39:21.134823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:55.589 [2024-10-14 14:39:21.134833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:60016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.589 [2024-10-14 14:39:21.134838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:55.589 [2024-10-14 14:39:21.134849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:60024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.589 [2024-10-14 14:39:21.134854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:55.589 [2024-10-14 14:39:21.134864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:60032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.589 [2024-10-14 14:39:21.134871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:55.589 [2024-10-14 14:39:21.134881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:60040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.589 [2024-10-14 14:39:21.134887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0024 
p:0 m:0 dnr:0 00:25:55.589 [2024-10-14 14:39:21.134897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:60304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.589 [2024-10-14 14:39:21.134903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:55.589 [2024-10-14 14:39:21.134913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:60312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.589 [2024-10-14 14:39:21.134921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:55.589 [2024-10-14 14:39:21.134932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:60320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.589 [2024-10-14 14:39:21.134939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:55.589 [2024-10-14 14:39:21.134949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:60328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.589 [2024-10-14 14:39:21.134954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:55.589 [2024-10-14 14:39:21.134965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:60336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.589 [2024-10-14 14:39:21.134970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:55.589 [2024-10-14 14:39:21.134980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:60344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:55.589 [2024-10-14 14:39:21.134986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:55.589 [2024-10-14 14:39:21.134996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:60352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.589 [2024-10-14 14:39:21.135003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:55.589 [2024-10-14 14:39:21.135015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:60360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.589 [2024-10-14 14:39:21.135020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:55.589 [2024-10-14 14:39:21.135031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:60368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.589 [2024-10-14 14:39:21.135037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:55.589 [2024-10-14 14:39:21.135049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:60376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.589 [2024-10-14 14:39:21.135054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:55.590 [2024-10-14 14:39:21.135070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:60384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.590 [2024-10-14 14:39:21.135076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:55.590 
[2024-10-14 14:39:21.135086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:60392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.590 [2024-10-14 14:39:21.135091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:55.590 [2024-10-14 14:39:21.135101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:60400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.590 [2024-10-14 14:39:21.135107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:55.590 [2024-10-14 14:39:21.135117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:60408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.590 [2024-10-14 14:39:21.135124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:55.590 [2024-10-14 14:39:21.135134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:60416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.590 [2024-10-14 14:39:21.135141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:55.590 [2024-10-14 14:39:21.135153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:60424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.590 [2024-10-14 14:39:21.135158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:55.590 [2024-10-14 14:39:21.135169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:60432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.590 [2024-10-14 
14:39:21.135173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:55.590 [2024-10-14 14:39:21.135183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:60440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.590 [2024-10-14 14:39:21.135189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:55.590 [2024-10-14 14:39:21.135200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:60448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.590 [2024-10-14 14:39:21.135205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:55.590 [2024-10-14 14:39:21.135217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:60456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.590 [2024-10-14 14:39:21.135222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:55.590 [2024-10-14 14:39:21.135232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:60464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.590 [2024-10-14 14:39:21.135238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:55.590 [2024-10-14 14:39:21.135249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:60472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.590 [2024-10-14 14:39:21.135254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:55.590 [2024-10-14 
14:39:21.135264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:60480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.590 [2024-10-14 14:39:21.135270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:55.590 [2024-10-14 14:39:21.135281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:60488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.590 [2024-10-14 14:39:21.135286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:55.590 [2024-10-14 14:39:21.135296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:60496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.590 [2024-10-14 14:39:21.135302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:55.590 [2024-10-14 14:39:21.135312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:60504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.590 [2024-10-14 14:39:21.135317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:55.590 [2024-10-14 14:39:21.135327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:60512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.590 [2024-10-14 14:39:21.135333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:55.590 [2024-10-14 14:39:21.135345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:60520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.590 [2024-10-14 14:39:21.135351] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:55.590 [2024-10-14 14:39:21.135362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:60528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.590 [2024-10-14 14:39:21.135368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:55.590 [2024-10-14 14:39:21.135379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:60536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.590 [2024-10-14 14:39:21.135385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:55.590 [2024-10-14 14:39:21.135396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:60544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.590 [2024-10-14 14:39:21.135402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:55.590 [2024-10-14 14:39:21.135412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:60552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.590 [2024-10-14 14:39:21.135419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:55.590 [2024-10-14 14:39:21.135429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:60560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.590 [2024-10-14 14:39:21.135435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:55.590 [2024-10-14 14:39:21.135445] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:60568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.590 [2024-10-14 14:39:21.135450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:55.590 [2024-10-14 14:39:21.135460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:60576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.590 [2024-10-14 14:39:21.135465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:55.590 [2024-10-14 14:39:21.135475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:60584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.590 [2024-10-14 14:39:21.135480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:55.590 [2024-10-14 14:39:21.135490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:60592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.590 [2024-10-14 14:39:21.135495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:55.590 [2024-10-14 14:39:21.135505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:60600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.590 [2024-10-14 14:39:21.135510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:55.590 [2024-10-14 14:39:21.135520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:60608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.590 [2024-10-14 14:39:21.135525] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:55.590 [2024-10-14 14:39:21.135535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:60616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.590 [2024-10-14 14:39:21.135540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:55.590 [2024-10-14 14:39:21.135551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:60048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.590 [2024-10-14 14:39:21.135555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:55.590 [2024-10-14 14:39:21.135566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:60056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.590 [2024-10-14 14:39:21.135572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:55.590 [2024-10-14 14:39:21.135582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:60064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.590 [2024-10-14 14:39:21.135587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:55.590 [2024-10-14 14:39:21.135598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:60072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.590 [2024-10-14 14:39:21.135604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:55.590 [2024-10-14 14:39:21.135614] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:60080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.590 [2024-10-14 14:39:21.135620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:55.590 [2024-10-14 14:39:21.135630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:60088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.590 [2024-10-14 14:39:21.135635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:55.590 [2024-10-14 14:39:21.135646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:60096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.590 [2024-10-14 14:39:21.135652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:55.590 [2024-10-14 14:39:21.135662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:60104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.590 [2024-10-14 14:39:21.135667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:55.590 [2024-10-14 14:39:21.135677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:60112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.590 [2024-10-14 14:39:21.135682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:55.590 10182.25 IOPS, 39.77 MiB/s [2024-10-14T12:39:36.317Z] [2024-10-14 14:39:21.136215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:60624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.591 
[2024-10-14 14:39:21.136224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:55.591 [2024-10-14 14:39:21.136238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:60632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.591 [2024-10-14 14:39:21.136244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:55.591 [2024-10-14 14:39:21.136258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:60640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.591 [2024-10-14 14:39:21.136264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:55.591 [2024-10-14 14:39:21.136277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:60648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.591 [2024-10-14 14:39:21.136282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:55.591 [2024-10-14 14:39:21.136295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:60656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.591 [2024-10-14 14:39:21.136300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:55.591 [2024-10-14 14:39:21.136315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:60664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.591 [2024-10-14 14:39:21.136321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:55.591 [2024-10-14 
14:39:21.136334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:60672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.591 [2024-10-14 14:39:21.136340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:55.591 [2024-10-14 14:39:21.136354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:60680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.591 [2024-10-14 14:39:21.136359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:55.591 [2024-10-14 14:39:21.136373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:60688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.591 [2024-10-14 14:39:21.136378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:55.591 [2024-10-14 14:39:21.136392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:60696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.591 [2024-10-14 14:39:21.136396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:55.591 [2024-10-14 14:39:21.136410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:60704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.591 [2024-10-14 14:39:21.136415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:55.591 [2024-10-14 14:39:21.136429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:60712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.591 [2024-10-14 14:39:21.136435] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:55.591 [2024-10-14 14:39:21.136448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:60720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.591 [2024-10-14 14:39:21.136453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:55.591 [2024-10-14 14:39:21.136466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:60728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.591 [2024-10-14 14:39:21.136472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:55.591 [2024-10-14 14:39:21.136486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:60736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.591 [2024-10-14 14:39:21.136491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:55.591 [2024-10-14 14:39:21.136504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:60744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.591 [2024-10-14 14:39:21.136509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:55.591 [2024-10-14 14:39:21.136523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:60752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.591 [2024-10-14 14:39:21.136528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:55.591 [2024-10-14 14:39:21.136573] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:60760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.591 [2024-10-14 14:39:21.136579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:55.591 [2024-10-14 14:39:21.136595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:60768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.591 [2024-10-14 14:39:21.136600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:55.591 [2024-10-14 14:39:21.136616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:60776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.591 [2024-10-14 14:39:21.136621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:55.591 [2024-10-14 14:39:21.136635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:60784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.591 [2024-10-14 14:39:21.136641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:55.591 [2024-10-14 14:39:21.136655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:60792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.591 [2024-10-14 14:39:21.136661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:55.591 [2024-10-14 14:39:21.136675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:60800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.591 [2024-10-14 14:39:21.136680] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:55.591 [2024-10-14 14:39:21.136693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:60808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.591 [2024-10-14 14:39:21.136699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:55.591 [2024-10-14 14:39:21.136713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:60816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.591 [2024-10-14 14:39:21.136718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:55.591 [2024-10-14 14:39:21.136732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:60824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.591 [2024-10-14 14:39:21.136737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:55.591 [2024-10-14 14:39:21.136751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:60832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.591 [2024-10-14 14:39:21.136757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:55.591 [2024-10-14 14:39:21.136771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:60840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.591 [2024-10-14 14:39:21.136776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:55.591 [2024-10-14 14:39:21.136790] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:60848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.591 [2024-10-14 14:39:21.136795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:55.591 [2024-10-14 14:39:21.136809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:60856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.591 [2024-10-14 14:39:21.136815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:55.591 [2024-10-14 14:39:21.136829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:60864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.591 [2024-10-14 14:39:21.136834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:55.591 [2024-10-14 14:39:21.136851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:60872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.591 [2024-10-14 14:39:21.136856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:55.591 [2024-10-14 14:39:21.136871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:60880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.591 [2024-10-14 14:39:21.136877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:55.591 [2024-10-14 14:39:21.136891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:60888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.591 [2024-10-14 14:39:21.136896] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:55.591 [2024-10-14 14:39:21.136910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:60896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.591 [2024-10-14 14:39:21.136915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:55.591 [2024-10-14 14:39:21.136930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:60904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.591 [2024-10-14 14:39:21.136937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:55.591 [2024-10-14 14:39:21.136951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:60912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.591 [2024-10-14 14:39:21.136957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:55.591 [2024-10-14 14:39:21.136971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:60920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.591 [2024-10-14 14:39:21.136976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:55.591 [2024-10-14 14:39:21.136991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:60928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.591 [2024-10-14 14:39:21.136996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:55.591 [2024-10-14 14:39:21.137011] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:60936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.591 [2024-10-14 14:39:21.137016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:55.591 [2024-10-14 14:39:21.137030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:60944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.592 [2024-10-14 14:39:21.137036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:55.592 [2024-10-14 14:39:21.137050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.592 [2024-10-14 14:39:21.137055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:55.592 [2024-10-14 14:39:21.137074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:60960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.592 [2024-10-14 14:39:21.137079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:55.592 [2024-10-14 14:39:21.137094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:60968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.592 [2024-10-14 14:39:21.137101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.592 [2024-10-14 14:39:21.137116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:60976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.592 [2024-10-14 14:39:21.137121] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:55.592 [2024-10-14 14:39:21.137134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:60984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.592 [2024-10-14 14:39:21.137140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:55.592 [2024-10-14 14:39:21.137154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:60992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.592 [2024-10-14 14:39:21.137160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:55.592 [2024-10-14 14:39:21.137174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:61000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.592 [2024-10-14 14:39:21.137179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:55.592 [2024-10-14 14:39:21.137193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:61008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.592 [2024-10-14 14:39:21.137198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:55.592 [2024-10-14 14:39:21.137213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:60120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.592 [2024-10-14 14:39:21.137218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:55.592 [2024-10-14 14:39:21.137233] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:60128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.592 [2024-10-14 14:39:21.137238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:55.592 [2024-10-14 14:39:21.137252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:60136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.592 [2024-10-14 14:39:21.137257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:55.592 [2024-10-14 14:39:21.137272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:60144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.592 [2024-10-14 14:39:21.137277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:55.592 [2024-10-14 14:39:21.137291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:60152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.592 [2024-10-14 14:39:21.137297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:55.592 [2024-10-14 14:39:21.137311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:60160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.592 [2024-10-14 14:39:21.137316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:55.592 [2024-10-14 14:39:21.137396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:60168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.592 [2024-10-14 14:39:21.137406] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:55.592 [2024-10-14 14:39:21.137422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:60176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.592 [2024-10-14 14:39:21.137427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:55.592 [2024-10-14 14:39:21.137443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:60184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.592 [2024-10-14 14:39:21.137450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:55.592 [2024-10-14 14:39:21.137466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:60192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.592 [2024-10-14 14:39:21.137471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:55.592 [2024-10-14 14:39:21.137487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:60200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.592 [2024-10-14 14:39:21.137492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:55.592 [2024-10-14 14:39:21.137508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:60208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.592 [2024-10-14 14:39:21.137514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:55.592 [2024-10-14 14:39:21.137530] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:60216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.592 [2024-10-14 14:39:21.137536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:55.592 [2024-10-14 14:39:21.137552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:60224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.592 [2024-10-14 14:39:21.137558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:55.592 [2024-10-14 14:39:21.137575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:60232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.592 [2024-10-14 14:39:21.137580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:55.592 9399.00 IOPS, 36.71 MiB/s [2024-10-14T12:39:36.319Z] 8727.64 IOPS, 34.09 MiB/s [2024-10-14T12:39:36.319Z] 8145.80 IOPS, 31.82 MiB/s [2024-10-14T12:39:36.319Z] 8432.88 IOPS, 32.94 MiB/s [2024-10-14T12:39:36.319Z] 8698.35 IOPS, 33.98 MiB/s [2024-10-14T12:39:36.319Z] 9101.39 IOPS, 35.55 MiB/s [2024-10-14T12:39:36.319Z] 9492.26 IOPS, 37.08 MiB/s [2024-10-14T12:39:36.319Z] 9772.95 IOPS, 38.18 MiB/s [2024-10-14T12:39:36.319Z] 9914.52 IOPS, 38.73 MiB/s [2024-10-14T12:39:36.319Z] 10057.82 IOPS, 39.29 MiB/s [2024-10-14T12:39:36.319Z] 10289.09 IOPS, 40.19 MiB/s [2024-10-14T12:39:36.319Z] 10548.88 IOPS, 41.21 MiB/s [2024-10-14T12:39:36.319Z] [2024-10-14 14:39:33.878153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:23160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.592 [2024-10-14 14:39:33.878191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 
cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:55.592 [2024-10-14 14:39:33.878221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:23176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.592 [2024-10-14 14:39:33.878228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:55.592 [2024-10-14 14:39:33.878238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:23192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.592 [2024-10-14 14:39:33.878249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:55.592 [2024-10-14 14:39:33.878259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:22352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.592 [2024-10-14 14:39:33.878265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:55.592 [2024-10-14 14:39:33.878275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.592 [2024-10-14 14:39:33.878281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:55.592 [2024-10-14 14:39:33.878292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:22416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.592 [2024-10-14 14:39:33.878297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:55.592 [2024-10-14 14:39:33.878914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22448 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:55.592 [2024-10-14 14:39:33.878925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.592 [2024-10-14 14:39:33.878937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:23208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.592 [2024-10-14 14:39:33.878942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:55.592 [2024-10-14 14:39:33.878953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:23224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.592 [2024-10-14 14:39:33.878959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:55.592 [2024-10-14 14:39:33.878969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.592 [2024-10-14 14:39:33.878975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:55.592 [2024-10-14 14:39:33.878985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:22496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.593 [2024-10-14 14:39:33.878991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:55.593 [2024-10-14 14:39:33.879001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:22528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.593 [2024-10-14 14:39:33.879006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0006 p:0 m:0 
dnr:0 00:25:55.593 [2024-10-14 14:39:33.879017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.593 [2024-10-14 14:39:33.879022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:55.593 [2024-10-14 14:39:33.879034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:23264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.593 [2024-10-14 14:39:33.879039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:55.593 [2024-10-14 14:39:33.879050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:55.593 [2024-10-14 14:39:33.879055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:55.593 [2024-10-14 14:39:33.879073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:22552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.593 [2024-10-14 14:39:33.879078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:55.593 [2024-10-14 14:39:33.879089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:22576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.593 [2024-10-14 14:39:33.879094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:55.593 10706.40 IOPS, 41.82 MiB/s [2024-10-14T12:39:36.320Z] 10654.77 IOPS, 41.62 MiB/s [2024-10-14T12:39:36.320Z] Received shutdown signal, test time was about 26.916150 seconds 
00:25:55.593 00:25:55.593 Latency(us) 00:25:55.593 [2024-10-14T12:39:36.320Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:55.593 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:55.593 Verification LBA range: start 0x0 length 0x4000 00:25:55.593 Nvme0n1 : 26.92 10617.51 41.47 0.00 0.00 12038.00 291.84 3019898.88 00:25:55.593 [2024-10-14T12:39:36.320Z] =================================================================================================================== 00:25:55.593 [2024-10-14T12:39:36.320Z] Total : 10617.51 41.47 0.00 0.00 12038.00 291.84 3019898.88 00:25:55.593 14:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:55.855 14:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:25:55.855 14:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:55.855 14:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:25:55.855 14:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@514 -- # nvmfcleanup 00:25:55.855 14:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:25:55.855 14:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:55.855 14:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:25:55.855 14:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:55.855 14:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:55.855 rmmod nvme_tcp 00:25:55.855 rmmod 
nvme_fabrics 00:25:55.855 rmmod nvme_keyring 00:25:55.855 14:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:55.855 14:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:25:55.855 14:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:25:55.855 14:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@515 -- # '[' -n 3515705 ']' 00:25:55.855 14:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # killprocess 3515705 00:25:55.855 14:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 3515705 ']' 00:25:55.855 14:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 3515705 00:25:55.855 14:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:25:55.855 14:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:55.855 14:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3515705 00:25:55.855 14:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:55.855 14:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:55.855 14:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3515705' 00:25:55.855 killing process with pid 3515705 00:25:55.855 14:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 3515705 00:25:55.855 14:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 3515705 00:25:56.117 14:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@518 -- # '[' '' == iso ']' 00:25:56.117 14:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:25:56.117 14:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:25:56.117 14:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:25:56.117 14:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # iptables-save 00:25:56.117 14:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:25:56.117 14:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # iptables-restore 00:25:56.117 14:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:56.117 14:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:56.117 14:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:56.117 14:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:56.117 14:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:58.031 14:39:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:58.031 00:25:58.031 real 0m40.705s 00:25:58.031 user 1m44.474s 00:25:58.031 sys 0m11.657s 00:25:58.031 14:39:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:58.031 14:39:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:58.031 ************************************ 00:25:58.031 END TEST nvmf_host_multipath_status 00:25:58.031 ************************************ 00:25:58.031 14:39:38 nvmf_tcp.nvmf_host 
-- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:58.031 14:39:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:58.031 14:39:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:58.031 14:39:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.293 ************************************ 00:25:58.293 START TEST nvmf_discovery_remove_ifc 00:25:58.293 ************************************ 00:25:58.293 14:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:58.293 * Looking for test storage... 00:25:58.293 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:58.293 14:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:58.293 14:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lcov --version 00:25:58.293 14:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:58.293 14:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:58.293 14:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:58.293 14:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:58.293 14:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:58.293 14:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:25:58.293 14:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 
00:25:58.293 14:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:25:58.293 14:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:25:58.293 14:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:25:58.293 14:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:25:58.293 14:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:25:58.293 14:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:58.293 14:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:25:58.293 14:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:25:58.293 14:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:58.293 14:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:58.293 14:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:25:58.293 14:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:25:58.293 14:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:58.293 14:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:25:58.293 14:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:25:58.293 14:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:25:58.293 14:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:25:58.293 14:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:58.293 14:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:25:58.293 14:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:25:58.293 14:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:58.293 14:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:58.293 14:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:25:58.293 14:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:58.293 14:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:58.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:58.293 --rc genhtml_branch_coverage=1 00:25:58.293 --rc genhtml_function_coverage=1 00:25:58.293 --rc genhtml_legend=1 00:25:58.293 --rc geninfo_all_blocks=1 
00:25:58.293 --rc geninfo_unexecuted_blocks=1 00:25:58.293 00:25:58.293 ' 00:25:58.293 14:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:58.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:58.293 --rc genhtml_branch_coverage=1 00:25:58.293 --rc genhtml_function_coverage=1 00:25:58.293 --rc genhtml_legend=1 00:25:58.293 --rc geninfo_all_blocks=1 00:25:58.294 --rc geninfo_unexecuted_blocks=1 00:25:58.294 00:25:58.294 ' 00:25:58.294 14:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:58.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:58.294 --rc genhtml_branch_coverage=1 00:25:58.294 --rc genhtml_function_coverage=1 00:25:58.294 --rc genhtml_legend=1 00:25:58.294 --rc geninfo_all_blocks=1 00:25:58.294 --rc geninfo_unexecuted_blocks=1 00:25:58.294 00:25:58.294 ' 00:25:58.294 14:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:58.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:58.294 --rc genhtml_branch_coverage=1 00:25:58.294 --rc genhtml_function_coverage=1 00:25:58.294 --rc genhtml_legend=1 00:25:58.294 --rc geninfo_all_blocks=1 00:25:58.294 --rc geninfo_unexecuted_blocks=1 00:25:58.294 00:25:58.294 ' 00:25:58.294 14:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:58.294 14:39:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:25:58.294 14:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:58.294 14:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:58.294 14:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:58.294 
14:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:58.294 14:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:58.294 14:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:58.294 14:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:58.294 14:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:58.294 14:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:58.294 14:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:58.294 14:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:58.294 14:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:58.294 14:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:58.294 14:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:58.294 14:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:58.294 14:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:58.294 14:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:58.294 14:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:25:58.294 14:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:58.294 14:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:58.555 14:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:58.555 14:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:58.555 14:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:58.555 14:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:58.555 14:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:25:58.555 14:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:58.555 14:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:25:58.555 14:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:58.555 14:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:58.555 14:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:58.555 14:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:58.555 
14:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:58.555 14:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:58.555 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:58.555 14:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:58.555 14:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:58.555 14:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:58.555 14:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:25:58.555 14:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:25:58.555 14:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:25:58.555 14:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:25:58.555 14:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:25:58.555 14:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:25:58.555 14:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:25:58.556 14:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:25:58.556 14:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:58.556 14:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # prepare_net_devs 00:25:58.556 14:39:39 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@436 -- # local -g is_hw=no 00:25:58.556 14:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # remove_spdk_ns 00:25:58.556 14:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:58.556 14:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:58.556 14:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:58.556 14:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:25:58.556 14:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:25:58.556 14:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:25:58.556 14:39:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:06.700 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:06.700 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:26:06.700 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:06.700 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:06.700 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:06.700 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:06.700 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:06.700 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:26:06.700 14:39:46 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:06.700 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:26:06.700 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:26:06.700 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:26:06.700 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:26:06.700 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:26:06.700 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:26:06.700 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:06.700 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:06.700 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:06.700 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:06.700 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:06.700 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:06.700 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:06.700 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:06.700 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:06.700 14:39:46 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:06.700 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:06.700 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:06.700 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:06.700 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:06.700 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:06.700 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:06.700 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:06.700 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:06.700 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:06.700 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:06.700 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:06.700 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:06.700 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:06.700 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:06.700 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:06.700 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:06.700 14:39:46 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:06.700 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:06.700 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:06.700 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:06.700 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:06.700 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:06.700 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:06.700 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:06.700 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:06.700 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:06.700 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:06.700 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:06.700 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:06.700 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:06.700 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:06.700 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:06.700 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:06.700 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:06.700 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:06.700 Found net devices under 0000:31:00.0: cvl_0_0 00:26:06.700 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:06.700 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:06.700 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:06.700 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:06.700 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:06.700 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:06.700 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:06.700 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:06.700 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:06.700 Found net devices under 0000:31:00.1: cvl_0_1 00:26:06.700 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:06.700 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:26:06.700 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # is_hw=yes 00:26:06.700 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:26:06.700 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@443 
-- # [[ tcp == tcp ]] 00:26:06.700 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:26:06.700 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:06.700 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:06.700 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:06.700 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:06.700 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:06.700 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:06.701 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:06.701 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:06.701 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:06.701 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:06.701 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:06.701 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:06.701 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:06.701 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:06.701 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:26:06.701 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:06.701 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:06.701 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:06.701 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:06.701 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:06.701 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:06.701 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:06.701 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:06.701 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:06.701 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.537 ms 00:26:06.701 00:26:06.701 --- 10.0.0.2 ping statistics --- 00:26:06.701 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:06.701 rtt min/avg/max/mdev = 0.537/0.537/0.537/0.000 ms 00:26:06.701 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:06.701 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
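The trace above builds a network namespace so the target and initiator sides traverse a real TCP path across the two cvl_0_* ports. The commands below mirror the logged steps (interface names, addresses, and the iptables rule are taken directly from the log); they are wrapped in a function because they require root and the physical interfaces to actually run — a sketch of the sequence, not a substitute for nvmf_tcp_init:

```shell
# Sketch of the netns plumbing performed by nvmf_tcp_init in the trace.
# Requires root and the cvl_0_0/cvl_0_1 interfaces; shown for illustration.
nvmf_netns_sketch() {
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move target port into the ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator-side address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # verify the path, as the log does
}
```

The two ping runs in the log (host → 10.0.0.2, then from inside the namespace → 10.0.0.1) confirm both directions of this path before any NVMe-oF traffic is started.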
00:26:06.701 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:26:06.701 00:26:06.701 --- 10.0.0.1 ping statistics --- 00:26:06.701 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:06.701 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:26:06.701 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:06.701 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # return 0 00:26:06.701 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:26:06.701 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:06.701 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:26:06.701 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:26:06.701 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:06.701 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:26:06.701 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:26:06.701 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:26:06.701 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:26:06.701 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:06.701 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:06.701 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # nvmfpid=3526449 00:26:06.701 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@508 -- # waitforlisten 3526449 00:26:06.701 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:06.701 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 3526449 ']' 00:26:06.701 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:06.701 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:06.701 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:06.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:06.701 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:06.701 14:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:06.701 [2024-10-14 14:39:46.488958] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:26:06.701 [2024-10-14 14:39:46.489025] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:06.701 [2024-10-14 14:39:46.579318] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:06.701 [2024-10-14 14:39:46.630578] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:06.701 [2024-10-14 14:39:46.630629] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
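waitforlisten in the trace blocks until the freshly spawned nvmf_tgt (pid 3526449 here) is reachable on its RPC UNIX socket, printing the "Waiting for process to start up and listen..." message seen above. A simplified, runnable sketch of that wait follows — the real helper probes the socket with an RPC, whereas this version (hypothetical name) only polls for the socket path to appear and for the process to stay alive:

```shell
# Simplified stand-in for waitforlisten: poll until $sock exists or $pid dies.
# The real common.sh helper additionally verifies the RPC server responds.
waitforlisten_sketch() {
    local pid=$1 sock=$2 i
    for ((i = 0; i < 100; i++)); do
        [[ -e $sock ]] && return 0               # socket path showed up
        kill -0 "$pid" 2>/dev/null || return 1   # process died before listening
        sleep 0.1
    done
    return 1                                     # timed out
}
```

Bounding the retries keeps a crashed target from hanging the test run indefinitely, which is why the trace's helper also carries a max_retries counter.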
00:26:06.701 [2024-10-14 14:39:46.630637] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:06.701 [2024-10-14 14:39:46.630644] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:06.701 [2024-10-14 14:39:46.630651] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:06.701 [2024-10-14 14:39:46.631501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:06.701 14:39:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:06.701 14:39:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:26:06.701 14:39:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:26:06.701 14:39:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:06.701 14:39:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:06.701 14:39:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:06.701 14:39:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:26:06.701 14:39:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.701 14:39:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:06.701 [2024-10-14 14:39:47.360375] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:06.701 [2024-10-14 14:39:47.368627] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:06.701 null0 00:26:06.701 [2024-10-14 14:39:47.400597] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:26:06.701 14:39:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.701 14:39:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3526557 00:26:06.701 14:39:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3526557 /tmp/host.sock 00:26:06.701 14:39:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:26:06.701 14:39:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 3526557 ']' 00:26:06.701 14:39:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:26:06.701 14:39:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:06.701 14:39:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:06.701 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:06.701 14:39:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:06.701 14:39:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:06.962 [2024-10-14 14:39:47.479000] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 
00:26:06.962 [2024-10-14 14:39:47.479073] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3526557 ] 00:26:06.962 [2024-10-14 14:39:47.545129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:06.962 [2024-10-14 14:39:47.588553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:06.962 14:39:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:06.962 14:39:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:26:06.962 14:39:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:06.962 14:39:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:26:06.962 14:39:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.962 14:39:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:06.962 14:39:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.962 14:39:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:26:06.962 14:39:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.962 14:39:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:06.962 14:39:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.962 14:39:47 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:26:06.962 14:39:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.962 14:39:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:08.345 [2024-10-14 14:39:48.748270] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:08.345 [2024-10-14 14:39:48.748298] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:08.345 [2024-10-14 14:39:48.748312] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:08.345 [2024-10-14 14:39:48.836589] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:08.345 [2024-10-14 14:39:49.021321] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:08.345 [2024-10-14 14:39:49.021376] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:08.345 [2024-10-14 14:39:49.021399] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:08.345 [2024-10-14 14:39:49.021413] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:08.345 [2024-10-14 14:39:49.021433] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:08.345 14:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.345 14:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:26:08.345 14:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:08.345 [2024-10-14 14:39:49.026814] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1f9c2d0 was disconnected and freed. delete nvme_qpair. 00:26:08.345 14:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:08.345 14:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:08.345 14:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.345 14:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:08.345 14:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:08.345 14:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:08.345 14:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.345 14:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:26:08.605 14:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:26:08.605 14:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:26:08.605 14:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:26:08.605 14:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:08.605 14:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:08.605 14:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:08.605 14:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.605 14:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:08.605 14:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:08.605 14:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:08.605 14:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.605 14:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:08.605 14:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:09.548 14:39:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:09.548 14:39:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:09.548 14:39:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:09.548 14:39:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.548 14:39:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:09.548 14:39:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:09.548 14:39:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:09.808 14:39:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.808 
14:39:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:09.808 14:39:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:10.751 14:39:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:10.751 14:39:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:10.751 14:39:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:10.751 14:39:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.751 14:39:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:10.751 14:39:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:10.751 14:39:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:10.751 14:39:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.751 14:39:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:10.751 14:39:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:11.693 14:39:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:11.693 14:39:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:11.693 14:39:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:11.693 14:39:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.693 14:39:52 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:11.693 14:39:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:11.693 14:39:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:11.693 14:39:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.693 14:39:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:11.693 14:39:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:13.077 14:39:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:13.077 14:39:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:13.077 14:39:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:13.077 14:39:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.077 14:39:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:13.077 14:39:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:13.077 14:39:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:13.077 14:39:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.077 14:39:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:13.077 14:39:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:14.020 [2024-10-14 14:39:54.461991] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:26:14.020 [2024-10-14 14:39:54.462038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:14.020 [2024-10-14 14:39:54.462050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.020 [2024-10-14 14:39:54.462060] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:14.020 [2024-10-14 14:39:54.462072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.020 [2024-10-14 14:39:54.462080] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:14.020 [2024-10-14 14:39:54.462088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.020 [2024-10-14 14:39:54.462095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:14.020 [2024-10-14 14:39:54.462103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.020 [2024-10-14 14:39:54.462111] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:14.020 [2024-10-14 14:39:54.462119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.020 [2024-10-14 14:39:54.462126] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x1f78d40 is same with the state(6) to be set 00:26:14.020 [2024-10-14 14:39:54.472011] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f78d40 (9): Bad file descriptor 00:26:14.020 14:39:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:14.020 14:39:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:14.020 14:39:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:14.020 14:39:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:14.020 14:39:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.020 14:39:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:14.020 14:39:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:14.020 [2024-10-14 14:39:54.482053] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:14.963 [2024-10-14 14:39:55.497087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:14.963 [2024-10-14 14:39:55.497126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f78d40 with addr=10.0.0.2, port=4420 00:26:14.963 [2024-10-14 14:39:55.497137] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f78d40 is same with the state(6) to be set 00:26:14.963 [2024-10-14 14:39:55.497158] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f78d40 (9): Bad file descriptor 00:26:14.963 [2024-10-14 14:39:55.497525] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:26:14.963 [2024-10-14 14:39:55.497549] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:14.963 [2024-10-14 14:39:55.497557] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:14.963 [2024-10-14 14:39:55.497570] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:14.963 [2024-10-14 14:39:55.497586] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:14.963 [2024-10-14 14:39:55.497595] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:14.963 14:39:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.963 14:39:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:14.963 14:39:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:15.907 [2024-10-14 14:39:56.499969] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:15.907 [2024-10-14 14:39:56.499992] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:15.907 [2024-10-14 14:39:56.500001] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:15.907 [2024-10-14 14:39:56.500009] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:26:15.907 [2024-10-14 14:39:56.500023] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:15.907 [2024-10-14 14:39:56.500043] bdev_nvme.c:6904:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:26:15.907 [2024-10-14 14:39:56.500071] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:15.907 [2024-10-14 14:39:56.500081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.907 [2024-10-14 14:39:56.500092] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:15.907 [2024-10-14 14:39:56.500099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.907 [2024-10-14 14:39:56.500107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:15.907 [2024-10-14 14:39:56.500115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.907 [2024-10-14 14:39:56.500123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:15.907 [2024-10-14 14:39:56.500130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.907 [2024-10-14 14:39:56.500139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:15.907 [2024-10-14 14:39:56.500146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.907 [2024-10-14 14:39:56.500153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: 
[nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:26:15.907 [2024-10-14 14:39:56.500592] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f68480 (9): Bad file descriptor 00:26:15.907 [2024-10-14 14:39:56.501606] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:26:15.907 [2024-10-14 14:39:56.501617] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:26:15.907 14:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:15.907 14:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:15.907 14:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:15.907 14:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.907 14:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:15.907 14:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:15.907 14:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:15.907 14:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.907 14:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:26:15.907 14:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:15.907 14:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:16.168 14:39:56 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:26:16.168 14:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:16.168 14:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:16.168 14:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:16.168 14:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.168 14:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:16.168 14:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:16.168 14:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:16.168 14:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.168 14:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:16.168 14:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:17.110 14:39:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:17.110 14:39:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:17.110 14:39:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:17.110 14:39:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.110 14:39:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:17.110 14:39:57 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:17.110 14:39:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:17.110 14:39:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.110 14:39:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:17.110 14:39:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:18.057 [2024-10-14 14:39:58.553195] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:18.057 [2024-10-14 14:39:58.553213] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:18.057 [2024-10-14 14:39:58.553227] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:18.057 [2024-10-14 14:39:58.680642] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:26:18.323 14:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:18.323 14:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:18.323 14:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:18.323 14:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:18.323 14:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.323 14:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:18.323 14:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # xargs 00:26:18.323 14:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.323 14:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:18.323 14:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:18.323 [2024-10-14 14:39:58.863859] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:18.323 [2024-10-14 14:39:58.863902] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:18.323 [2024-10-14 14:39:58.863923] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:18.323 [2024-10-14 14:39:58.863936] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:26:18.323 [2024-10-14 14:39:58.863945] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:18.323 [2024-10-14 14:39:58.870876] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1f83160 was disconnected and freed. delete nvme_qpair. 
00:26:19.264 14:39:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:19.264 14:39:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:19.264 14:39:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:19.264 14:39:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.264 14:39:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:19.264 14:39:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:19.264 14:39:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:19.264 14:39:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.264 14:39:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:26:19.264 14:39:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:26:19.264 14:39:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3526557 00:26:19.264 14:39:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 3526557 ']' 00:26:19.264 14:39:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 3526557 00:26:19.264 14:39:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:26:19.264 14:39:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:19.264 14:39:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3526557 
00:26:19.264 14:39:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:19.264 14:39:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:19.264 14:39:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3526557' 00:26:19.264 killing process with pid 3526557 00:26:19.264 14:39:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 3526557 00:26:19.264 14:39:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 3526557 00:26:19.525 14:40:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:26:19.525 14:40:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@514 -- # nvmfcleanup 00:26:19.525 14:40:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:26:19.525 14:40:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:19.525 14:40:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:26:19.525 14:40:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:19.525 14:40:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:19.525 rmmod nvme_tcp 00:26:19.525 rmmod nvme_fabrics 00:26:19.525 rmmod nvme_keyring 00:26:19.525 14:40:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:19.525 14:40:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:26:19.525 14:40:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:26:19.525 14:40:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@515 -- # '[' -n 3526449 ']' 00:26:19.525 
14:40:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # killprocess 3526449 00:26:19.525 14:40:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 3526449 ']' 00:26:19.525 14:40:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 3526449 00:26:19.525 14:40:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:26:19.525 14:40:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:19.525 14:40:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3526449 00:26:19.525 14:40:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:19.525 14:40:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:19.525 14:40:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3526449' 00:26:19.525 killing process with pid 3526449 00:26:19.525 14:40:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 3526449 00:26:19.525 14:40:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 3526449 00:26:19.787 14:40:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:26:19.787 14:40:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:26:19.787 14:40:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:26:19.787 14:40:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:26:19.787 14:40:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:26:19.787 14:40:00 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # iptables-save 00:26:19.787 14:40:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # iptables-restore 00:26:19.787 14:40:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:19.787 14:40:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:19.787 14:40:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:19.787 14:40:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:19.787 14:40:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:21.698 14:40:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:21.698 00:26:21.698 real 0m23.611s 00:26:21.698 user 0m27.978s 00:26:21.698 sys 0m6.949s 00:26:21.698 14:40:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:21.698 14:40:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:21.698 ************************************ 00:26:21.698 END TEST nvmf_discovery_remove_ifc 00:26:21.698 ************************************ 00:26:21.960 14:40:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:21.960 14:40:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:21.960 14:40:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:21.960 14:40:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.960 ************************************ 00:26:21.960 
START TEST nvmf_identify_kernel_target 00:26:21.960 ************************************ 00:26:21.960 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:21.960 * Looking for test storage... 00:26:21.960 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:21.960 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:21.960 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lcov --version 00:26:21.960 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:21.960 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:21.960 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:21.960 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:21.960 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:21.960 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:26:21.960 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:26:21.960 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:26:21.960 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:26:21.960 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:26:21.960 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:26:21.960 14:40:02 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:26:21.960 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:21.960 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:26:21.960 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:26:21.960 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:21.960 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:21.960 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:26:21.960 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:26:21.960 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:21.960 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:26:21.960 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:26:21.960 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:26:21.960 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:26:21.960 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:21.960 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:26:21.960 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:26:21.960 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:21.960 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:21.960 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:26:21.960 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:21.960 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:21.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:21.960 --rc genhtml_branch_coverage=1 00:26:21.960 --rc genhtml_function_coverage=1 00:26:21.960 --rc genhtml_legend=1 00:26:21.960 --rc geninfo_all_blocks=1 00:26:21.960 --rc geninfo_unexecuted_blocks=1 00:26:21.960 00:26:21.960 ' 00:26:21.960 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:21.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:21.960 --rc genhtml_branch_coverage=1 00:26:21.960 --rc genhtml_function_coverage=1 00:26:21.960 --rc genhtml_legend=1 00:26:21.960 --rc geninfo_all_blocks=1 00:26:21.960 --rc geninfo_unexecuted_blocks=1 00:26:21.960 00:26:21.960 ' 00:26:21.960 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:21.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:21.960 --rc genhtml_branch_coverage=1 00:26:21.960 --rc genhtml_function_coverage=1 00:26:21.960 --rc genhtml_legend=1 00:26:21.960 --rc geninfo_all_blocks=1 00:26:21.960 --rc geninfo_unexecuted_blocks=1 00:26:21.960 00:26:21.960 ' 00:26:21.960 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:21.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:21.960 --rc genhtml_branch_coverage=1 00:26:21.960 --rc genhtml_function_coverage=1 00:26:21.960 --rc genhtml_legend=1 00:26:21.960 --rc geninfo_all_blocks=1 
00:26:21.960 --rc geninfo_unexecuted_blocks=1 00:26:21.960 00:26:21.960 ' 00:26:21.960 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:21.960 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:26:21.960 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:21.960 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:21.960 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:21.960 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:21.960 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:21.960 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:21.960 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:21.960 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:21.960 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:21.960 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:21.960 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:21.960 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:21.960 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:26:21.960 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:21.960 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:21.960 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:21.960 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:21.960 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:26:21.960 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:21.960 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:21.960 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:21.960 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.961 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.961 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.961 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:26:21.961 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.961 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:26:21.961 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:21.961 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:21.961 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:21.961 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:21.961 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:21.961 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:21.961 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:21.961 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:21.961 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:21.961 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:21.961 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:26:21.961 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:26:21.961 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:21.961 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:26:21.961 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:26:21.961 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:26:21.961 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:21.961 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:21.961 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:21.961 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:26:21.961 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:26:21.961 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:26:21.961 14:40:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:30.253 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:30.253 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:26:30.253 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:30.253 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:30.253 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:30.253 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:30.253 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:30.253 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:26:30.253 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:30.253 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:26:30.253 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:26:30.253 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:26:30.253 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:26:30.253 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:26:30.253 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:26:30.253 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:30.253 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:30.253 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:30.253 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:30.253 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:30.253 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:30.253 14:40:09 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:30.253 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:30.253 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:30.253 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:30.253 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:30.253 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:30.253 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:30.253 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:30.253 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:30.253 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:30.253 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:30.253 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:30.253 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:30.253 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:30.253 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:30.253 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:30.253 14:40:09 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:30.253 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:30.253 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:30.253 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:30.253 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:30.253 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:30.253 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:30.253 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:30.253 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:30.253 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:30.254 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:30.254 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:30.254 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:30.254 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:30.254 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:30.254 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:30.254 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:30.254 14:40:09 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:30.254 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:30.254 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:30.254 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:30.254 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:30.254 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:30.254 Found net devices under 0000:31:00.0: cvl_0_0 00:26:30.254 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:30.254 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:30.254 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:30.254 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:30.254 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:30.254 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:30.254 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:30.254 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:30.254 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:30.254 Found net devices under 0000:31:00.1: cvl_0_1 
00:26:30.254 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:30.254 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:26:30.254 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # is_hw=yes 00:26:30.254 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:26:30.254 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:26:30.254 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:26:30.254 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:30.254 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:30.254 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:30.254 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:30.254 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:30.254 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:30.254 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:30.254 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:30.254 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:30.254 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:30.254 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:30.254 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:30.254 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:30.254 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:30.254 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:30.254 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:30.254 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:30.254 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:30.254 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:30.254 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:30.254 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:30.254 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:30.254 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:30.254 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:30.254 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.612 ms 00:26:30.254 00:26:30.254 --- 10.0.0.2 ping statistics --- 00:26:30.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:30.254 rtt min/avg/max/mdev = 0.612/0.612/0.612/0.000 ms 00:26:30.254 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:30.254 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:30.254 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:26:30.254 00:26:30.254 --- 10.0.0.1 ping statistics --- 00:26:30.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:30.254 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:26:30.254 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:30.254 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # return 0 00:26:30.254 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:26:30.254 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:30.254 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:26:30.254 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:26:30.254 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:30.254 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:26:30.254 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:26:30.254 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:26:30.254 
14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:26:30.254 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@767 -- # local ip 00:26:30.254 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:30.254 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:30.254 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:30.254 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:30.254 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:30.254 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:30.254 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:30.254 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:30.254 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:30.254 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:26:30.254 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:26:30.254 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:26:30.254 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:26:30.254 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@661 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:30.254 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:30.254 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:30.254 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # local block nvme 00:26:30.254 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # [[ ! -e /sys/module/nvmet ]] 00:26:30.254 14:40:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # modprobe nvmet 00:26:30.254 14:40:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:30.254 14:40:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:32.797 Waiting for block devices as requested 00:26:32.797 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:26:32.797 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:26:32.797 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:26:32.797 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:26:32.797 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:26:32.797 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:26:32.797 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:26:33.057 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:26:33.057 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:26:33.318 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:26:33.318 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:26:33.318 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:26:33.579 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:26:33.579 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:26:33.579 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 
00:26:33.579 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:26:33.840 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:26:34.103 14:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:26:34.103 14:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:34.103 14:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:26:34.103 14:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:26:34.103 14:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:34.103 14:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:26:34.103 14:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:26:34.103 14:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:26:34.103 14:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:34.103 No valid GPT data, bailing 00:26:34.103 14:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:34.103 14:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:26:34.103 14:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:26:34.103 14:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:26:34.103 14:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:26:34.103 14:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:34.103 14:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:34.103 14:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:34.103 14:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:26:34.103 14:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo 1 00:26:34.103 14:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:26:34.103 14:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:26:34.103 14:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:26:34.103 14:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # echo tcp 00:26:34.103 14:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 4420 00:26:34.103 14:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo ipv4 00:26:34.103 14:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:34.103 14:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:26:34.365 00:26:34.365 Discovery Log Number of Records 2, Generation counter 2 00:26:34.365 =====Discovery Log Entry 0====== 00:26:34.365 trtype: tcp 00:26:34.365 adrfam: ipv4 00:26:34.365 subtype: current discovery subsystem 
00:26:34.366 treq: not specified, sq flow control disable supported 00:26:34.366 portid: 1 00:26:34.366 trsvcid: 4420 00:26:34.366 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:34.366 traddr: 10.0.0.1 00:26:34.366 eflags: none 00:26:34.366 sectype: none 00:26:34.366 =====Discovery Log Entry 1====== 00:26:34.366 trtype: tcp 00:26:34.366 adrfam: ipv4 00:26:34.366 subtype: nvme subsystem 00:26:34.366 treq: not specified, sq flow control disable supported 00:26:34.366 portid: 1 00:26:34.366 trsvcid: 4420 00:26:34.366 subnqn: nqn.2016-06.io.spdk:testnqn 00:26:34.366 traddr: 10.0.0.1 00:26:34.366 eflags: none 00:26:34.366 sectype: none 00:26:34.366 14:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:26:34.366 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:26:34.366 ===================================================== 00:26:34.366 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:34.366 ===================================================== 00:26:34.366 Controller Capabilities/Features 00:26:34.366 ================================ 00:26:34.366 Vendor ID: 0000 00:26:34.366 Subsystem Vendor ID: 0000 00:26:34.366 Serial Number: efff2c7cfde2c9d18b54 00:26:34.366 Model Number: Linux 00:26:34.366 Firmware Version: 6.8.9-20 00:26:34.366 Recommended Arb Burst: 0 00:26:34.366 IEEE OUI Identifier: 00 00 00 00:26:34.366 Multi-path I/O 00:26:34.366 May have multiple subsystem ports: No 00:26:34.366 May have multiple controllers: No 00:26:34.366 Associated with SR-IOV VF: No 00:26:34.366 Max Data Transfer Size: Unlimited 00:26:34.366 Max Number of Namespaces: 0 00:26:34.366 Max Number of I/O Queues: 1024 00:26:34.366 NVMe Specification Version (VS): 1.3 00:26:34.366 NVMe Specification Version (Identify): 1.3 00:26:34.366 Maximum Queue Entries: 1024 
00:26:34.366 Contiguous Queues Required: No 00:26:34.366 Arbitration Mechanisms Supported 00:26:34.366 Weighted Round Robin: Not Supported 00:26:34.366 Vendor Specific: Not Supported 00:26:34.366 Reset Timeout: 7500 ms 00:26:34.366 Doorbell Stride: 4 bytes 00:26:34.366 NVM Subsystem Reset: Not Supported 00:26:34.366 Command Sets Supported 00:26:34.366 NVM Command Set: Supported 00:26:34.366 Boot Partition: Not Supported 00:26:34.366 Memory Page Size Minimum: 4096 bytes 00:26:34.366 Memory Page Size Maximum: 4096 bytes 00:26:34.366 Persistent Memory Region: Not Supported 00:26:34.366 Optional Asynchronous Events Supported 00:26:34.366 Namespace Attribute Notices: Not Supported 00:26:34.366 Firmware Activation Notices: Not Supported 00:26:34.366 ANA Change Notices: Not Supported 00:26:34.366 PLE Aggregate Log Change Notices: Not Supported 00:26:34.366 LBA Status Info Alert Notices: Not Supported 00:26:34.366 EGE Aggregate Log Change Notices: Not Supported 00:26:34.366 Normal NVM Subsystem Shutdown event: Not Supported 00:26:34.366 Zone Descriptor Change Notices: Not Supported 00:26:34.366 Discovery Log Change Notices: Supported 00:26:34.366 Controller Attributes 00:26:34.366 128-bit Host Identifier: Not Supported 00:26:34.366 Non-Operational Permissive Mode: Not Supported 00:26:34.366 NVM Sets: Not Supported 00:26:34.366 Read Recovery Levels: Not Supported 00:26:34.366 Endurance Groups: Not Supported 00:26:34.366 Predictable Latency Mode: Not Supported 00:26:34.366 Traffic Based Keep ALive: Not Supported 00:26:34.366 Namespace Granularity: Not Supported 00:26:34.366 SQ Associations: Not Supported 00:26:34.366 UUID List: Not Supported 00:26:34.366 Multi-Domain Subsystem: Not Supported 00:26:34.366 Fixed Capacity Management: Not Supported 00:26:34.366 Variable Capacity Management: Not Supported 00:26:34.366 Delete Endurance Group: Not Supported 00:26:34.366 Delete NVM Set: Not Supported 00:26:34.366 Extended LBA Formats Supported: Not Supported 00:26:34.366 Flexible 
Data Placement Supported: Not Supported 00:26:34.366 00:26:34.366 Controller Memory Buffer Support 00:26:34.366 ================================ 00:26:34.366 Supported: No 00:26:34.366 00:26:34.366 Persistent Memory Region Support 00:26:34.366 ================================ 00:26:34.366 Supported: No 00:26:34.366 00:26:34.366 Admin Command Set Attributes 00:26:34.366 ============================ 00:26:34.366 Security Send/Receive: Not Supported 00:26:34.366 Format NVM: Not Supported 00:26:34.366 Firmware Activate/Download: Not Supported 00:26:34.366 Namespace Management: Not Supported 00:26:34.366 Device Self-Test: Not Supported 00:26:34.366 Directives: Not Supported 00:26:34.366 NVMe-MI: Not Supported 00:26:34.366 Virtualization Management: Not Supported 00:26:34.366 Doorbell Buffer Config: Not Supported 00:26:34.366 Get LBA Status Capability: Not Supported 00:26:34.366 Command & Feature Lockdown Capability: Not Supported 00:26:34.366 Abort Command Limit: 1 00:26:34.366 Async Event Request Limit: 1 00:26:34.366 Number of Firmware Slots: N/A 00:26:34.366 Firmware Slot 1 Read-Only: N/A 00:26:34.366 Firmware Activation Without Reset: N/A 00:26:34.366 Multiple Update Detection Support: N/A 00:26:34.366 Firmware Update Granularity: No Information Provided 00:26:34.366 Per-Namespace SMART Log: No 00:26:34.366 Asymmetric Namespace Access Log Page: Not Supported 00:26:34.366 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:26:34.366 Command Effects Log Page: Not Supported 00:26:34.366 Get Log Page Extended Data: Supported 00:26:34.366 Telemetry Log Pages: Not Supported 00:26:34.366 Persistent Event Log Pages: Not Supported 00:26:34.366 Supported Log Pages Log Page: May Support 00:26:34.366 Commands Supported & Effects Log Page: Not Supported 00:26:34.366 Feature Identifiers & Effects Log Page:May Support 00:26:34.366 NVMe-MI Commands & Effects Log Page: May Support 00:26:34.366 Data Area 4 for Telemetry Log: Not Supported 00:26:34.366 Error Log Page Entries 
Supported: 1 00:26:34.366 Keep Alive: Not Supported 00:26:34.366 00:26:34.366 NVM Command Set Attributes 00:26:34.366 ========================== 00:26:34.366 Submission Queue Entry Size 00:26:34.366 Max: 1 00:26:34.366 Min: 1 00:26:34.366 Completion Queue Entry Size 00:26:34.366 Max: 1 00:26:34.366 Min: 1 00:26:34.366 Number of Namespaces: 0 00:26:34.366 Compare Command: Not Supported 00:26:34.366 Write Uncorrectable Command: Not Supported 00:26:34.366 Dataset Management Command: Not Supported 00:26:34.366 Write Zeroes Command: Not Supported 00:26:34.366 Set Features Save Field: Not Supported 00:26:34.366 Reservations: Not Supported 00:26:34.366 Timestamp: Not Supported 00:26:34.366 Copy: Not Supported 00:26:34.366 Volatile Write Cache: Not Present 00:26:34.366 Atomic Write Unit (Normal): 1 00:26:34.366 Atomic Write Unit (PFail): 1 00:26:34.366 Atomic Compare & Write Unit: 1 00:26:34.366 Fused Compare & Write: Not Supported 00:26:34.366 Scatter-Gather List 00:26:34.366 SGL Command Set: Supported 00:26:34.366 SGL Keyed: Not Supported 00:26:34.366 SGL Bit Bucket Descriptor: Not Supported 00:26:34.366 SGL Metadata Pointer: Not Supported 00:26:34.366 Oversized SGL: Not Supported 00:26:34.366 SGL Metadata Address: Not Supported 00:26:34.366 SGL Offset: Supported 00:26:34.366 Transport SGL Data Block: Not Supported 00:26:34.366 Replay Protected Memory Block: Not Supported 00:26:34.366 00:26:34.366 Firmware Slot Information 00:26:34.366 ========================= 00:26:34.366 Active slot: 0 00:26:34.366 00:26:34.366 00:26:34.366 Error Log 00:26:34.366 ========= 00:26:34.366 00:26:34.366 Active Namespaces 00:26:34.366 ================= 00:26:34.366 Discovery Log Page 00:26:34.366 ================== 00:26:34.366 Generation Counter: 2 00:26:34.366 Number of Records: 2 00:26:34.366 Record Format: 0 00:26:34.366 00:26:34.366 Discovery Log Entry 0 00:26:34.366 ---------------------- 00:26:34.366 Transport Type: 3 (TCP) 00:26:34.366 Address Family: 1 (IPv4) 00:26:34.366 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:26:34.366 Entry Flags: 00:26:34.366 Duplicate Returned Information: 0 00:26:34.366 Explicit Persistent Connection Support for Discovery: 0 00:26:34.366 Transport Requirements: 00:26:34.366 Secure Channel: Not Specified 00:26:34.366 Port ID: 1 (0x0001) 00:26:34.366 Controller ID: 65535 (0xffff) 00:26:34.366 Admin Max SQ Size: 32 00:26:34.366 Transport Service Identifier: 4420 00:26:34.366 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:26:34.366 Transport Address: 10.0.0.1 00:26:34.366 Discovery Log Entry 1 00:26:34.366 ---------------------- 00:26:34.366 Transport Type: 3 (TCP) 00:26:34.366 Address Family: 1 (IPv4) 00:26:34.366 Subsystem Type: 2 (NVM Subsystem) 00:26:34.366 Entry Flags: 00:26:34.366 Duplicate Returned Information: 0 00:26:34.366 Explicit Persistent Connection Support for Discovery: 0 00:26:34.366 Transport Requirements: 00:26:34.366 Secure Channel: Not Specified 00:26:34.366 Port ID: 1 (0x0001) 00:26:34.366 Controller ID: 65535 (0xffff) 00:26:34.366 Admin Max SQ Size: 32 00:26:34.366 Transport Service Identifier: 4420 00:26:34.367 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:26:34.367 Transport Address: 10.0.0.1 00:26:34.367 14:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:34.367 get_feature(0x01) failed 00:26:34.367 get_feature(0x02) failed 00:26:34.367 get_feature(0x04) failed 00:26:34.367 ===================================================== 00:26:34.367 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:34.367 ===================================================== 00:26:34.367 Controller Capabilities/Features 00:26:34.367 ================================ 00:26:34.367 Vendor ID: 0000 00:26:34.367 Subsystem Vendor ID: 
0000 00:26:34.367 Serial Number: 70f958c61cd22c649628 00:26:34.367 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:26:34.367 Firmware Version: 6.8.9-20 00:26:34.367 Recommended Arb Burst: 6 00:26:34.367 IEEE OUI Identifier: 00 00 00 00:26:34.367 Multi-path I/O 00:26:34.367 May have multiple subsystem ports: Yes 00:26:34.367 May have multiple controllers: Yes 00:26:34.367 Associated with SR-IOV VF: No 00:26:34.367 Max Data Transfer Size: Unlimited 00:26:34.367 Max Number of Namespaces: 1024 00:26:34.367 Max Number of I/O Queues: 128 00:26:34.367 NVMe Specification Version (VS): 1.3 00:26:34.367 NVMe Specification Version (Identify): 1.3 00:26:34.367 Maximum Queue Entries: 1024 00:26:34.367 Contiguous Queues Required: No 00:26:34.367 Arbitration Mechanisms Supported 00:26:34.367 Weighted Round Robin: Not Supported 00:26:34.367 Vendor Specific: Not Supported 00:26:34.367 Reset Timeout: 7500 ms 00:26:34.367 Doorbell Stride: 4 bytes 00:26:34.367 NVM Subsystem Reset: Not Supported 00:26:34.367 Command Sets Supported 00:26:34.367 NVM Command Set: Supported 00:26:34.367 Boot Partition: Not Supported 00:26:34.367 Memory Page Size Minimum: 4096 bytes 00:26:34.367 Memory Page Size Maximum: 4096 bytes 00:26:34.367 Persistent Memory Region: Not Supported 00:26:34.367 Optional Asynchronous Events Supported 00:26:34.367 Namespace Attribute Notices: Supported 00:26:34.367 Firmware Activation Notices: Not Supported 00:26:34.367 ANA Change Notices: Supported 00:26:34.367 PLE Aggregate Log Change Notices: Not Supported 00:26:34.367 LBA Status Info Alert Notices: Not Supported 00:26:34.367 EGE Aggregate Log Change Notices: Not Supported 00:26:34.367 Normal NVM Subsystem Shutdown event: Not Supported 00:26:34.367 Zone Descriptor Change Notices: Not Supported 00:26:34.367 Discovery Log Change Notices: Not Supported 00:26:34.367 Controller Attributes 00:26:34.367 128-bit Host Identifier: Supported 00:26:34.367 Non-Operational Permissive Mode: Not Supported 00:26:34.367 NVM Sets: Not 
Supported 00:26:34.367 Read Recovery Levels: Not Supported 00:26:34.367 Endurance Groups: Not Supported 00:26:34.367 Predictable Latency Mode: Not Supported 00:26:34.367 Traffic Based Keep ALive: Supported 00:26:34.367 Namespace Granularity: Not Supported 00:26:34.367 SQ Associations: Not Supported 00:26:34.367 UUID List: Not Supported 00:26:34.367 Multi-Domain Subsystem: Not Supported 00:26:34.367 Fixed Capacity Management: Not Supported 00:26:34.367 Variable Capacity Management: Not Supported 00:26:34.367 Delete Endurance Group: Not Supported 00:26:34.367 Delete NVM Set: Not Supported 00:26:34.367 Extended LBA Formats Supported: Not Supported 00:26:34.367 Flexible Data Placement Supported: Not Supported 00:26:34.367 00:26:34.367 Controller Memory Buffer Support 00:26:34.367 ================================ 00:26:34.367 Supported: No 00:26:34.367 00:26:34.367 Persistent Memory Region Support 00:26:34.367 ================================ 00:26:34.367 Supported: No 00:26:34.367 00:26:34.367 Admin Command Set Attributes 00:26:34.367 ============================ 00:26:34.367 Security Send/Receive: Not Supported 00:26:34.367 Format NVM: Not Supported 00:26:34.367 Firmware Activate/Download: Not Supported 00:26:34.367 Namespace Management: Not Supported 00:26:34.367 Device Self-Test: Not Supported 00:26:34.367 Directives: Not Supported 00:26:34.367 NVMe-MI: Not Supported 00:26:34.367 Virtualization Management: Not Supported 00:26:34.367 Doorbell Buffer Config: Not Supported 00:26:34.367 Get LBA Status Capability: Not Supported 00:26:34.367 Command & Feature Lockdown Capability: Not Supported 00:26:34.367 Abort Command Limit: 4 00:26:34.367 Async Event Request Limit: 4 00:26:34.367 Number of Firmware Slots: N/A 00:26:34.367 Firmware Slot 1 Read-Only: N/A 00:26:34.367 Firmware Activation Without Reset: N/A 00:26:34.367 Multiple Update Detection Support: N/A 00:26:34.367 Firmware Update Granularity: No Information Provided 00:26:34.367 Per-Namespace SMART Log: Yes 
00:26:34.367 Asymmetric Namespace Access Log Page: Supported 00:26:34.367 ANA Transition Time : 10 sec 00:26:34.367 00:26:34.367 Asymmetric Namespace Access Capabilities 00:26:34.367 ANA Optimized State : Supported 00:26:34.367 ANA Non-Optimized State : Supported 00:26:34.367 ANA Inaccessible State : Supported 00:26:34.367 ANA Persistent Loss State : Supported 00:26:34.367 ANA Change State : Supported 00:26:34.367 ANAGRPID is not changed : No 00:26:34.367 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:26:34.367 00:26:34.367 ANA Group Identifier Maximum : 128 00:26:34.367 Number of ANA Group Identifiers : 128 00:26:34.367 Max Number of Allowed Namespaces : 1024 00:26:34.367 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:26:34.367 Command Effects Log Page: Supported 00:26:34.367 Get Log Page Extended Data: Supported 00:26:34.367 Telemetry Log Pages: Not Supported 00:26:34.367 Persistent Event Log Pages: Not Supported 00:26:34.367 Supported Log Pages Log Page: May Support 00:26:34.367 Commands Supported & Effects Log Page: Not Supported 00:26:34.367 Feature Identifiers & Effects Log Page:May Support 00:26:34.367 NVMe-MI Commands & Effects Log Page: May Support 00:26:34.367 Data Area 4 for Telemetry Log: Not Supported 00:26:34.367 Error Log Page Entries Supported: 128 00:26:34.367 Keep Alive: Supported 00:26:34.367 Keep Alive Granularity: 1000 ms 00:26:34.367 00:26:34.367 NVM Command Set Attributes 00:26:34.367 ========================== 00:26:34.367 Submission Queue Entry Size 00:26:34.367 Max: 64 00:26:34.367 Min: 64 00:26:34.367 Completion Queue Entry Size 00:26:34.367 Max: 16 00:26:34.367 Min: 16 00:26:34.367 Number of Namespaces: 1024 00:26:34.367 Compare Command: Not Supported 00:26:34.367 Write Uncorrectable Command: Not Supported 00:26:34.367 Dataset Management Command: Supported 00:26:34.367 Write Zeroes Command: Supported 00:26:34.367 Set Features Save Field: Not Supported 00:26:34.367 Reservations: Not Supported 00:26:34.367 Timestamp: Not Supported 
00:26:34.367 Copy: Not Supported 00:26:34.367 Volatile Write Cache: Present 00:26:34.367 Atomic Write Unit (Normal): 1 00:26:34.367 Atomic Write Unit (PFail): 1 00:26:34.367 Atomic Compare & Write Unit: 1 00:26:34.367 Fused Compare & Write: Not Supported 00:26:34.367 Scatter-Gather List 00:26:34.367 SGL Command Set: Supported 00:26:34.367 SGL Keyed: Not Supported 00:26:34.367 SGL Bit Bucket Descriptor: Not Supported 00:26:34.367 SGL Metadata Pointer: Not Supported 00:26:34.367 Oversized SGL: Not Supported 00:26:34.367 SGL Metadata Address: Not Supported 00:26:34.367 SGL Offset: Supported 00:26:34.367 Transport SGL Data Block: Not Supported 00:26:34.367 Replay Protected Memory Block: Not Supported 00:26:34.367 00:26:34.367 Firmware Slot Information 00:26:34.367 ========================= 00:26:34.367 Active slot: 0 00:26:34.367 00:26:34.367 Asymmetric Namespace Access 00:26:34.367 =========================== 00:26:34.367 Change Count : 0 00:26:34.367 Number of ANA Group Descriptors : 1 00:26:34.367 ANA Group Descriptor : 0 00:26:34.367 ANA Group ID : 1 00:26:34.367 Number of NSID Values : 1 00:26:34.367 Change Count : 0 00:26:34.367 ANA State : 1 00:26:34.367 Namespace Identifier : 1 00:26:34.367 00:26:34.367 Commands Supported and Effects 00:26:34.367 ============================== 00:26:34.367 Admin Commands 00:26:34.367 -------------- 00:26:34.367 Get Log Page (02h): Supported 00:26:34.367 Identify (06h): Supported 00:26:34.367 Abort (08h): Supported 00:26:34.367 Set Features (09h): Supported 00:26:34.367 Get Features (0Ah): Supported 00:26:34.367 Asynchronous Event Request (0Ch): Supported 00:26:34.367 Keep Alive (18h): Supported 00:26:34.367 I/O Commands 00:26:34.367 ------------ 00:26:34.367 Flush (00h): Supported 00:26:34.367 Write (01h): Supported LBA-Change 00:26:34.367 Read (02h): Supported 00:26:34.367 Write Zeroes (08h): Supported LBA-Change 00:26:34.367 Dataset Management (09h): Supported 00:26:34.367 00:26:34.367 Error Log 00:26:34.367 ========= 
00:26:34.367 Entry: 0 00:26:34.367 Error Count: 0x3 00:26:34.367 Submission Queue Id: 0x0 00:26:34.367 Command Id: 0x5 00:26:34.367 Phase Bit: 0 00:26:34.367 Status Code: 0x2 00:26:34.367 Status Code Type: 0x0 00:26:34.367 Do Not Retry: 1 00:26:34.367 Error Location: 0x28 00:26:34.367 LBA: 0x0 00:26:34.367 Namespace: 0x0 00:26:34.367 Vendor Log Page: 0x0 00:26:34.367 ----------- 00:26:34.367 Entry: 1 00:26:34.367 Error Count: 0x2 00:26:34.367 Submission Queue Id: 0x0 00:26:34.368 Command Id: 0x5 00:26:34.368 Phase Bit: 0 00:26:34.368 Status Code: 0x2 00:26:34.368 Status Code Type: 0x0 00:26:34.368 Do Not Retry: 1 00:26:34.368 Error Location: 0x28 00:26:34.368 LBA: 0x0 00:26:34.368 Namespace: 0x0 00:26:34.368 Vendor Log Page: 0x0 00:26:34.368 ----------- 00:26:34.368 Entry: 2 00:26:34.368 Error Count: 0x1 00:26:34.368 Submission Queue Id: 0x0 00:26:34.368 Command Id: 0x4 00:26:34.368 Phase Bit: 0 00:26:34.368 Status Code: 0x2 00:26:34.368 Status Code Type: 0x0 00:26:34.368 Do Not Retry: 1 00:26:34.368 Error Location: 0x28 00:26:34.368 LBA: 0x0 00:26:34.368 Namespace: 0x0 00:26:34.368 Vendor Log Page: 0x0 00:26:34.368 00:26:34.368 Number of Queues 00:26:34.368 ================ 00:26:34.368 Number of I/O Submission Queues: 128 00:26:34.368 Number of I/O Completion Queues: 128 00:26:34.368 00:26:34.368 ZNS Specific Controller Data 00:26:34.368 ============================ 00:26:34.368 Zone Append Size Limit: 0 00:26:34.368 00:26:34.368 00:26:34.368 Active Namespaces 00:26:34.368 ================= 00:26:34.368 get_feature(0x05) failed 00:26:34.368 Namespace ID:1 00:26:34.368 Command Set Identifier: NVM (00h) 00:26:34.368 Deallocate: Supported 00:26:34.368 Deallocated/Unwritten Error: Not Supported 00:26:34.368 Deallocated Read Value: Unknown 00:26:34.368 Deallocate in Write Zeroes: Not Supported 00:26:34.368 Deallocated Guard Field: 0xFFFF 00:26:34.368 Flush: Supported 00:26:34.368 Reservation: Not Supported 00:26:34.368 Namespace Sharing Capabilities: Multiple 
Controllers 00:26:34.368 Size (in LBAs): 3750748848 (1788GiB) 00:26:34.368 Capacity (in LBAs): 3750748848 (1788GiB) 00:26:34.368 Utilization (in LBAs): 3750748848 (1788GiB) 00:26:34.368 UUID: daaf5500-5e96-48e4-bffa-bdebf998c2b6 00:26:34.368 Thin Provisioning: Not Supported 00:26:34.368 Per-NS Atomic Units: Yes 00:26:34.368 Atomic Write Unit (Normal): 8 00:26:34.368 Atomic Write Unit (PFail): 8 00:26:34.368 Preferred Write Granularity: 8 00:26:34.368 Atomic Compare & Write Unit: 8 00:26:34.368 Atomic Boundary Size (Normal): 0 00:26:34.368 Atomic Boundary Size (PFail): 0 00:26:34.368 Atomic Boundary Offset: 0 00:26:34.368 NGUID/EUI64 Never Reused: No 00:26:34.368 ANA group ID: 1 00:26:34.368 Namespace Write Protected: No 00:26:34.368 Number of LBA Formats: 1 00:26:34.368 Current LBA Format: LBA Format #00 00:26:34.368 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:34.368 00:26:34.368 14:40:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:26:34.368 14:40:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:26:34.368 14:40:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:26:34.368 14:40:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:34.368 14:40:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:26:34.368 14:40:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:34.368 14:40:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:34.368 rmmod nvme_tcp 00:26:34.368 rmmod nvme_fabrics 00:26:34.368 14:40:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:34.368 14:40:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:26:34.368 14:40:15 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:26:34.368 14:40:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:26:34.368 14:40:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:26:34.368 14:40:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:26:34.368 14:40:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:26:34.368 14:40:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:26:34.368 14:40:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # iptables-save 00:26:34.368 14:40:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:26:34.368 14:40:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # iptables-restore 00:26:34.368 14:40:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:34.368 14:40:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:34.368 14:40:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:34.368 14:40:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:34.368 14:40:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:36.912 14:40:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:36.912 14:40:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:26:36.912 14:40:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@710 -- # [[ -e 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:26:36.912 14:40:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # echo 0 00:26:36.912 14:40:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:36.912 14:40:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:36.912 14:40:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:36.912 14:40:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:36.912 14:40:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:26:36.912 14:40:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:26:36.912 14:40:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:40.212 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:26:40.212 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:26:40.212 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:26:40.212 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:26:40.212 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:26:40.212 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:26:40.212 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:26:40.212 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:26:40.212 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:26:40.212 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:26:40.212 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:26:40.212 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:26:40.212 0000:00:01.2 (8086 0b00): ioatdma 
-> vfio-pci 00:26:40.212 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:26:40.212 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:26:40.212 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:26:40.212 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:26:40.473 00:26:40.473 real 0m18.590s 00:26:40.473 user 0m4.654s 00:26:40.473 sys 0m10.740s 00:26:40.473 14:40:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:40.473 14:40:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:40.473 ************************************ 00:26:40.473 END TEST nvmf_identify_kernel_target 00:26:40.473 ************************************ 00:26:40.473 14:40:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:40.473 14:40:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:40.473 14:40:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:40.473 14:40:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.473 ************************************ 00:26:40.473 START TEST nvmf_auth_host 00:26:40.473 ************************************ 00:26:40.473 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:40.735 * Looking for test storage... 
00:26:40.735 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lcov --version 00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:40.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:40.736 --rc genhtml_branch_coverage=1 00:26:40.736 --rc genhtml_function_coverage=1 00:26:40.736 --rc genhtml_legend=1 00:26:40.736 --rc geninfo_all_blocks=1 00:26:40.736 --rc geninfo_unexecuted_blocks=1 00:26:40.736 00:26:40.736 ' 00:26:40.736 14:40:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:40.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:40.736 --rc genhtml_branch_coverage=1 00:26:40.736 --rc genhtml_function_coverage=1 00:26:40.736 --rc genhtml_legend=1 00:26:40.736 --rc geninfo_all_blocks=1 00:26:40.736 --rc geninfo_unexecuted_blocks=1 00:26:40.736 00:26:40.736 ' 00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:40.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:40.736 --rc genhtml_branch_coverage=1 00:26:40.736 --rc genhtml_function_coverage=1 00:26:40.736 --rc genhtml_legend=1 00:26:40.736 --rc geninfo_all_blocks=1 00:26:40.736 --rc geninfo_unexecuted_blocks=1 00:26:40.736 00:26:40.736 ' 00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:40.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:40.736 --rc genhtml_branch_coverage=1 00:26:40.736 --rc genhtml_function_coverage=1 00:26:40.736 --rc genhtml_legend=1 00:26:40.736 --rc geninfo_all_blocks=1 00:26:40.736 --rc geninfo_unexecuted_blocks=1 00:26:40.736 00:26:40.736 ' 00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.736 14:40:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:40.736 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:26:40.736 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:26:40.737 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:26:40.737 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:40.737 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:40.737 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:40.737 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:26:40.737 14:40:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:26:40.737 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:26:40.737 14:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.877 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:48.877 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:26:48.877 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:48.877 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:48.877 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:48.877 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:48.877 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:48.877 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:26:48.877 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:48.877 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:26:48.877 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:26:48.877 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:26:48.877 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:26:48.877 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:26:48.877 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:26:48.877 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:48.877 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:48.877 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:48.877 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:48.877 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:48.877 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:48.877 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:48.877 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:48.877 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:48.877 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:48.878 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:48.878 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 
00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:48.878 Found net devices under 0000:31:00.0: cvl_0_0 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:48.878 Found net devices under 0000:31:00.1: cvl_0_1 00:26:48.878 14:40:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # is_hw=yes 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:48.878 14:40:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:48.878 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:48.878 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.685 ms 00:26:48.878 00:26:48.878 --- 10.0.0.2 ping statistics --- 00:26:48.878 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:48.878 rtt min/avg/max/mdev = 0.685/0.685/0.685/0.000 ms 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:48.878 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:48.878 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:26:48.878 00:26:48.878 --- 10.0.0.1 ping statistics --- 00:26:48.878 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:48.878 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # return 0 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # nvmfpid=3541051 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # waitforlisten 3541051 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 3541051 ']' 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:48.878 14:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.138 14:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:49.138 14:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:26:49.138 14:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:26:49.138 14:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:49.138 14:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.138 14:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:49.138 14:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:26:49.138 14:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:26:49.138 14:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:49.138 14:40:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:49.138 14:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:49.138 14:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:26:49.138 14:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:26:49.138 14:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:49.138 14:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=d72f4ab2d953481d8fe29b8db255aaf2 00:26:49.138 14:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:26:49.138 14:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.Yw4 00:26:49.138 14:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key d72f4ab2d953481d8fe29b8db255aaf2 0 00:26:49.138 14:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 d72f4ab2d953481d8fe29b8db255aaf2 0 00:26:49.138 14:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:49.138 14:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:49.138 14:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=d72f4ab2d953481d8fe29b8db255aaf2 00:26:49.138 14:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:26:49.138 14:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:26:49.398 14:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.Yw4 00:26:49.398 14:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.Yw4 00:26:49.398 14:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Yw4 
00:26:49.398 14:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:26:49.398 14:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:49.398 14:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:49.398 14:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:49.398 14:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:26:49.398 14:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:26:49.398 14:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:49.398 14:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=ffc0988653fc49d4511e10e869882246416415d28eea1de9659c46ff27898270 00:26:49.398 14:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:26:49.398 14:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.87v 00:26:49.398 14:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key ffc0988653fc49d4511e10e869882246416415d28eea1de9659c46ff27898270 3 00:26:49.398 14:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 ffc0988653fc49d4511e10e869882246416415d28eea1de9659c46ff27898270 3 00:26:49.398 14:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:49.398 14:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:49.398 14:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=ffc0988653fc49d4511e10e869882246416415d28eea1de9659c46ff27898270 00:26:49.398 14:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:26:49.398 14:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@731 -- # python - 00:26:49.398 14:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.87v 00:26:49.398 14:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.87v 00:26:49.398 14:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.87v 00:26:49.398 14:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:26:49.398 14:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:49.398 14:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:49.398 14:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:49.398 14:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:26:49.398 14:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:26:49.398 14:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:49.398 14:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=6c96c59a88b02911fc4fa912e35ffc2c06c3010996baeb65 00:26:49.398 14:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:26:49.398 14:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.zwC 00:26:49.398 14:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 6c96c59a88b02911fc4fa912e35ffc2c06c3010996baeb65 0 00:26:49.398 14:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 6c96c59a88b02911fc4fa912e35ffc2c06c3010996baeb65 0 00:26:49.398 14:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:49.398 14:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # 
prefix=DHHC-1 00:26:49.398 14:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=6c96c59a88b02911fc4fa912e35ffc2c06c3010996baeb65 00:26:49.398 14:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:26:49.398 14:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:26:49.398 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.zwC 00:26:49.398 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.zwC 00:26:49.398 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.zwC 00:26:49.398 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:26:49.398 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:49.398 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:49.398 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:49.398 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:26:49.398 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:26:49.398 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:49.398 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=87438c16ace0a567a5066414bbc3d69af1928dcc03a1b7eb 00:26:49.398 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:26:49.398 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.iw6 00:26:49.398 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 87438c16ace0a567a5066414bbc3d69af1928dcc03a1b7eb 2 00:26:49.398 14:40:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 87438c16ace0a567a5066414bbc3d69af1928dcc03a1b7eb 2 00:26:49.398 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:49.398 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:49.398 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=87438c16ace0a567a5066414bbc3d69af1928dcc03a1b7eb 00:26:49.398 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:26:49.398 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:26:49.398 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.iw6 00:26:49.398 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.iw6 00:26:49.398 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.iw6 00:26:49.398 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:49.398 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:49.398 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:49.398 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:49.398 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:26:49.398 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:26:49.398 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:49.398 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=2a96a72d603e6038129b5c5537717fcc 00:26:49.398 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 
00:26:49.398 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.ODs 00:26:49.398 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 2a96a72d603e6038129b5c5537717fcc 1 00:26:49.398 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 2a96a72d603e6038129b5c5537717fcc 1 00:26:49.398 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:49.398 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:49.399 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=2a96a72d603e6038129b5c5537717fcc 00:26:49.399 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:26:49.399 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:26:49.658 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.ODs 00:26:49.658 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.ODs 00:26:49.658 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.ODs 00:26:49.658 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:49.658 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:49.658 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:49.658 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:49.658 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:26:49.658 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:26:49.658 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 
/dev/urandom 00:26:49.658 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=b5554846d9fd32e2f762986faf5dab1a 00:26:49.658 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:26:49.658 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.uxg 00:26:49.658 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key b5554846d9fd32e2f762986faf5dab1a 1 00:26:49.658 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 b5554846d9fd32e2f762986faf5dab1a 1 00:26:49.658 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:49.658 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:49.658 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=b5554846d9fd32e2f762986faf5dab1a 00:26:49.658 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:26:49.658 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:26:49.658 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.uxg 00:26:49.658 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.uxg 00:26:49.658 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.uxg 00:26:49.658 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:26:49.658 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:49.658 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:49.658 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:49.658 14:40:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:26:49.658 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:26:49.658 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:49.658 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=d3a53a80478ab954562f3d4f7725c7f4d116c479b72f1982 00:26:49.658 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:26:49.658 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.ObW 00:26:49.658 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key d3a53a80478ab954562f3d4f7725c7f4d116c479b72f1982 2 00:26:49.658 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 d3a53a80478ab954562f3d4f7725c7f4d116c479b72f1982 2 00:26:49.658 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:49.658 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:49.658 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=d3a53a80478ab954562f3d4f7725c7f4d116c479b72f1982 00:26:49.658 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:26:49.658 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:26:49.658 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.ObW 00:26:49.658 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.ObW 00:26:49.658 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.ObW 00:26:49.658 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:26:49.658 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # 
local digest len file key 00:26:49.658 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:49.658 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:49.658 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:26:49.658 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:26:49.659 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:49.659 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=fc260ce65ca4bff77b3a70ca23520d8f 00:26:49.659 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:26:49.659 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.y9Y 00:26:49.659 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key fc260ce65ca4bff77b3a70ca23520d8f 0 00:26:49.659 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 fc260ce65ca4bff77b3a70ca23520d8f 0 00:26:49.659 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:49.659 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:49.659 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=fc260ce65ca4bff77b3a70ca23520d8f 00:26:49.659 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:26:49.659 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:26:49.659 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.y9Y 00:26:49.659 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.y9Y 00:26:49.659 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 
-- # ckeys[3]=/tmp/spdk.key-null.y9Y 00:26:49.659 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:26:49.659 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:49.659 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:49.659 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:49.659 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:26:49.659 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:26:49.659 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:49.659 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=6ed30108cb3f61c97c6ee47f70ff3a842db3ba9407dd63d564a700013ca2970f 00:26:49.659 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:26:49.659 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.LWA 00:26:49.659 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 6ed30108cb3f61c97c6ee47f70ff3a842db3ba9407dd63d564a700013ca2970f 3 00:26:49.659 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 6ed30108cb3f61c97c6ee47f70ff3a842db3ba9407dd63d564a700013ca2970f 3 00:26:49.659 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:49.659 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:49.659 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=6ed30108cb3f61c97c6ee47f70ff3a842db3ba9407dd63d564a700013ca2970f 00:26:49.659 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:26:49.659 14:40:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:26:49.918 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.LWA 00:26:49.918 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.LWA 00:26:49.918 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.LWA 00:26:49.918 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:26:49.918 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3541051 00:26:49.918 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 3541051 ']' 00:26:49.918 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:49.918 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:49.918 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:49.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
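The repeated `nvmf/common.sh@749`–`@758` blocks above are SPDK's `gen_dhchap_key`/`format_dhchap_key` helpers: draw `len/2` random bytes with `xxd`, keep the ASCII hex string as the secret, and wrap it in a `DHHC-1:<digest-id>:<base64>:` envelope. A condensed, hypothetical reimplementation (function names mirror the trace; the base64 payload being the hex string plus a little-endian CRC-32 tail is an assumption based on the nvme-cli key convention):

```shell
# Sketch of the key helpers seen in the trace above -- not the verbatim
# nvmf/common.sh source, just the observable behavior.
format_dhchap_key() { # <hex-ascii-key> <digest-id 0..3>
    local key=$1 digest=$2
    # Assumption: payload is base64(ASCII-hex-key || CRC-32(key), LE), as in nvme-cli.
    python3 -c 'import base64, struct, sys, zlib
k = sys.argv[1].encode()
crc = struct.pack("<I", zlib.crc32(k))
print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(k + crc).decode()))' \
        "$key" "$digest"
}

gen_dhchap_key() { # <null|sha256|sha384|sha512> <len>
    local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    local digest=$1 len=$2 key
    # len hex characters == len/2 random bytes; -c0 keeps xxd on one line.
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
    format_dhchap_key "$key" "${digests[$digest]}"
}

gen_dhchap_key null 48   # e.g. DHHC-1:00:<72 base64 chars>:
```

Feeding it the `keys[1]` hex value from the trace (`6c96c5...baeb65`) reproduces the `DHHC-1:00:NmM5...` secret that later appears in the `nvmet_auth_set_key` step.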
00:26:49.918 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:49.918 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.918 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:49.918 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:26:49.918 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:49.918 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Yw4 00:26:49.918 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.918 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.918 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.918 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.87v ]] 00:26:49.918 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.87v 00:26:49.918 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.918 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.918 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.918 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:49.918 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.zwC 00:26:49.918 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.918 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:26:49.918 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.918 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.iw6 ]] 00:26:49.918 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.iw6 00:26:49.919 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.919 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.919 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.919 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:49.919 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.ODs 00:26:49.919 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.919 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.179 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.179 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.uxg ]] 00:26:50.179 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.uxg 00:26:50.179 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.179 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.179 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.179 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:50.179 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.ObW 00:26:50.179 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.179 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.180 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.180 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.y9Y ]] 00:26:50.180 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.y9Y 00:26:50.180 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.180 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.180 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.180 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:50.180 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.LWA 00:26:50.180 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.180 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.180 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.180 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:26:50.180 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:26:50.180 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:26:50.180 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:50.180 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:50.180 14:40:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:50.180 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:50.180 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:50.180 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:50.180 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:50.180 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:50.180 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:50.180 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:50.180 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:26:50.180 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:26:50.180 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:26:50.180 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:50.180 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:50.180 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:50.180 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # local block nvme 00:26:50.180 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # [[ ! 
-e /sys/module/nvmet ]] 00:26:50.180 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # modprobe nvmet 00:26:50.180 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:50.180 14:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:53.479 Waiting for block devices as requested 00:26:53.479 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:26:53.479 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:26:53.740 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:26:53.740 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:26:53.740 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:26:54.000 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:26:54.000 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:26:54.000 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:26:54.261 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:26:54.261 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:26:54.521 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:26:54.521 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:26:54.521 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:26:54.521 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:26:54.781 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:26:54.781 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:26:54.781 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:26:55.721 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:26:55.721 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:55.721 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:26:55.721 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:26:55.721 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:26:55.721 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:26:55.721 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:26:55.721 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:26:55.721 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:55.721 No valid GPT data, bailing 00:26:55.721 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:55.721 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:26:55.721 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:26:55.721 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:26:55.721 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:26:55.721 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:55.721 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:55.721 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:55.721 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:26:55.721 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo 1 00:26:55.721 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:26:55.721 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:26:55.721 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 
-- # echo 10.0.0.1 00:26:55.721 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # echo tcp 00:26:55.721 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 4420 00:26:55.721 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo ipv4 00:26:55.721 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:55.721 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:26:55.980 00:26:55.980 Discovery Log Number of Records 2, Generation counter 2 00:26:55.980 =====Discovery Log Entry 0====== 00:26:55.980 trtype: tcp 00:26:55.980 adrfam: ipv4 00:26:55.980 subtype: current discovery subsystem 00:26:55.980 treq: not specified, sq flow control disable supported 00:26:55.980 portid: 1 00:26:55.980 trsvcid: 4420 00:26:55.980 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:55.980 traddr: 10.0.0.1 00:26:55.980 eflags: none 00:26:55.980 sectype: none 00:26:55.980 =====Discovery Log Entry 1====== 00:26:55.980 trtype: tcp 00:26:55.980 adrfam: ipv4 00:26:55.980 subtype: nvme subsystem 00:26:55.980 treq: not specified, sq flow control disable supported 00:26:55.980 portid: 1 00:26:55.980 trsvcid: 4420 00:26:55.980 subnqn: nqn.2024-02.io.spdk:cnode0 00:26:55.980 traddr: 10.0.0.1 00:26:55.980 eflags: none 00:26:55.980 sectype: none 00:26:55.980 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:55.980 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:26:55.980 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:55.980 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:55.980 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:55.981 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:55.981 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:55.981 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:55.981 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmM5NmM1OWE4OGIwMjkxMWZjNGZhOTEyZTM1ZmZjMmMwNmMzMDEwOTk2YmFlYjY1mS0ZMA==: 00:26:55.981 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODc0MzhjMTZhY2UwYTU2N2E1MDY2NDE0YmJjM2Q2OWFmMTkyOGRjYzAzYTFiN2ViZQpjDA==: 00:26:55.981 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:55.981 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:55.981 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmM5NmM1OWE4OGIwMjkxMWZjNGZhOTEyZTM1ZmZjMmMwNmMzMDEwOTk2YmFlYjY1mS0ZMA==: 00:26:55.981 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODc0MzhjMTZhY2UwYTU2N2E1MDY2NDE0YmJjM2Q2OWFmMTkyOGRjYzAzYTFiN2ViZQpjDA==: ]] 00:26:55.981 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODc0MzhjMTZhY2UwYTU2N2E1MDY2NDE0YmJjM2Q2OWFmMTkyOGRjYzAzYTFiN2ViZQpjDA==: 00:26:55.981 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:26:55.981 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:26:55.981 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:26:55.981 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:55.981 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:26:55.981 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:55.981 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:26:55.981 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:55.981 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:55.981 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:55.981 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:55.981 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.981 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.981 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.981 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:55.981 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:55.981 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:55.981 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:55.981 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:55.981 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:55.981 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:55.981 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:55.981 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:55.981 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:55.981 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:55.981 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:55.981 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.981 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.981 nvme0n1 00:26:55.981 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.981 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:55.981 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.981 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:55.981 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.981 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.241 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:56.241 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:56.241 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:26:56.241 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.241 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.241 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:56.241 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:56.241 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:56.241 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:26:56.241 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:56.241 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:56.241 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:56.241 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:56.241 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDcyZjRhYjJkOTUzNDgxZDhmZTI5YjhkYjI1NWFhZjKbfehk: 00:26:56.241 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmZjMDk4ODY1M2ZjNDlkNDUxMWUxMGU4Njk4ODIyNDY0MTY0MTVkMjhlZWExZGU5NjU5YzQ2ZmYyNzg5ODI3MOE94dY=: 00:26:56.241 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:56.241 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:56.241 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDcyZjRhYjJkOTUzNDgxZDhmZTI5YjhkYjI1NWFhZjKbfehk: 00:26:56.241 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmZjMDk4ODY1M2ZjNDlkNDUxMWUxMGU4Njk4ODIyNDY0MTY0MTVkMjhlZWExZGU5NjU5YzQ2ZmYyNzg5ODI3MOE94dY=: ]] 00:26:56.241 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZmZjMDk4ODY1M2ZjNDlkNDUxMWUxMGU4Njk4ODIyNDY0MTY0MTVkMjhlZWExZGU5NjU5YzQ2ZmYyNzg5ODI3MOE94dY=: 00:26:56.241 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:26:56.241 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:56.241 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:56.241 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:56.241 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:56.241 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:56.241 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:56.241 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.241 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.241 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.241 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:56.241 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:56.241 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:56.241 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:56.241 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:56.241 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:56.241 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 
00:26:56.241 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:56.241 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:56.241 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:56.241 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:56.241 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:56.241 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.241 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.241 nvme0n1 00:26:56.241 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.241 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:56.241 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:56.241 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.241 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.241 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.241 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:56.241 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:56.241 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.241 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.501 14:40:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.502 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:56.502 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:56.502 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:56.502 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:56.502 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:56.502 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:56.502 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmM5NmM1OWE4OGIwMjkxMWZjNGZhOTEyZTM1ZmZjMmMwNmMzMDEwOTk2YmFlYjY1mS0ZMA==: 00:26:56.502 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODc0MzhjMTZhY2UwYTU2N2E1MDY2NDE0YmJjM2Q2OWFmMTkyOGRjYzAzYTFiN2ViZQpjDA==: 00:26:56.502 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:56.502 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:56.502 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmM5NmM1OWE4OGIwMjkxMWZjNGZhOTEyZTM1ZmZjMmMwNmMzMDEwOTk2YmFlYjY1mS0ZMA==: 00:26:56.502 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODc0MzhjMTZhY2UwYTU2N2E1MDY2NDE0YmJjM2Q2OWFmMTkyOGRjYzAzYTFiN2ViZQpjDA==: ]] 00:26:56.502 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODc0MzhjMTZhY2UwYTU2N2E1MDY2NDE0YmJjM2Q2OWFmMTkyOGRjYzAzYTFiN2ViZQpjDA==: 00:26:56.502 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:26:56.502 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:56.502 
14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:56.502 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:56.502 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:56.502 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:56.502 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:56.502 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.502 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.502 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.502 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:56.502 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:56.502 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:56.502 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:56.502 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:56.502 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:56.502 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:56.502 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:56.502 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:56.502 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:56.502 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:56.502 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:56.502 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.502 14:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.502 nvme0n1 00:26:56.502 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.502 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:56.502 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:56.502 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.502 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.502 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.502 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:56.502 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:56.502 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.502 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.502 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.502 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:56.502 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:56.502 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:56.502 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:56.502 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:56.502 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:56.502 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmE5NmE3MmQ2MDNlNjAzODEyOWI1YzU1Mzc3MTdmY2McDOqQ: 00:26:56.502 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjU1NTQ4NDZkOWZkMzJlMmY3NjI5ODZmYWY1ZGFiMWH/l5s/: 00:26:56.502 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:56.502 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:56.502 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmE5NmE3MmQ2MDNlNjAzODEyOWI1YzU1Mzc3MTdmY2McDOqQ: 00:26:56.502 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjU1NTQ4NDZkOWZkMzJlMmY3NjI5ODZmYWY1ZGFiMWH/l5s/: ]] 00:26:56.502 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjU1NTQ4NDZkOWZkMzJlMmY3NjI5ODZmYWY1ZGFiMWH/l5s/: 00:26:56.502 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:26:56.502 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:56.502 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:56.502 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:56.502 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:56.502 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:56.502 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:56.502 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.502 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.502 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.763 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:56.763 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:56.763 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:56.763 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:56.763 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:56.763 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:56.763 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:56.763 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:56.763 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:56.763 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:56.763 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:56.763 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:56.763 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.763 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:26:56.763 nvme0n1 00:26:56.763 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.763 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:56.763 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:56.763 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.763 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.763 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.763 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:56.763 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:56.763 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.763 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.763 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.763 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:56.763 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:26:56.763 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:56.763 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:56.763 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:56.763 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:56.763 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZDNhNTNhODA0NzhhYjk1NDU2MmYzZDRmNzcyNWM3ZjRkMTE2YzQ3OWI3MmYxOTgyYJgUHg==: 00:26:56.763 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmMyNjBjZTY1Y2E0YmZmNzdiM2E3MGNhMjM1MjBkOGbfXuvP: 00:26:56.763 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:56.763 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:56.763 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDNhNTNhODA0NzhhYjk1NDU2MmYzZDRmNzcyNWM3ZjRkMTE2YzQ3OWI3MmYxOTgyYJgUHg==: 00:26:56.763 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmMyNjBjZTY1Y2E0YmZmNzdiM2E3MGNhMjM1MjBkOGbfXuvP: ]] 00:26:56.763 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmMyNjBjZTY1Y2E0YmZmNzdiM2E3MGNhMjM1MjBkOGbfXuvP: 00:26:56.763 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:26:56.763 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:56.763 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:56.763 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:56.763 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:56.763 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:56.763 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:56.763 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.763 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.763 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.763 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:56.763 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:56.763 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:56.763 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:56.763 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:56.763 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:56.763 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:56.763 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:56.763 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:56.763 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:56.763 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:56.763 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:56.763 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.763 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.023 nvme0n1 00:26:57.023 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.023 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:57.023 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:26:57.023 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.023 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.023 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.023 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:57.023 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:57.023 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.023 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.023 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.023 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:57.023 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:26:57.023 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:57.023 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:57.023 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:57.023 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:57.023 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmVkMzAxMDhjYjNmNjFjOTdjNmVlNDdmNzBmZjNhODQyZGIzYmE5NDA3ZGQ2M2Q1NjRhNzAwMDEzY2EyOTcwZj32zh8=: 00:26:57.023 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:57.023 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:57.023 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:57.023 14:40:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmVkMzAxMDhjYjNmNjFjOTdjNmVlNDdmNzBmZjNhODQyZGIzYmE5NDA3ZGQ2M2Q1NjRhNzAwMDEzY2EyOTcwZj32zh8=: 00:26:57.023 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:57.023 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:26:57.023 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:57.023 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:57.023 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:57.023 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:57.023 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:57.023 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:57.023 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.023 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.023 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.023 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:57.023 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:57.023 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:57.023 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:57.023 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.023 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.023 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:57.023 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.023 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:57.023 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:57.023 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:57.023 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:57.023 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.023 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.283 nvme0n1 00:26:57.283 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.283 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:57.283 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:57.283 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.283 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.283 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.283 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:57.283 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:57.283 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.283 
14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.283 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.283 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:57.283 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:57.283 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:26:57.283 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:57.283 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:57.283 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:57.283 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:57.283 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDcyZjRhYjJkOTUzNDgxZDhmZTI5YjhkYjI1NWFhZjKbfehk: 00:26:57.283 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmZjMDk4ODY1M2ZjNDlkNDUxMWUxMGU4Njk4ODIyNDY0MTY0MTVkMjhlZWExZGU5NjU5YzQ2ZmYyNzg5ODI3MOE94dY=: 00:26:57.284 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:57.284 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:57.284 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDcyZjRhYjJkOTUzNDgxZDhmZTI5YjhkYjI1NWFhZjKbfehk: 00:26:57.284 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmZjMDk4ODY1M2ZjNDlkNDUxMWUxMGU4Njk4ODIyNDY0MTY0MTVkMjhlZWExZGU5NjU5YzQ2ZmYyNzg5ODI3MOE94dY=: ]] 00:26:57.284 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmZjMDk4ODY1M2ZjNDlkNDUxMWUxMGU4Njk4ODIyNDY0MTY0MTVkMjhlZWExZGU5NjU5YzQ2ZmYyNzg5ODI3MOE94dY=: 00:26:57.284 
14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:26:57.284 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:57.284 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:57.284 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:57.284 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:57.284 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:57.284 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:57.284 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.284 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.284 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.284 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:57.284 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:57.284 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:57.284 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:57.284 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.284 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.284 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:57.284 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.284 14:40:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:57.284 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:57.284 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:57.284 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:57.284 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.284 14:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.544 nvme0n1 00:26:57.544 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.544 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:57.544 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:57.544 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.544 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.544 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.544 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:57.544 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:57.544 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.544 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.544 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.544 14:40:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:57.544 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:26:57.544 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:57.544 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:57.544 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:57.544 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:57.544 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmM5NmM1OWE4OGIwMjkxMWZjNGZhOTEyZTM1ZmZjMmMwNmMzMDEwOTk2YmFlYjY1mS0ZMA==: 00:26:57.544 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODc0MzhjMTZhY2UwYTU2N2E1MDY2NDE0YmJjM2Q2OWFmMTkyOGRjYzAzYTFiN2ViZQpjDA==: 00:26:57.544 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:57.544 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:57.544 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmM5NmM1OWE4OGIwMjkxMWZjNGZhOTEyZTM1ZmZjMmMwNmMzMDEwOTk2YmFlYjY1mS0ZMA==: 00:26:57.544 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODc0MzhjMTZhY2UwYTU2N2E1MDY2NDE0YmJjM2Q2OWFmMTkyOGRjYzAzYTFiN2ViZQpjDA==: ]] 00:26:57.544 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODc0MzhjMTZhY2UwYTU2N2E1MDY2NDE0YmJjM2Q2OWFmMTkyOGRjYzAzYTFiN2ViZQpjDA==: 00:26:57.544 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:26:57.544 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:57.544 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:57.544 14:40:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:57.544 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:57.544 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:57.544 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:57.544 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.544 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.544 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.544 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:57.544 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:57.544 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:57.544 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:57.544 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.544 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.544 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:57.544 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.544 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:57.544 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:57.544 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:57.544 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:57.544 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.544 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.804 nvme0n1 00:26:57.804 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.804 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:57.804 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:57.804 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.804 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.804 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.804 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:57.804 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:57.804 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.804 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.804 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.804 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:57.804 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:26:57.804 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:57.804 14:40:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:57.804 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:57.804 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:57.804 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmE5NmE3MmQ2MDNlNjAzODEyOWI1YzU1Mzc3MTdmY2McDOqQ: 00:26:57.804 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjU1NTQ4NDZkOWZkMzJlMmY3NjI5ODZmYWY1ZGFiMWH/l5s/: 00:26:57.804 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:57.804 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:57.804 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmE5NmE3MmQ2MDNlNjAzODEyOWI1YzU1Mzc3MTdmY2McDOqQ: 00:26:57.804 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjU1NTQ4NDZkOWZkMzJlMmY3NjI5ODZmYWY1ZGFiMWH/l5s/: ]] 00:26:57.804 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjU1NTQ4NDZkOWZkMzJlMmY3NjI5ODZmYWY1ZGFiMWH/l5s/: 00:26:57.804 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:26:57.805 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:57.805 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:57.805 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:57.805 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:57.805 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:57.805 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:26:57.805 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.805 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.805 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.805 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:57.805 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:57.805 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:57.805 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:57.805 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.805 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.805 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:57.805 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.805 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:57.805 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:57.805 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:57.805 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:57.805 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.805 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.065 nvme0n1 00:26:58.065 14:40:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.065 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:58.065 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:58.065 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.065 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.065 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.065 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:58.065 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:58.065 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.065 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.065 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.065 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:58.065 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:26:58.065 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:58.065 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:58.065 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:58.065 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:58.065 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDNhNTNhODA0NzhhYjk1NDU2MmYzZDRmNzcyNWM3ZjRkMTE2YzQ3OWI3MmYxOTgyYJgUHg==: 00:26:58.065 14:40:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmMyNjBjZTY1Y2E0YmZmNzdiM2E3MGNhMjM1MjBkOGbfXuvP: 00:26:58.065 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:58.065 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:58.065 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDNhNTNhODA0NzhhYjk1NDU2MmYzZDRmNzcyNWM3ZjRkMTE2YzQ3OWI3MmYxOTgyYJgUHg==: 00:26:58.066 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmMyNjBjZTY1Y2E0YmZmNzdiM2E3MGNhMjM1MjBkOGbfXuvP: ]] 00:26:58.066 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmMyNjBjZTY1Y2E0YmZmNzdiM2E3MGNhMjM1MjBkOGbfXuvP: 00:26:58.066 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:26:58.066 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:58.066 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:58.066 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:58.066 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:58.066 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:58.066 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:58.066 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.066 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.066 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.066 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:26:58.066 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:58.066 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:58.066 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:58.066 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:58.066 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:58.066 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:58.066 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:58.066 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:58.066 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:58.066 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:58.066 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:58.066 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.066 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.326 nvme0n1 00:26:58.326 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.326 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:58.326 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:58.326 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:26:58.326 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.326 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.326 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:58.326 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:58.326 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.326 14:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.326 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.326 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:58.326 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:26:58.326 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:58.326 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:58.326 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:58.326 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:58.326 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmVkMzAxMDhjYjNmNjFjOTdjNmVlNDdmNzBmZjNhODQyZGIzYmE5NDA3ZGQ2M2Q1NjRhNzAwMDEzY2EyOTcwZj32zh8=: 00:26:58.326 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:58.326 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:58.326 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:58.326 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NmVkMzAxMDhjYjNmNjFjOTdjNmVlNDdmNzBmZjNhODQyZGIzYmE5NDA3ZGQ2M2Q1NjRhNzAwMDEzY2EyOTcwZj32zh8=: 00:26:58.326 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:58.326 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:26:58.326 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:58.326 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:58.327 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:58.327 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:58.327 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:58.327 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:58.327 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.327 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.327 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.327 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:58.327 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:58.327 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:58.327 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:58.327 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:58.327 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:58.327 14:40:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:58.327 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:58.327 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:58.327 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:58.327 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:58.327 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:58.327 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.327 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.587 nvme0n1 00:26:58.587 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.587 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:58.587 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:58.587 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.587 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.587 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.587 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:58.587 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:58.587 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.587 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:58.587 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.587 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:58.587 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:58.587 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:26:58.587 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:58.587 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:58.587 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:58.587 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:58.587 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDcyZjRhYjJkOTUzNDgxZDhmZTI5YjhkYjI1NWFhZjKbfehk: 00:26:58.587 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmZjMDk4ODY1M2ZjNDlkNDUxMWUxMGU4Njk4ODIyNDY0MTY0MTVkMjhlZWExZGU5NjU5YzQ2ZmYyNzg5ODI3MOE94dY=: 00:26:58.587 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:58.587 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:58.587 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDcyZjRhYjJkOTUzNDgxZDhmZTI5YjhkYjI1NWFhZjKbfehk: 00:26:58.587 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmZjMDk4ODY1M2ZjNDlkNDUxMWUxMGU4Njk4ODIyNDY0MTY0MTVkMjhlZWExZGU5NjU5YzQ2ZmYyNzg5ODI3MOE94dY=: ]] 00:26:58.587 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmZjMDk4ODY1M2ZjNDlkNDUxMWUxMGU4Njk4ODIyNDY0MTY0MTVkMjhlZWExZGU5NjU5YzQ2ZmYyNzg5ODI3MOE94dY=: 00:26:58.587 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:26:58.587 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:58.587 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:58.587 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:58.587 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:58.587 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:58.587 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:58.587 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.587 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.587 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.587 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:58.587 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:58.588 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:58.588 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:58.588 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:58.588 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:58.588 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:58.588 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:58.588 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # 
ip=NVMF_INITIATOR_IP 00:26:58.588 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:58.588 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:58.588 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:58.588 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.588 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.848 nvme0n1 00:26:59.108 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.108 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:59.108 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:59.108 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.109 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.109 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.109 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:59.109 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:59.109 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.109 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.109 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.109 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
00:26:59.109 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:26:59.109 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:59.109 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:59.109 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:59.109 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:59.109 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmM5NmM1OWE4OGIwMjkxMWZjNGZhOTEyZTM1ZmZjMmMwNmMzMDEwOTk2YmFlYjY1mS0ZMA==: 00:26:59.109 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODc0MzhjMTZhY2UwYTU2N2E1MDY2NDE0YmJjM2Q2OWFmMTkyOGRjYzAzYTFiN2ViZQpjDA==: 00:26:59.109 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:59.109 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:59.109 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmM5NmM1OWE4OGIwMjkxMWZjNGZhOTEyZTM1ZmZjMmMwNmMzMDEwOTk2YmFlYjY1mS0ZMA==: 00:26:59.109 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODc0MzhjMTZhY2UwYTU2N2E1MDY2NDE0YmJjM2Q2OWFmMTkyOGRjYzAzYTFiN2ViZQpjDA==: ]] 00:26:59.109 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODc0MzhjMTZhY2UwYTU2N2E1MDY2NDE0YmJjM2Q2OWFmMTkyOGRjYzAzYTFiN2ViZQpjDA==: 00:26:59.109 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:26:59.109 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:59.109 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:59.109 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:59.109 
14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:59.109 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:59.109 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:59.109 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.109 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.109 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.109 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:59.109 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:59.109 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:59.109 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:59.109 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:59.109 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:59.109 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:59.109 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:59.109 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:59.109 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:59.109 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:59.109 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:59.109 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.109 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.369 nvme0n1 00:26:59.369 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.369 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:59.369 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:59.369 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.369 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.369 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.369 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:59.369 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:59.369 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.369 14:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.369 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.369 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:59.369 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:26:59.370 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:59.370 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:59.370 14:40:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:59.370 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:59.370 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmE5NmE3MmQ2MDNlNjAzODEyOWI1YzU1Mzc3MTdmY2McDOqQ: 00:26:59.370 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjU1NTQ4NDZkOWZkMzJlMmY3NjI5ODZmYWY1ZGFiMWH/l5s/: 00:26:59.370 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:59.370 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:59.370 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmE5NmE3MmQ2MDNlNjAzODEyOWI1YzU1Mzc3MTdmY2McDOqQ: 00:26:59.370 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjU1NTQ4NDZkOWZkMzJlMmY3NjI5ODZmYWY1ZGFiMWH/l5s/: ]] 00:26:59.370 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjU1NTQ4NDZkOWZkMzJlMmY3NjI5ODZmYWY1ZGFiMWH/l5s/: 00:26:59.370 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:26:59.370 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:59.370 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:59.370 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:59.370 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:59.370 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:59.370 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:59.370 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:26:59.370 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.370 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.370 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:59.370 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:59.370 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:59.370 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:59.370 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:59.370 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:59.370 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:59.370 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:59.370 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:59.370 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:59.370 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:59.370 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:59.370 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.370 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.630 nvme0n1 00:26:59.630 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.630 14:40:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:59.630 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:59.630 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.630 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.630 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.630 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:59.630 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:59.630 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.630 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.890 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.890 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:59.890 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:26:59.890 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:59.890 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:59.890 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:59.890 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:59.890 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDNhNTNhODA0NzhhYjk1NDU2MmYzZDRmNzcyNWM3ZjRkMTE2YzQ3OWI3MmYxOTgyYJgUHg==: 00:26:59.890 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmMyNjBjZTY1Y2E0YmZmNzdiM2E3MGNhMjM1MjBkOGbfXuvP: 00:26:59.890 
14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:59.890 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:59.890 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDNhNTNhODA0NzhhYjk1NDU2MmYzZDRmNzcyNWM3ZjRkMTE2YzQ3OWI3MmYxOTgyYJgUHg==: 00:26:59.890 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmMyNjBjZTY1Y2E0YmZmNzdiM2E3MGNhMjM1MjBkOGbfXuvP: ]] 00:26:59.890 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmMyNjBjZTY1Y2E0YmZmNzdiM2E3MGNhMjM1MjBkOGbfXuvP: 00:26:59.890 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:26:59.890 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:59.890 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:59.890 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:59.890 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:59.890 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:59.890 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:59.890 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.890 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.890 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.890 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:59.890 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:59.890 14:40:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:59.890 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:59.890 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:59.890 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:59.890 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:59.890 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:59.890 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:59.890 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:59.890 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:59.890 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:59.890 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.890 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.150 nvme0n1 00:27:00.150 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.150 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:00.150 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:00.150 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.150 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.150 14:40:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.150 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:00.150 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:00.150 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.150 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.150 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.150 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:00.150 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:27:00.150 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:00.150 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:00.150 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:00.150 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:00.150 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmVkMzAxMDhjYjNmNjFjOTdjNmVlNDdmNzBmZjNhODQyZGIzYmE5NDA3ZGQ2M2Q1NjRhNzAwMDEzY2EyOTcwZj32zh8=: 00:27:00.150 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:00.150 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:00.150 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:00.150 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmVkMzAxMDhjYjNmNjFjOTdjNmVlNDdmNzBmZjNhODQyZGIzYmE5NDA3ZGQ2M2Q1NjRhNzAwMDEzY2EyOTcwZj32zh8=: 00:27:00.150 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 00:27:00.150 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:27:00.150 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:00.150 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:00.150 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:00.150 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:00.150 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:00.150 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:00.150 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.150 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.150 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.150 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:00.150 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:00.150 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:00.150 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:00.150 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:00.150 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:00.150 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:00.150 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:00.150 
14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:00.150 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:00.150 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:00.150 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:00.150 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.150 14:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.410 nvme0n1 00:27:00.410 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.410 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:00.410 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:00.410 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.410 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.410 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.410 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:00.410 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:00.410 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.410 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.410 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.410 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:00.410 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:00.410 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:27:00.410 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:00.410 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:00.410 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:00.410 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:00.410 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDcyZjRhYjJkOTUzNDgxZDhmZTI5YjhkYjI1NWFhZjKbfehk: 00:27:00.410 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmZjMDk4ODY1M2ZjNDlkNDUxMWUxMGU4Njk4ODIyNDY0MTY0MTVkMjhlZWExZGU5NjU5YzQ2ZmYyNzg5ODI3MOE94dY=: 00:27:00.410 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:00.410 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:00.411 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDcyZjRhYjJkOTUzNDgxZDhmZTI5YjhkYjI1NWFhZjKbfehk: 00:27:00.411 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmZjMDk4ODY1M2ZjNDlkNDUxMWUxMGU4Njk4ODIyNDY0MTY0MTVkMjhlZWExZGU5NjU5YzQ2ZmYyNzg5ODI3MOE94dY=: ]] 00:27:00.411 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmZjMDk4ODY1M2ZjNDlkNDUxMWUxMGU4Njk4ODIyNDY0MTY0MTVkMjhlZWExZGU5NjU5YzQ2ZmYyNzg5ODI3MOE94dY=: 00:27:00.411 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:27:00.411 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:00.411 14:40:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:00.411 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:00.411 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:00.411 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:00.411 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:00.411 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.411 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.411 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.411 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:00.411 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:00.411 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:00.411 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:00.411 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:00.411 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:00.411 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:00.411 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:00.411 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:00.411 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:00.411 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:00.411 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:00.411 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.411 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.981 nvme0n1 00:27:00.981 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.981 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:00.981 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:00.981 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.981 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.981 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.981 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:00.981 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:00.981 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.981 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.981 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.981 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:00.981 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:27:00.981 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:00.981 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:00.981 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:00.981 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:00.981 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmM5NmM1OWE4OGIwMjkxMWZjNGZhOTEyZTM1ZmZjMmMwNmMzMDEwOTk2YmFlYjY1mS0ZMA==: 00:27:00.981 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODc0MzhjMTZhY2UwYTU2N2E1MDY2NDE0YmJjM2Q2OWFmMTkyOGRjYzAzYTFiN2ViZQpjDA==: 00:27:00.981 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:00.981 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:00.981 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmM5NmM1OWE4OGIwMjkxMWZjNGZhOTEyZTM1ZmZjMmMwNmMzMDEwOTk2YmFlYjY1mS0ZMA==: 00:27:00.981 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODc0MzhjMTZhY2UwYTU2N2E1MDY2NDE0YmJjM2Q2OWFmMTkyOGRjYzAzYTFiN2ViZQpjDA==: ]] 00:27:00.981 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODc0MzhjMTZhY2UwYTU2N2E1MDY2NDE0YmJjM2Q2OWFmMTkyOGRjYzAzYTFiN2ViZQpjDA==: 00:27:00.981 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:27:00.981 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:00.981 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:00.981 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:00.981 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:00.981 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:00.981 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:00.981 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.981 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.981 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.981 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:00.981 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:00.981 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:00.981 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:00.981 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:00.981 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:00.981 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:00.981 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:00.981 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:00.981 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:00.981 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:00.981 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:00.981 14:40:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.981 14:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.551 nvme0n1 00:27:01.551 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.551 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:01.551 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:01.551 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.551 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.551 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.551 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:01.551 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:01.551 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.551 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.551 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.551 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:01.551 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:27:01.551 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:01.551 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:01.551 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:01.551 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=2 00:27:01.551 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmE5NmE3MmQ2MDNlNjAzODEyOWI1YzU1Mzc3MTdmY2McDOqQ: 00:27:01.551 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjU1NTQ4NDZkOWZkMzJlMmY3NjI5ODZmYWY1ZGFiMWH/l5s/: 00:27:01.551 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:01.551 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:01.551 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmE5NmE3MmQ2MDNlNjAzODEyOWI1YzU1Mzc3MTdmY2McDOqQ: 00:27:01.551 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjU1NTQ4NDZkOWZkMzJlMmY3NjI5ODZmYWY1ZGFiMWH/l5s/: ]] 00:27:01.551 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjU1NTQ4NDZkOWZkMzJlMmY3NjI5ODZmYWY1ZGFiMWH/l5s/: 00:27:01.551 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:27:01.551 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:01.551 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:01.551 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:01.551 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:01.551 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:01.551 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:01.551 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.551 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.551 14:40:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.551 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:01.551 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:01.551 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:01.551 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:01.551 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:01.551 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:01.552 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:01.552 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:01.552 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:01.552 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:01.552 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:01.552 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:01.552 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.552 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.122 nvme0n1 00:27:02.122 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.122 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:02.122 14:40:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:02.122 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.122 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.122 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.122 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:02.122 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:02.122 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.122 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.122 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.122 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:02.122 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:27:02.122 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:02.122 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:02.122 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:02.122 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:02.122 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDNhNTNhODA0NzhhYjk1NDU2MmYzZDRmNzcyNWM3ZjRkMTE2YzQ3OWI3MmYxOTgyYJgUHg==: 00:27:02.122 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmMyNjBjZTY1Y2E0YmZmNzdiM2E3MGNhMjM1MjBkOGbfXuvP: 00:27:02.122 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:02.122 14:40:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:02.122 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDNhNTNhODA0NzhhYjk1NDU2MmYzZDRmNzcyNWM3ZjRkMTE2YzQ3OWI3MmYxOTgyYJgUHg==: 00:27:02.122 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmMyNjBjZTY1Y2E0YmZmNzdiM2E3MGNhMjM1MjBkOGbfXuvP: ]] 00:27:02.122 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmMyNjBjZTY1Y2E0YmZmNzdiM2E3MGNhMjM1MjBkOGbfXuvP: 00:27:02.122 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:27:02.122 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:02.122 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:02.122 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:02.122 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:02.122 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:02.122 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:02.122 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.122 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.122 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.122 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:02.122 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:02.122 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:02.122 14:40:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:02.122 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:02.122 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:02.122 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:02.122 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:02.123 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:02.123 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:02.123 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:02.123 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:02.123 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.123 14:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.693 nvme0n1 00:27:02.693 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.693 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:02.693 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:02.693 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.693 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.693 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.693 14:40:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:02.693 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:02.693 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.693 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.693 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.693 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:02.693 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:27:02.693 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:02.693 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:02.693 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:02.693 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:02.693 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmVkMzAxMDhjYjNmNjFjOTdjNmVlNDdmNzBmZjNhODQyZGIzYmE5NDA3ZGQ2M2Q1NjRhNzAwMDEzY2EyOTcwZj32zh8=: 00:27:02.693 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:02.693 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:02.693 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:02.693 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmVkMzAxMDhjYjNmNjFjOTdjNmVlNDdmNzBmZjNhODQyZGIzYmE5NDA3ZGQ2M2Q1NjRhNzAwMDEzY2EyOTcwZj32zh8=: 00:27:02.693 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:02.693 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4 00:27:02.693 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:02.693 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:02.693 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:02.693 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:02.693 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:02.693 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:02.693 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.693 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.693 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.693 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:02.693 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:02.693 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:02.693 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:02.693 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:02.693 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:02.693 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:02.693 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:02.693 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:02.693 14:40:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:02.693 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:02.693 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:02.693 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.693 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.263 nvme0n1 00:27:03.263 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.263 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:03.263 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.263 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:03.263 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.263 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.263 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:03.263 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:03.263 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.263 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.263 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.263 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:03.263 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:03.263 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:27:03.263 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:03.263 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:03.263 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:03.263 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:03.263 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDcyZjRhYjJkOTUzNDgxZDhmZTI5YjhkYjI1NWFhZjKbfehk: 00:27:03.263 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmZjMDk4ODY1M2ZjNDlkNDUxMWUxMGU4Njk4ODIyNDY0MTY0MTVkMjhlZWExZGU5NjU5YzQ2ZmYyNzg5ODI3MOE94dY=: 00:27:03.263 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:03.263 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:03.263 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDcyZjRhYjJkOTUzNDgxZDhmZTI5YjhkYjI1NWFhZjKbfehk: 00:27:03.263 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmZjMDk4ODY1M2ZjNDlkNDUxMWUxMGU4Njk4ODIyNDY0MTY0MTVkMjhlZWExZGU5NjU5YzQ2ZmYyNzg5ODI3MOE94dY=: ]] 00:27:03.263 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmZjMDk4ODY1M2ZjNDlkNDUxMWUxMGU4Njk4ODIyNDY0MTY0MTVkMjhlZWExZGU5NjU5YzQ2ZmYyNzg5ODI3MOE94dY=: 00:27:03.263 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:27:03.263 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:03.263 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:03.263 14:40:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:03.263 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:03.263 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:03.263 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:03.263 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.263 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.263 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.263 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:03.263 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:03.263 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:03.263 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:03.263 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:03.263 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:03.263 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:03.263 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:03.263 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:03.263 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:03.263 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:03.263 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:03.263 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.263 14:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.833 nvme0n1 00:27:03.833 14:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.833 14:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:03.833 14:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:03.833 14:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.833 14:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.833 14:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.093 14:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:04.093 14:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:04.093 14:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.093 14:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.093 14:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.093 14:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:04.093 14:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:27:04.093 14:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:04.093 14:40:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:04.093 14:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:04.093 14:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:04.093 14:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmM5NmM1OWE4OGIwMjkxMWZjNGZhOTEyZTM1ZmZjMmMwNmMzMDEwOTk2YmFlYjY1mS0ZMA==: 00:27:04.093 14:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODc0MzhjMTZhY2UwYTU2N2E1MDY2NDE0YmJjM2Q2OWFmMTkyOGRjYzAzYTFiN2ViZQpjDA==: 00:27:04.093 14:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:04.093 14:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:04.093 14:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmM5NmM1OWE4OGIwMjkxMWZjNGZhOTEyZTM1ZmZjMmMwNmMzMDEwOTk2YmFlYjY1mS0ZMA==: 00:27:04.093 14:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODc0MzhjMTZhY2UwYTU2N2E1MDY2NDE0YmJjM2Q2OWFmMTkyOGRjYzAzYTFiN2ViZQpjDA==: ]] 00:27:04.093 14:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODc0MzhjMTZhY2UwYTU2N2E1MDY2NDE0YmJjM2Q2OWFmMTkyOGRjYzAzYTFiN2ViZQpjDA==: 00:27:04.093 14:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:27:04.093 14:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:04.093 14:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:04.093 14:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:04.093 14:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:04.093 14:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:04.093 14:40:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:04.093 14:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.093 14:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.093 14:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.093 14:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:04.093 14:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:04.093 14:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:04.093 14:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:04.093 14:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:04.093 14:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:04.093 14:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:04.093 14:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:04.093 14:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:04.093 14:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:04.093 14:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:04.093 14:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:04.093 14:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.093 14:40:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.664 nvme0n1 00:27:04.664 14:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.664 14:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:04.664 14:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:04.664 14:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.664 14:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.664 14:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.924 14:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:04.924 14:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:04.924 14:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.924 14:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.924 14:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.924 14:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:04.924 14:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:27:04.924 14:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:04.924 14:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:04.924 14:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:04.924 14:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:04.924 14:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:MmE5NmE3MmQ2MDNlNjAzODEyOWI1YzU1Mzc3MTdmY2McDOqQ: 00:27:04.924 14:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjU1NTQ4NDZkOWZkMzJlMmY3NjI5ODZmYWY1ZGFiMWH/l5s/: 00:27:04.924 14:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:04.924 14:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:04.924 14:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmE5NmE3MmQ2MDNlNjAzODEyOWI1YzU1Mzc3MTdmY2McDOqQ: 00:27:04.924 14:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjU1NTQ4NDZkOWZkMzJlMmY3NjI5ODZmYWY1ZGFiMWH/l5s/: ]] 00:27:04.924 14:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjU1NTQ4NDZkOWZkMzJlMmY3NjI5ODZmYWY1ZGFiMWH/l5s/: 00:27:04.924 14:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:27:04.925 14:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:04.925 14:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:04.925 14:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:04.925 14:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:04.925 14:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:04.925 14:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:04.925 14:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.925 14:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.925 14:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.925 14:40:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:04.925 14:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:04.925 14:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:04.925 14:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:04.925 14:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:04.925 14:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:04.925 14:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:04.925 14:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:04.925 14:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:04.925 14:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:04.925 14:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:04.925 14:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:04.925 14:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.925 14:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.496 nvme0n1 00:27:05.496 14:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.757 14:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:05.757 14:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:05.757 14:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.757 14:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.757 14:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.757 14:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:05.757 14:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:05.757 14:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.757 14:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.757 14:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.757 14:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:05.757 14:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:27:05.757 14:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:05.757 14:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:05.757 14:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:05.757 14:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:05.757 14:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDNhNTNhODA0NzhhYjk1NDU2MmYzZDRmNzcyNWM3ZjRkMTE2YzQ3OWI3MmYxOTgyYJgUHg==: 00:27:05.757 14:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmMyNjBjZTY1Y2E0YmZmNzdiM2E3MGNhMjM1MjBkOGbfXuvP: 00:27:05.757 14:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:05.757 14:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:05.757 14:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:02:ZDNhNTNhODA0NzhhYjk1NDU2MmYzZDRmNzcyNWM3ZjRkMTE2YzQ3OWI3MmYxOTgyYJgUHg==: 00:27:05.757 14:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmMyNjBjZTY1Y2E0YmZmNzdiM2E3MGNhMjM1MjBkOGbfXuvP: ]] 00:27:05.757 14:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmMyNjBjZTY1Y2E0YmZmNzdiM2E3MGNhMjM1MjBkOGbfXuvP: 00:27:05.757 14:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:27:05.757 14:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:05.757 14:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:05.757 14:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:05.757 14:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:05.757 14:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:05.757 14:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:05.757 14:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.757 14:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.757 14:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.757 14:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:05.757 14:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:05.757 14:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:05.757 14:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:05.757 14:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:05.757 14:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:05.757 14:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:27:05.757 14:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:05.757 14:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:27:05.757 14:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:27:05.757 14:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:27:05.757 14:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:27:05.757 14:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:05.757 14:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:06.329 nvme0n1
00:27:06.329 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:06.329 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:06.589 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:06.589 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:06.589 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:06.589 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:06.589 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:06.589 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:06.589 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:06.589 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:06.589 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:06.589 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:06.589 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4
00:27:06.589 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:06.589 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:06.589 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:27:06.589 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:27:06.589 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmVkMzAxMDhjYjNmNjFjOTdjNmVlNDdmNzBmZjNhODQyZGIzYmE5NDA3ZGQ2M2Q1NjRhNzAwMDEzY2EyOTcwZj32zh8=:
00:27:06.589 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:27:06.589 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:06.589 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:27:06.589 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmVkMzAxMDhjYjNmNjFjOTdjNmVlNDdmNzBmZjNhODQyZGIzYmE5NDA3ZGQ2M2Q1NjRhNzAwMDEzY2EyOTcwZj32zh8=:
00:27:06.589 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:27:06.589 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4
00:27:06.589 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:06.589 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:06.589 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:27:06.589 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:27:06.589 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:06.589 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:27:06.589 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:06.589 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:06.589 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:06.589 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:06.589 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:27:06.589 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:27:06.589 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:27:06.589 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:06.589 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:06.589 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:27:06.589 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:06.589 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:27:06.589 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:27:06.589 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:27:06.589 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:27:06.589 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:06.589 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:07.160 nvme0n1
00:27:07.160 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:07.420 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:07.420 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:07.420 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:07.420 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:07.420 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:07.420 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:07.420 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:07.420 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:07.420 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:07.420 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:07.420 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:27:07.420 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:27:07.420 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:07.420 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0
00:27:07.420 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:07.420 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:07.420 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:07.420 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:27:07.420 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDcyZjRhYjJkOTUzNDgxZDhmZTI5YjhkYjI1NWFhZjKbfehk:
00:27:07.420 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmZjMDk4ODY1M2ZjNDlkNDUxMWUxMGU4Njk4ODIyNDY0MTY0MTVkMjhlZWExZGU5NjU5YzQ2ZmYyNzg5ODI3MOE94dY=:
00:27:07.420 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:07.420 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:27:07.420 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDcyZjRhYjJkOTUzNDgxZDhmZTI5YjhkYjI1NWFhZjKbfehk:
00:27:07.420 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmZjMDk4ODY1M2ZjNDlkNDUxMWUxMGU4Njk4ODIyNDY0MTY0MTVkMjhlZWExZGU5NjU5YzQ2ZmYyNzg5ODI3MOE94dY=: ]]
00:27:07.420 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmZjMDk4ODY1M2ZjNDlkNDUxMWUxMGU4Njk4ODIyNDY0MTY0MTVkMjhlZWExZGU5NjU5YzQ2ZmYyNzg5ODI3MOE94dY=:
00:27:07.420 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0
00:27:07.420 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:07.420 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:07.420 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:27:07.420 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:27:07.421 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:07.421 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:27:07.421 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:07.421 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:07.421 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:07.421 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:07.421 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:27:07.421 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:27:07.421 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:27:07.421 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:07.421 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:07.421 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:27:07.421 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:07.421 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:27:07.421 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:27:07.421 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:27:07.421 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:27:07.421 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:07.421 14:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:07.421 nvme0n1
00:27:07.421 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:07.421 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:07.421 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:07.421 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:07.421 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:07.421 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:07.681 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:07.681 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:07.681 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:07.681 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:07.681 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:07.681 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:07.681 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1
00:27:07.681 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:07.681 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:07.681 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:07.681 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:27:07.681 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmM5NmM1OWE4OGIwMjkxMWZjNGZhOTEyZTM1ZmZjMmMwNmMzMDEwOTk2YmFlYjY1mS0ZMA==:
00:27:07.681 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODc0MzhjMTZhY2UwYTU2N2E1MDY2NDE0YmJjM2Q2OWFmMTkyOGRjYzAzYTFiN2ViZQpjDA==:
00:27:07.681 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:07.681 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:27:07.681 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmM5NmM1OWE4OGIwMjkxMWZjNGZhOTEyZTM1ZmZjMmMwNmMzMDEwOTk2YmFlYjY1mS0ZMA==:
00:27:07.681 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODc0MzhjMTZhY2UwYTU2N2E1MDY2NDE0YmJjM2Q2OWFmMTkyOGRjYzAzYTFiN2ViZQpjDA==: ]]
00:27:07.682 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODc0MzhjMTZhY2UwYTU2N2E1MDY2NDE0YmJjM2Q2OWFmMTkyOGRjYzAzYTFiN2ViZQpjDA==:
00:27:07.682 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1
00:27:07.682 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:07.682 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:07.682 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:27:07.682 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:27:07.682 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:07.682 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:27:07.682 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:07.682 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:07.682 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:07.682 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:07.682 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:27:07.682 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:27:07.682 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:27:07.682 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:07.682 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:07.682 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:27:07.682 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:07.682 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:27:07.682 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:27:07.682 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:27:07.682 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:27:07.682 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:07.682 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:07.682 nvme0n1
00:27:07.682 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:07.682 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:07.682 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:07.682 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:07.682 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:07.682 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:07.943 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:07.943 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:07.943 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:07.943 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:07.943 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:07.943 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:07.943 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2
00:27:07.943 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:07.943 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:07.943 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:07.943 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:27:07.943 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmE5NmE3MmQ2MDNlNjAzODEyOWI1YzU1Mzc3MTdmY2McDOqQ:
00:27:07.943 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjU1NTQ4NDZkOWZkMzJlMmY3NjI5ODZmYWY1ZGFiMWH/l5s/:
00:27:07.943 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:07.943 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:27:07.943 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmE5NmE3MmQ2MDNlNjAzODEyOWI1YzU1Mzc3MTdmY2McDOqQ:
00:27:07.943 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjU1NTQ4NDZkOWZkMzJlMmY3NjI5ODZmYWY1ZGFiMWH/l5s/: ]]
00:27:07.943 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjU1NTQ4NDZkOWZkMzJlMmY3NjI5ODZmYWY1ZGFiMWH/l5s/:
00:27:07.943 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2
00:27:07.943 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:07.943 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:07.943 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:27:07.943 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:27:07.943 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:07.943 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:27:07.943 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:07.943 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:07.943 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:07.943 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:07.943 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:27:07.943 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:27:07.943 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:27:07.943 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:07.943 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:07.943 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:27:07.943 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:07.943 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:27:07.943 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:27:07.943 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:27:07.943 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:27:07.943 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:07.943 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:07.943 nvme0n1
00:27:07.943 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:07.943 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:07.943 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:07.943 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:07.943 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:07.943 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:07.943 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:07.943 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:07.943 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:07.943 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:07.943 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:07.943 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:07.943 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3
00:27:07.943 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:07.943 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:07.943 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:07.943 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:27:07.943 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDNhNTNhODA0NzhhYjk1NDU2MmYzZDRmNzcyNWM3ZjRkMTE2YzQ3OWI3MmYxOTgyYJgUHg==:
00:27:07.943 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmMyNjBjZTY1Y2E0YmZmNzdiM2E3MGNhMjM1MjBkOGbfXuvP:
00:27:07.943 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:07.943 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:27:07.943 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDNhNTNhODA0NzhhYjk1NDU2MmYzZDRmNzcyNWM3ZjRkMTE2YzQ3OWI3MmYxOTgyYJgUHg==:
00:27:07.944 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmMyNjBjZTY1Y2E0YmZmNzdiM2E3MGNhMjM1MjBkOGbfXuvP: ]]
00:27:07.944 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmMyNjBjZTY1Y2E0YmZmNzdiM2E3MGNhMjM1MjBkOGbfXuvP:
00:27:07.944 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3
00:27:07.944 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:07.944 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:07.944 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:27:07.944 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:27:07.944 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:07.944 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:27:07.944 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:07.944 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:08.204 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:08.204 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:08.204 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:27:08.204 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:27:08.204 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:27:08.204 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:08.204 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:08.204 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:27:08.204 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:08.204 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:27:08.204 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:27:08.204 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:27:08.204 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:27:08.204 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:08.204 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:08.204 nvme0n1
00:27:08.204 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:08.204 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:08.204 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:08.204 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:08.204 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:08.204 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:08.204 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:08.204 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:08.204 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:08.204 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:08.204 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:08.204 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:08.204 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4
00:27:08.204 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:08.204 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:08.204 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:08.204 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:27:08.204 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmVkMzAxMDhjYjNmNjFjOTdjNmVlNDdmNzBmZjNhODQyZGIzYmE5NDA3ZGQ2M2Q1NjRhNzAwMDEzY2EyOTcwZj32zh8=:
00:27:08.204 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:27:08.204 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:08.204 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:27:08.204 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmVkMzAxMDhjYjNmNjFjOTdjNmVlNDdmNzBmZjNhODQyZGIzYmE5NDA3ZGQ2M2Q1NjRhNzAwMDEzY2EyOTcwZj32zh8=:
00:27:08.204 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:27:08.204 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4
00:27:08.204 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:08.204 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:08.204 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:27:08.204 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:27:08.204 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:08.204 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:27:08.204 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:08.204 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:08.204 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:08.204 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:08.204 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:27:08.204 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:27:08.204 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:27:08.204 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:08.204 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:08.204 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:27:08.204 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:08.204 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:27:08.204 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:27:08.204 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:27:08.204 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:27:08.204 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:08.204 14:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:08.465 nvme0n1
00:27:08.465 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:08.465 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:08.465 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:08.465 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:08.465 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:08.465 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:08.465 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:08.465 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:08.465 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:08.465 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:08.465 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:08.465 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:27:08.465 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:08.465 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0
00:27:08.465 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:08.465 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:08.465 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:27:08.465 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:27:08.465 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDcyZjRhYjJkOTUzNDgxZDhmZTI5YjhkYjI1NWFhZjKbfehk:
00:27:08.465 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmZjMDk4ODY1M2ZjNDlkNDUxMWUxMGU4Njk4ODIyNDY0MTY0MTVkMjhlZWExZGU5NjU5YzQ2ZmYyNzg5ODI3MOE94dY=:
00:27:08.465 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:08.465 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:27:08.465 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDcyZjRhYjJkOTUzNDgxZDhmZTI5YjhkYjI1NWFhZjKbfehk:
00:27:08.465 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmZjMDk4ODY1M2ZjNDlkNDUxMWUxMGU4Njk4ODIyNDY0MTY0MTVkMjhlZWExZGU5NjU5YzQ2ZmYyNzg5ODI3MOE94dY=: ]]
00:27:08.465 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmZjMDk4ODY1M2ZjNDlkNDUxMWUxMGU4Njk4ODIyNDY0MTY0MTVkMjhlZWExZGU5NjU5YzQ2ZmYyNzg5ODI3MOE94dY=:
00:27:08.465 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0
00:27:08.465 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:08.465 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:08.465 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:27:08.465 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:27:08.465 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:08.465 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:27:08.465 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:08.465 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:08.465 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:08.465 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:08.465 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:27:08.465 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:27:08.465 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:27:08.465 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:08.465 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:08.465 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:27:08.465 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:08.465 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:27:08.465 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:27:08.465 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:27:08.465 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:27:08.465 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:08.465 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:08.727 nvme0n1
00:27:08.727 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:08.727 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:08.727 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:08.727 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:08.727 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:08.727 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:08.727 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:08.727 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:08.727 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:08.727 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:08.727 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:08.727 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:08.727 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1
00:27:08.727 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:08.727 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:08.727 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:08.727 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmM5NmM1OWE4OGIwMjkxMWZjNGZhOTEyZTM1ZmZjMmMwNmMzMDEwOTk2YmFlYjY1mS0ZMA==: 00:27:08.727 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODc0MzhjMTZhY2UwYTU2N2E1MDY2NDE0YmJjM2Q2OWFmMTkyOGRjYzAzYTFiN2ViZQpjDA==: 00:27:08.727 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:08.727 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:08.727 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmM5NmM1OWE4OGIwMjkxMWZjNGZhOTEyZTM1ZmZjMmMwNmMzMDEwOTk2YmFlYjY1mS0ZMA==: 00:27:08.727 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODc0MzhjMTZhY2UwYTU2N2E1MDY2NDE0YmJjM2Q2OWFmMTkyOGRjYzAzYTFiN2ViZQpjDA==: ]] 00:27:08.727 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODc0MzhjMTZhY2UwYTU2N2E1MDY2NDE0YmJjM2Q2OWFmMTkyOGRjYzAzYTFiN2ViZQpjDA==: 00:27:08.727 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:27:08.727 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:08.727 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:08.727 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:08.727 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:08.727 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:08.727 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:08.727 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.727 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.727 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.727 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:08.727 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:08.727 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:08.727 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:08.727 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:08.727 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:08.727 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:08.727 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:08.727 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:08.727 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:08.727 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:08.727 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:08.727 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.727 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.988 nvme0n1 00:27:08.988 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:27:08.988 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:08.988 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:08.988 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.988 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.988 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.988 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:08.988 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:08.988 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.988 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.988 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.988 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:08.988 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:27:08.988 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:08.988 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:08.988 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:08.988 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:08.988 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmE5NmE3MmQ2MDNlNjAzODEyOWI1YzU1Mzc3MTdmY2McDOqQ: 00:27:08.988 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjU1NTQ4NDZkOWZkMzJlMmY3NjI5ODZmYWY1ZGFiMWH/l5s/: 
00:27:08.988 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:08.988 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:08.988 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmE5NmE3MmQ2MDNlNjAzODEyOWI1YzU1Mzc3MTdmY2McDOqQ: 00:27:08.989 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjU1NTQ4NDZkOWZkMzJlMmY3NjI5ODZmYWY1ZGFiMWH/l5s/: ]] 00:27:08.989 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjU1NTQ4NDZkOWZkMzJlMmY3NjI5ODZmYWY1ZGFiMWH/l5s/: 00:27:08.989 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:27:08.989 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:08.989 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:08.989 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:08.989 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:08.989 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:08.989 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:08.989 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.989 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.989 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.989 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:08.989 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:08.989 14:40:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:08.989 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:08.989 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:08.989 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:08.989 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:08.989 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:08.989 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:08.989 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:08.989 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:08.989 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:08.989 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.989 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.250 nvme0n1 00:27:09.250 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.250 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:09.250 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:09.250 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.250 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.250 14:40:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.250 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:09.250 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:09.250 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.250 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.250 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.250 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:09.250 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:27:09.250 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:09.250 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:09.250 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:09.250 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:09.250 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDNhNTNhODA0NzhhYjk1NDU2MmYzZDRmNzcyNWM3ZjRkMTE2YzQ3OWI3MmYxOTgyYJgUHg==: 00:27:09.250 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmMyNjBjZTY1Y2E0YmZmNzdiM2E3MGNhMjM1MjBkOGbfXuvP: 00:27:09.250 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:09.250 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:09.250 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDNhNTNhODA0NzhhYjk1NDU2MmYzZDRmNzcyNWM3ZjRkMTE2YzQ3OWI3MmYxOTgyYJgUHg==: 00:27:09.250 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmMyNjBjZTY1Y2E0YmZmNzdiM2E3MGNhMjM1MjBkOGbfXuvP: ]] 00:27:09.250 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmMyNjBjZTY1Y2E0YmZmNzdiM2E3MGNhMjM1MjBkOGbfXuvP: 00:27:09.250 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:27:09.250 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:09.250 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:09.250 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:09.250 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:09.250 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:09.250 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:09.250 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.250 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.250 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.250 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:09.250 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:09.250 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:09.250 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:09.250 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:09.250 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:09.250 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:09.250 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:09.250 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:09.250 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:09.250 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:09.250 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:09.250 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.250 14:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.512 nvme0n1 00:27:09.512 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.512 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:09.512 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:09.512 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.512 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.512 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.512 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:09.512 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:09.512 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:27:09.512 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.512 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.512 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:09.512 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:27:09.512 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:09.513 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:09.513 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:09.513 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:09.513 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmVkMzAxMDhjYjNmNjFjOTdjNmVlNDdmNzBmZjNhODQyZGIzYmE5NDA3ZGQ2M2Q1NjRhNzAwMDEzY2EyOTcwZj32zh8=: 00:27:09.513 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:09.513 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:09.513 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:09.513 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmVkMzAxMDhjYjNmNjFjOTdjNmVlNDdmNzBmZjNhODQyZGIzYmE5NDA3ZGQ2M2Q1NjRhNzAwMDEzY2EyOTcwZj32zh8=: 00:27:09.513 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:09.513 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:27:09.513 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:09.513 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:09.513 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe3072 00:27:09.513 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:09.513 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:09.513 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:09.513 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.513 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.513 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.513 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:09.513 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:09.513 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:09.513 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:09.513 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:09.513 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:09.513 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:09.513 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:09.513 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:09.513 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:09.513 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:09.513 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:09.513 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.513 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.774 nvme0n1 00:27:09.774 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.774 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:09.774 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:09.774 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.774 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.774 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.774 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:09.774 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:09.774 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.774 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.774 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.774 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:09.774 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:09.774 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:27:09.774 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:09.774 14:40:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:09.774 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:09.774 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:09.774 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDcyZjRhYjJkOTUzNDgxZDhmZTI5YjhkYjI1NWFhZjKbfehk: 00:27:09.774 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmZjMDk4ODY1M2ZjNDlkNDUxMWUxMGU4Njk4ODIyNDY0MTY0MTVkMjhlZWExZGU5NjU5YzQ2ZmYyNzg5ODI3MOE94dY=: 00:27:09.774 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:09.774 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:09.774 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDcyZjRhYjJkOTUzNDgxZDhmZTI5YjhkYjI1NWFhZjKbfehk: 00:27:09.774 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmZjMDk4ODY1M2ZjNDlkNDUxMWUxMGU4Njk4ODIyNDY0MTY0MTVkMjhlZWExZGU5NjU5YzQ2ZmYyNzg5ODI3MOE94dY=: ]] 00:27:09.774 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmZjMDk4ODY1M2ZjNDlkNDUxMWUxMGU4Njk4ODIyNDY0MTY0MTVkMjhlZWExZGU5NjU5YzQ2ZmYyNzg5ODI3MOE94dY=: 00:27:09.774 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:27:09.774 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:09.774 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:09.774 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:09.774 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:09.774 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:09.774 14:40:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:09.774 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.774 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.035 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.035 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:10.035 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:10.035 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:10.035 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:10.035 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:10.035 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:10.035 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:10.035 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:10.035 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:10.035 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:10.035 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:10.035 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:10.035 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.035 14:40:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.296 nvme0n1 00:27:10.296 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.296 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:10.296 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:10.296 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.296 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.296 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.296 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:10.296 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:10.296 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.296 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.296 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.296 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:10.296 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:27:10.296 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:10.296 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:10.296 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:10.296 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:10.296 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NmM5NmM1OWE4OGIwMjkxMWZjNGZhOTEyZTM1ZmZjMmMwNmMzMDEwOTk2YmFlYjY1mS0ZMA==: 00:27:10.296 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODc0MzhjMTZhY2UwYTU2N2E1MDY2NDE0YmJjM2Q2OWFmMTkyOGRjYzAzYTFiN2ViZQpjDA==: 00:27:10.296 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:10.296 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:10.296 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmM5NmM1OWE4OGIwMjkxMWZjNGZhOTEyZTM1ZmZjMmMwNmMzMDEwOTk2YmFlYjY1mS0ZMA==: 00:27:10.296 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODc0MzhjMTZhY2UwYTU2N2E1MDY2NDE0YmJjM2Q2OWFmMTkyOGRjYzAzYTFiN2ViZQpjDA==: ]] 00:27:10.296 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODc0MzhjMTZhY2UwYTU2N2E1MDY2NDE0YmJjM2Q2OWFmMTkyOGRjYzAzYTFiN2ViZQpjDA==: 00:27:10.296 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:27:10.296 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:10.296 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:10.296 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:10.296 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:10.296 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:10.296 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:10.296 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.296 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.296 
14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.296 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:10.296 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:10.296 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:10.296 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:10.296 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:10.296 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:10.296 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:10.296 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:10.296 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:10.296 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:10.296 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:10.296 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:10.296 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.296 14:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.557 nvme0n1 00:27:10.557 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.557 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:10.558 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:10.558 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.558 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.558 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.558 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:10.558 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:10.558 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.558 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.558 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.558 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:10.558 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:27:10.558 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:10.558 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:10.558 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:10.558 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:10.558 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmE5NmE3MmQ2MDNlNjAzODEyOWI1YzU1Mzc3MTdmY2McDOqQ: 00:27:10.558 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjU1NTQ4NDZkOWZkMzJlMmY3NjI5ODZmYWY1ZGFiMWH/l5s/: 00:27:10.558 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:10.558 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@49 -- # echo ffdhe4096 00:27:10.558 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmE5NmE3MmQ2MDNlNjAzODEyOWI1YzU1Mzc3MTdmY2McDOqQ: 00:27:10.558 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjU1NTQ4NDZkOWZkMzJlMmY3NjI5ODZmYWY1ZGFiMWH/l5s/: ]] 00:27:10.558 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjU1NTQ4NDZkOWZkMzJlMmY3NjI5ODZmYWY1ZGFiMWH/l5s/: 00:27:10.558 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:27:10.558 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:10.558 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:10.558 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:10.558 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:10.558 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:10.558 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:10.558 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.558 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.558 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.558 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:10.558 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:10.558 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:10.558 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 
00:27:10.558 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:10.558 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:10.558 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:10.558 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:10.558 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:10.558 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:10.558 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:10.558 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:10.558 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.558 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.819 nvme0n1 00:27:10.819 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.819 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:10.819 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:10.819 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.819 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.819 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.080 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 
00:27:11.081 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:11.081 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.081 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.081 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.081 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:11.081 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:27:11.081 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:11.081 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:11.081 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:11.081 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:11.081 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDNhNTNhODA0NzhhYjk1NDU2MmYzZDRmNzcyNWM3ZjRkMTE2YzQ3OWI3MmYxOTgyYJgUHg==: 00:27:11.081 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmMyNjBjZTY1Y2E0YmZmNzdiM2E3MGNhMjM1MjBkOGbfXuvP: 00:27:11.081 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:11.081 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:11.081 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDNhNTNhODA0NzhhYjk1NDU2MmYzZDRmNzcyNWM3ZjRkMTE2YzQ3OWI3MmYxOTgyYJgUHg==: 00:27:11.081 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmMyNjBjZTY1Y2E0YmZmNzdiM2E3MGNhMjM1MjBkOGbfXuvP: ]] 00:27:11.081 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:00:ZmMyNjBjZTY1Y2E0YmZmNzdiM2E3MGNhMjM1MjBkOGbfXuvP: 00:27:11.081 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:27:11.081 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:11.081 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:11.081 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:11.081 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:11.081 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:11.081 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:11.081 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.081 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.081 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.081 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:11.081 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:11.081 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:11.081 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:11.081 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:11.081 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:11.081 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:11.081 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:11.081 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:11.081 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:11.081 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:11.081 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:11.081 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.081 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.342 nvme0n1 00:27:11.342 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.342 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:11.342 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:11.342 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.342 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.342 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.342 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:11.342 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:11.342 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.342 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.342 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.342 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:11.342 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:27:11.342 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:11.342 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:11.342 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:11.342 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:11.342 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmVkMzAxMDhjYjNmNjFjOTdjNmVlNDdmNzBmZjNhODQyZGIzYmE5NDA3ZGQ2M2Q1NjRhNzAwMDEzY2EyOTcwZj32zh8=: 00:27:11.342 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:11.342 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:11.342 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:11.342 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmVkMzAxMDhjYjNmNjFjOTdjNmVlNDdmNzBmZjNhODQyZGIzYmE5NDA3ZGQ2M2Q1NjRhNzAwMDEzY2EyOTcwZj32zh8=: 00:27:11.342 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:11.342 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:27:11.342 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:11.342 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:11.342 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:11.342 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:11.342 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:11.342 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:11.342 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.342 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.342 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.342 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:11.342 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:11.342 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:11.342 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:11.342 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:11.342 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:11.342 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:11.342 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:11.342 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:11.342 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:11.342 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:11.342 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:11.342 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.342 14:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.603 nvme0n1 00:27:11.603 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.603 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:11.603 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:11.603 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.603 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.603 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.603 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:11.603 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:11.603 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.603 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.603 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.603 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:11.603 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:11.603 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:27:11.603 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:11.603 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:11.603 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 
00:27:11.603 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:11.603 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDcyZjRhYjJkOTUzNDgxZDhmZTI5YjhkYjI1NWFhZjKbfehk: 00:27:11.603 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmZjMDk4ODY1M2ZjNDlkNDUxMWUxMGU4Njk4ODIyNDY0MTY0MTVkMjhlZWExZGU5NjU5YzQ2ZmYyNzg5ODI3MOE94dY=: 00:27:11.603 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:11.603 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:11.603 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDcyZjRhYjJkOTUzNDgxZDhmZTI5YjhkYjI1NWFhZjKbfehk: 00:27:11.603 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmZjMDk4ODY1M2ZjNDlkNDUxMWUxMGU4Njk4ODIyNDY0MTY0MTVkMjhlZWExZGU5NjU5YzQ2ZmYyNzg5ODI3MOE94dY=: ]] 00:27:11.603 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmZjMDk4ODY1M2ZjNDlkNDUxMWUxMGU4Njk4ODIyNDY0MTY0MTVkMjhlZWExZGU5NjU5YzQ2ZmYyNzg5ODI3MOE94dY=: 00:27:11.603 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:27:11.603 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:11.603 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:11.603 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:11.603 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:11.603 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:11.603 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:11.604 14:40:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.604 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.604 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.604 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:11.604 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:11.604 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:11.604 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:11.604 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:11.604 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:11.604 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:11.604 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:11.604 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:11.604 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:11.604 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:11.604 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:11.604 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.604 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.175 nvme0n1 00:27:12.175 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.175 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:12.175 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:12.175 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.175 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.175 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.175 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:12.175 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:12.175 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.175 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.175 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.175 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:12.175 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:27:12.175 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:12.175 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:12.175 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:12.175 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:12.175 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmM5NmM1OWE4OGIwMjkxMWZjNGZhOTEyZTM1ZmZjMmMwNmMzMDEwOTk2YmFlYjY1mS0ZMA==: 00:27:12.175 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:ODc0MzhjMTZhY2UwYTU2N2E1MDY2NDE0YmJjM2Q2OWFmMTkyOGRjYzAzYTFiN2ViZQpjDA==: 00:27:12.175 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:12.175 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:12.175 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmM5NmM1OWE4OGIwMjkxMWZjNGZhOTEyZTM1ZmZjMmMwNmMzMDEwOTk2YmFlYjY1mS0ZMA==: 00:27:12.175 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODc0MzhjMTZhY2UwYTU2N2E1MDY2NDE0YmJjM2Q2OWFmMTkyOGRjYzAzYTFiN2ViZQpjDA==: ]] 00:27:12.175 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODc0MzhjMTZhY2UwYTU2N2E1MDY2NDE0YmJjM2Q2OWFmMTkyOGRjYzAzYTFiN2ViZQpjDA==: 00:27:12.175 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:27:12.175 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:12.175 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:12.175 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:12.175 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:12.175 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:12.175 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:12.175 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.175 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.175 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.175 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:27:12.175 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:12.175 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:12.175 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:12.175 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:12.175 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:12.175 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:12.175 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:12.175 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:12.175 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:12.175 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:12.175 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:12.175 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.175 14:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.748 nvme0n1 00:27:12.748 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.748 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:12.748 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:12.748 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:27:12.748 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.748 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.748 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:12.748 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:12.748 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.748 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.748 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.748 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:12.748 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:27:12.748 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:12.748 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:12.748 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:12.748 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:12.748 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmE5NmE3MmQ2MDNlNjAzODEyOWI1YzU1Mzc3MTdmY2McDOqQ: 00:27:12.748 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjU1NTQ4NDZkOWZkMzJlMmY3NjI5ODZmYWY1ZGFiMWH/l5s/: 00:27:12.748 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:12.748 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:12.748 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:MmE5NmE3MmQ2MDNlNjAzODEyOWI1YzU1Mzc3MTdmY2McDOqQ: 00:27:12.748 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjU1NTQ4NDZkOWZkMzJlMmY3NjI5ODZmYWY1ZGFiMWH/l5s/: ]] 00:27:12.748 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjU1NTQ4NDZkOWZkMzJlMmY3NjI5ODZmYWY1ZGFiMWH/l5s/: 00:27:12.748 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:27:12.748 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:12.748 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:12.748 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:12.748 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:12.748 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:12.748 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:12.748 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.748 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.748 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.748 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:12.748 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:12.748 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:12.748 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:12.748 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:12.748 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:12.748 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:12.748 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:12.748 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:12.748 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:12.748 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:12.748 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:12.748 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.748 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.320 nvme0n1 00:27:13.320 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.320 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:13.320 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:13.320 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.320 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.320 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.320 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:13.320 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:13.320 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.320 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.320 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.320 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:13.320 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:27:13.320 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:13.320 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:13.320 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:13.320 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:13.320 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDNhNTNhODA0NzhhYjk1NDU2MmYzZDRmNzcyNWM3ZjRkMTE2YzQ3OWI3MmYxOTgyYJgUHg==: 00:27:13.320 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmMyNjBjZTY1Y2E0YmZmNzdiM2E3MGNhMjM1MjBkOGbfXuvP: 00:27:13.320 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:13.320 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:13.320 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDNhNTNhODA0NzhhYjk1NDU2MmYzZDRmNzcyNWM3ZjRkMTE2YzQ3OWI3MmYxOTgyYJgUHg==: 00:27:13.320 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmMyNjBjZTY1Y2E0YmZmNzdiM2E3MGNhMjM1MjBkOGbfXuvP: ]] 00:27:13.320 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmMyNjBjZTY1Y2E0YmZmNzdiM2E3MGNhMjM1MjBkOGbfXuvP: 00:27:13.320 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:27:13.320 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:13.320 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:13.320 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:13.320 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:13.320 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:13.320 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:13.320 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.320 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.320 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.320 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:13.320 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:13.320 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:13.320 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:13.320 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:13.320 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:13.320 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:13.320 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:13.320 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 
-- # ip=NVMF_INITIATOR_IP 00:27:13.320 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:13.320 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:13.320 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:13.320 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.320 14:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.891 nvme0n1 00:27:13.891 14:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.891 14:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:13.891 14:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:13.891 14:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.891 14:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.891 14:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.891 14:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:13.891 14:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:13.891 14:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.891 14:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.891 14:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.891 14:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:27:13.891 14:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:27:13.891 14:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:13.891 14:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:13.891 14:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:13.891 14:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:13.891 14:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmVkMzAxMDhjYjNmNjFjOTdjNmVlNDdmNzBmZjNhODQyZGIzYmE5NDA3ZGQ2M2Q1NjRhNzAwMDEzY2EyOTcwZj32zh8=: 00:27:13.891 14:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:13.891 14:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:13.891 14:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:13.891 14:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmVkMzAxMDhjYjNmNjFjOTdjNmVlNDdmNzBmZjNhODQyZGIzYmE5NDA3ZGQ2M2Q1NjRhNzAwMDEzY2EyOTcwZj32zh8=: 00:27:13.891 14:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:13.891 14:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:27:13.891 14:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:13.891 14:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:13.891 14:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:13.891 14:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:13.891 14:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:13.891 14:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:13.891 14:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.891 14:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.891 14:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.891 14:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:13.891 14:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:13.891 14:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:13.891 14:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:13.891 14:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:13.891 14:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:13.891 14:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:13.891 14:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:13.891 14:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:13.891 14:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:13.891 14:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:13.891 14:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:13.891 14:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.891 14:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:14.463 nvme0n1 00:27:14.463 14:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.463 14:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:14.463 14:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:14.463 14:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.463 14:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.463 14:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.463 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:14.463 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:14.463 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.463 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.463 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.463 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:14.463 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:14.463 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:27:14.463 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:14.463 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:14.463 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:14.463 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:14.463 14:40:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDcyZjRhYjJkOTUzNDgxZDhmZTI5YjhkYjI1NWFhZjKbfehk: 00:27:14.463 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmZjMDk4ODY1M2ZjNDlkNDUxMWUxMGU4Njk4ODIyNDY0MTY0MTVkMjhlZWExZGU5NjU5YzQ2ZmYyNzg5ODI3MOE94dY=: 00:27:14.463 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:14.463 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:14.463 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDcyZjRhYjJkOTUzNDgxZDhmZTI5YjhkYjI1NWFhZjKbfehk: 00:27:14.463 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmZjMDk4ODY1M2ZjNDlkNDUxMWUxMGU4Njk4ODIyNDY0MTY0MTVkMjhlZWExZGU5NjU5YzQ2ZmYyNzg5ODI3MOE94dY=: ]] 00:27:14.463 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmZjMDk4ODY1M2ZjNDlkNDUxMWUxMGU4Njk4ODIyNDY0MTY0MTVkMjhlZWExZGU5NjU5YzQ2ZmYyNzg5ODI3MOE94dY=: 00:27:14.463 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:27:14.463 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:14.463 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:14.463 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:14.463 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:14.463 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:14.463 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:14.463 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.463 14:40:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.463 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.463 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:14.463 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:14.463 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:14.463 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:14.463 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:14.463 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:14.463 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:14.463 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:14.463 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:14.463 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:14.463 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:14.463 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:14.463 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.463 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.405 nvme0n1 00:27:15.405 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.405 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:15.405 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:15.405 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.405 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.405 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.405 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:15.405 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:15.405 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.405 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.405 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.405 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:15.405 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:27:15.405 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:15.405 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:15.405 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:15.405 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:15.405 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmM5NmM1OWE4OGIwMjkxMWZjNGZhOTEyZTM1ZmZjMmMwNmMzMDEwOTk2YmFlYjY1mS0ZMA==: 00:27:15.405 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODc0MzhjMTZhY2UwYTU2N2E1MDY2NDE0YmJjM2Q2OWFmMTkyOGRjYzAzYTFiN2ViZQpjDA==: 00:27:15.405 14:40:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:15.405 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:15.405 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmM5NmM1OWE4OGIwMjkxMWZjNGZhOTEyZTM1ZmZjMmMwNmMzMDEwOTk2YmFlYjY1mS0ZMA==: 00:27:15.405 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODc0MzhjMTZhY2UwYTU2N2E1MDY2NDE0YmJjM2Q2OWFmMTkyOGRjYzAzYTFiN2ViZQpjDA==: ]] 00:27:15.405 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODc0MzhjMTZhY2UwYTU2N2E1MDY2NDE0YmJjM2Q2OWFmMTkyOGRjYzAzYTFiN2ViZQpjDA==: 00:27:15.405 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:27:15.405 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:15.405 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:15.405 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:15.405 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:15.405 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:15.405 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:15.405 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.405 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.405 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.405 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:15.405 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 
00:27:15.405 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:15.405 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:15.405 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:15.405 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:15.405 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:15.405 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:15.405 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:15.405 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:15.405 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:15.405 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:15.405 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.405 14:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.975 nvme0n1 00:27:15.975 14:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.975 14:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:15.975 14:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:15.975 14:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.975 14:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.975 
14:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.975 14:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:15.975 14:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:15.975 14:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.975 14:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.975 14:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.235 14:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:16.235 14:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:27:16.235 14:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:16.235 14:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:16.235 14:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:16.235 14:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:16.235 14:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmE5NmE3MmQ2MDNlNjAzODEyOWI1YzU1Mzc3MTdmY2McDOqQ: 00:27:16.235 14:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjU1NTQ4NDZkOWZkMzJlMmY3NjI5ODZmYWY1ZGFiMWH/l5s/: 00:27:16.235 14:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:16.235 14:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:16.235 14:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmE5NmE3MmQ2MDNlNjAzODEyOWI1YzU1Mzc3MTdmY2McDOqQ: 00:27:16.235 14:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:YjU1NTQ4NDZkOWZkMzJlMmY3NjI5ODZmYWY1ZGFiMWH/l5s/: ]] 00:27:16.235 14:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjU1NTQ4NDZkOWZkMzJlMmY3NjI5ODZmYWY1ZGFiMWH/l5s/: 00:27:16.235 14:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:27:16.235 14:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:16.235 14:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:16.235 14:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:16.235 14:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:16.235 14:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:16.235 14:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:16.235 14:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.235 14:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.235 14:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.235 14:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:16.235 14:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:16.235 14:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:16.235 14:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:16.236 14:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:16.236 14:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:16.236 14:40:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:16.236 14:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:16.236 14:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:16.236 14:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:16.236 14:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:16.236 14:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:16.236 14:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.236 14:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.805 nvme0n1 00:27:16.805 14:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.805 14:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:16.805 14:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:16.805 14:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.805 14:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.805 14:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.805 14:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:16.805 14:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:16.805 14:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.805 14:40:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.065 14:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.065 14:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:17.065 14:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:27:17.065 14:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:17.066 14:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:17.066 14:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:17.066 14:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:17.066 14:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDNhNTNhODA0NzhhYjk1NDU2MmYzZDRmNzcyNWM3ZjRkMTE2YzQ3OWI3MmYxOTgyYJgUHg==: 00:27:17.066 14:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmMyNjBjZTY1Y2E0YmZmNzdiM2E3MGNhMjM1MjBkOGbfXuvP: 00:27:17.066 14:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:17.066 14:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:17.066 14:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDNhNTNhODA0NzhhYjk1NDU2MmYzZDRmNzcyNWM3ZjRkMTE2YzQ3OWI3MmYxOTgyYJgUHg==: 00:27:17.066 14:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmMyNjBjZTY1Y2E0YmZmNzdiM2E3MGNhMjM1MjBkOGbfXuvP: ]] 00:27:17.066 14:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmMyNjBjZTY1Y2E0YmZmNzdiM2E3MGNhMjM1MjBkOGbfXuvP: 00:27:17.066 14:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:27:17.066 14:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:27:17.066 14:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:17.066 14:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:17.066 14:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:17.066 14:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:17.066 14:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:17.066 14:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.066 14:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.066 14:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.066 14:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:17.066 14:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:17.066 14:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:17.066 14:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:17.066 14:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:17.066 14:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:17.066 14:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:17.066 14:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:17.066 14:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:17.066 14:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:17.066 14:40:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:17.066 14:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:17.066 14:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.066 14:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.635 nvme0n1 00:27:17.635 14:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.635 14:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:17.635 14:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:17.635 14:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.635 14:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.635 14:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.635 14:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:17.635 14:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:17.635 14:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.635 14:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.895 14:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.895 14:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:17.895 14:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:27:17.895 14:40:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:17.895 14:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:17.895 14:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:17.895 14:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:17.895 14:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmVkMzAxMDhjYjNmNjFjOTdjNmVlNDdmNzBmZjNhODQyZGIzYmE5NDA3ZGQ2M2Q1NjRhNzAwMDEzY2EyOTcwZj32zh8=: 00:27:17.895 14:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:17.895 14:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:17.895 14:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:17.895 14:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmVkMzAxMDhjYjNmNjFjOTdjNmVlNDdmNzBmZjNhODQyZGIzYmE5NDA3ZGQ2M2Q1NjRhNzAwMDEzY2EyOTcwZj32zh8=: 00:27:17.895 14:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:17.895 14:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:27:17.895 14:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:17.895 14:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:17.895 14:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:17.895 14:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:17.895 14:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:17.895 14:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:17.895 14:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.895 14:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.895 14:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.895 14:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:17.895 14:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:17.895 14:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:17.895 14:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:17.895 14:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:17.895 14:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:17.895 14:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:17.895 14:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:17.895 14:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:17.895 14:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:17.895 14:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:17.895 14:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:17.895 14:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.895 14:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.465 nvme0n1 00:27:18.465 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.465 
14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:18.465 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:18.465 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.465 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.465 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.724 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:18.724 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:18.724 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.724 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.724 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.724 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:18.724 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:18.724 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:18.724 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:27:18.724 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:18.725 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:18.725 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:18.725 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:18.725 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZDcyZjRhYjJkOTUzNDgxZDhmZTI5YjhkYjI1NWFhZjKbfehk: 00:27:18.725 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmZjMDk4ODY1M2ZjNDlkNDUxMWUxMGU4Njk4ODIyNDY0MTY0MTVkMjhlZWExZGU5NjU5YzQ2ZmYyNzg5ODI3MOE94dY=: 00:27:18.725 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:18.725 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:18.725 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDcyZjRhYjJkOTUzNDgxZDhmZTI5YjhkYjI1NWFhZjKbfehk: 00:27:18.725 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmZjMDk4ODY1M2ZjNDlkNDUxMWUxMGU4Njk4ODIyNDY0MTY0MTVkMjhlZWExZGU5NjU5YzQ2ZmYyNzg5ODI3MOE94dY=: ]] 00:27:18.725 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmZjMDk4ODY1M2ZjNDlkNDUxMWUxMGU4Njk4ODIyNDY0MTY0MTVkMjhlZWExZGU5NjU5YzQ2ZmYyNzg5ODI3MOE94dY=: 00:27:18.725 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:27:18.725 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:18.725 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:18.725 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:18.725 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:18.725 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:18.725 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:18.725 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.725 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:27:18.725 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.725 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:18.725 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:18.725 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:18.725 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:18.725 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.725 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.725 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:18.725 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.725 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:18.725 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:18.725 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:18.725 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:18.725 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.725 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.725 nvme0n1 00:27:18.725 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.725 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:18.725 14:40:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:18.725 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.725 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.725 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.725 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:18.725 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:18.725 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.725 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.985 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.985 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:18.985 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:27:18.985 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:18.985 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:18.985 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:18.985 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:18.985 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmM5NmM1OWE4OGIwMjkxMWZjNGZhOTEyZTM1ZmZjMmMwNmMzMDEwOTk2YmFlYjY1mS0ZMA==: 00:27:18.985 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODc0MzhjMTZhY2UwYTU2N2E1MDY2NDE0YmJjM2Q2OWFmMTkyOGRjYzAzYTFiN2ViZQpjDA==: 00:27:18.985 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 
00:27:18.985 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:18.985 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmM5NmM1OWE4OGIwMjkxMWZjNGZhOTEyZTM1ZmZjMmMwNmMzMDEwOTk2YmFlYjY1mS0ZMA==: 00:27:18.985 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODc0MzhjMTZhY2UwYTU2N2E1MDY2NDE0YmJjM2Q2OWFmMTkyOGRjYzAzYTFiN2ViZQpjDA==: ]] 00:27:18.985 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODc0MzhjMTZhY2UwYTU2N2E1MDY2NDE0YmJjM2Q2OWFmMTkyOGRjYzAzYTFiN2ViZQpjDA==: 00:27:18.985 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:27:18.985 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:18.985 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:18.985 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:18.985 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:18.985 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:18.985 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:18.985 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.985 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.985 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.985 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:18.985 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:18.985 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 
-- # ip_candidates=() 00:27:18.985 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:18.985 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.985 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.985 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:18.985 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.985 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:18.985 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:18.985 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:18.985 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:18.985 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.985 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.985 nvme0n1 00:27:18.985 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.985 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:18.985 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:18.985 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.985 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.985 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:27:18.985 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:18.985 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:18.985 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.985 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.985 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.985 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:18.985 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:27:18.985 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:18.985 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:18.985 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:18.985 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:18.986 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmE5NmE3MmQ2MDNlNjAzODEyOWI1YzU1Mzc3MTdmY2McDOqQ: 00:27:18.986 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjU1NTQ4NDZkOWZkMzJlMmY3NjI5ODZmYWY1ZGFiMWH/l5s/: 00:27:18.986 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:18.986 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:18.986 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmE5NmE3MmQ2MDNlNjAzODEyOWI1YzU1Mzc3MTdmY2McDOqQ: 00:27:18.986 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjU1NTQ4NDZkOWZkMzJlMmY3NjI5ODZmYWY1ZGFiMWH/l5s/: ]] 00:27:18.986 14:40:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjU1NTQ4NDZkOWZkMzJlMmY3NjI5ODZmYWY1ZGFiMWH/l5s/: 00:27:18.986 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:27:18.986 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:18.986 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:18.986 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:18.986 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:18.986 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:18.986 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:18.986 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.986 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.986 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.246 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.246 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:19.246 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:19.246 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:19.246 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.246 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.246 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 
00:27:19.246 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.246 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:19.246 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:19.246 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:19.246 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:19.246 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.246 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.246 nvme0n1 00:27:19.246 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.246 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.246 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.246 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.246 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.246 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.246 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.246 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.246 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.246 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.246 14:40:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.246 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.246 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:27:19.246 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.246 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:19.246 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:19.246 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:19.246 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDNhNTNhODA0NzhhYjk1NDU2MmYzZDRmNzcyNWM3ZjRkMTE2YzQ3OWI3MmYxOTgyYJgUHg==: 00:27:19.246 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmMyNjBjZTY1Y2E0YmZmNzdiM2E3MGNhMjM1MjBkOGbfXuvP: 00:27:19.246 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:19.246 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:19.246 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDNhNTNhODA0NzhhYjk1NDU2MmYzZDRmNzcyNWM3ZjRkMTE2YzQ3OWI3MmYxOTgyYJgUHg==: 00:27:19.246 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmMyNjBjZTY1Y2E0YmZmNzdiM2E3MGNhMjM1MjBkOGbfXuvP: ]] 00:27:19.246 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmMyNjBjZTY1Y2E0YmZmNzdiM2E3MGNhMjM1MjBkOGbfXuvP: 00:27:19.246 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:27:19.246 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.246 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
digest=sha512 00:27:19.246 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:19.246 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:19.246 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.246 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:19.246 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.246 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.246 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.246 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.246 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:19.246 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:19.246 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:19.246 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.246 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.246 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:19.246 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.246 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:19.246 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:19.246 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:19.246 14:40:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:19.246 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.246 14:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.508 nvme0n1 00:27:19.508 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.508 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.508 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.508 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.508 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.508 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.508 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.508 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.508 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.508 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.508 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.508 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.508 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:27:19.508 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 
00:27:19.508 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:27:19.508 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:19.508 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:27:19.508 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmVkMzAxMDhjYjNmNjFjOTdjNmVlNDdmNzBmZjNhODQyZGIzYmE5NDA3ZGQ2M2Q1NjRhNzAwMDEzY2EyOTcwZj32zh8=:
00:27:19.508 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:27:19.508 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:27:19.508 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:27:19.508 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmVkMzAxMDhjYjNmNjFjOTdjNmVlNDdmNzBmZjNhODQyZGIzYmE5NDA3ZGQ2M2Q1NjRhNzAwMDEzY2EyOTcwZj32zh8=:
00:27:19.508 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:27:19.508 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4
00:27:19.508 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:19.508 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:27:19.508 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:27:19.508 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:27:19.508 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:19.508 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:27:19.508 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:19.508 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:19.508 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:19.508 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:19.508 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:27:19.508 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:27:19.508 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:27:19.508 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:19.508 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:19.508 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:27:19.508 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:19.508 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:27:19.508 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:27:19.508 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:27:19.508 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:27:19.508 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:19.508 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:19.769 nvme0n1
00:27:19.769 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:19.769 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:19.769 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:19.769 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:19.769 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:19.769 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:19.769 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:19.769 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:19.769 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:19.769 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:19.769 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:19.769 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:27:19.769 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:19.769 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0
00:27:19.769 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:19.769 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:27:19.769 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:27:19.769 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:27:19.769 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDcyZjRhYjJkOTUzNDgxZDhmZTI5YjhkYjI1NWFhZjKbfehk:
00:27:19.769 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmZjMDk4ODY1M2ZjNDlkNDUxMWUxMGU4Njk4ODIyNDY0MTY0MTVkMjhlZWExZGU5NjU5YzQ2ZmYyNzg5ODI3MOE94dY=:
00:27:19.769 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:27:19.769 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:27:19.769 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDcyZjRhYjJkOTUzNDgxZDhmZTI5YjhkYjI1NWFhZjKbfehk:
00:27:19.769 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmZjMDk4ODY1M2ZjNDlkNDUxMWUxMGU4Njk4ODIyNDY0MTY0MTVkMjhlZWExZGU5NjU5YzQ2ZmYyNzg5ODI3MOE94dY=: ]]
00:27:19.769 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmZjMDk4ODY1M2ZjNDlkNDUxMWUxMGU4Njk4ODIyNDY0MTY0MTVkMjhlZWExZGU5NjU5YzQ2ZmYyNzg5ODI3MOE94dY=:
00:27:19.769 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0
00:27:19.769 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:19.769 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:27:19.769 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:27:19.769 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:27:19.769 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:19.769 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:27:19.769 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:19.769 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:19.769 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:19.769 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:19.769 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:27:19.769 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:27:19.769 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:27:19.769 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:19.769 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:19.769 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:27:19.769 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:19.769 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:27:19.769 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:27:19.769 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:27:19.769 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:27:19.769 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:19.769 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:20.031 nvme0n1
00:27:20.031 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:20.031 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:20.031 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:20.031 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:20.031 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:20.031 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:20.031 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:20.031 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:20.031 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:20.031 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:20.031 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:20.031 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:20.031 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1
00:27:20.031 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:20.031 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:27:20.031 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:27:20.031 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:27:20.031 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmM5NmM1OWE4OGIwMjkxMWZjNGZhOTEyZTM1ZmZjMmMwNmMzMDEwOTk2YmFlYjY1mS0ZMA==:
00:27:20.031 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODc0MzhjMTZhY2UwYTU2N2E1MDY2NDE0YmJjM2Q2OWFmMTkyOGRjYzAzYTFiN2ViZQpjDA==:
00:27:20.031 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:27:20.031 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:27:20.031 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmM5NmM1OWE4OGIwMjkxMWZjNGZhOTEyZTM1ZmZjMmMwNmMzMDEwOTk2YmFlYjY1mS0ZMA==:
00:27:20.031 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODc0MzhjMTZhY2UwYTU2N2E1MDY2NDE0YmJjM2Q2OWFmMTkyOGRjYzAzYTFiN2ViZQpjDA==: ]]
00:27:20.031 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODc0MzhjMTZhY2UwYTU2N2E1MDY2NDE0YmJjM2Q2OWFmMTkyOGRjYzAzYTFiN2ViZQpjDA==:
00:27:20.031 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1
00:27:20.031 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:20.031 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:27:20.031 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:27:20.031 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:27:20.031 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:20.031 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:27:20.031 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:20.031 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:20.031 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:20.031 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:20.031 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:27:20.031 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:27:20.031 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:27:20.031 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:20.031 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:20.031 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:27:20.031 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:20.031 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:27:20.031 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:27:20.031 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:27:20.031 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:27:20.031 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:20.031 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:20.291 nvme0n1
00:27:20.292 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:20.292 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:20.292 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:20.292 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:20.292 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:20.292 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:20.292 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:20.292 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:20.292 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:20.292 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:20.292 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:20.292 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:20.292 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2
00:27:20.292 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:20.292 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:27:20.292 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:27:20.292 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:27:20.292 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmE5NmE3MmQ2MDNlNjAzODEyOWI1YzU1Mzc3MTdmY2McDOqQ:
00:27:20.292 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjU1NTQ4NDZkOWZkMzJlMmY3NjI5ODZmYWY1ZGFiMWH/l5s/:
00:27:20.292 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:27:20.292 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:27:20.292 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmE5NmE3MmQ2MDNlNjAzODEyOWI1YzU1Mzc3MTdmY2McDOqQ:
00:27:20.292 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjU1NTQ4NDZkOWZkMzJlMmY3NjI5ODZmYWY1ZGFiMWH/l5s/: ]]
00:27:20.292 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjU1NTQ4NDZkOWZkMzJlMmY3NjI5ODZmYWY1ZGFiMWH/l5s/:
00:27:20.292 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2
00:27:20.292 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:20.292 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:27:20.292 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:27:20.292 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:27:20.292 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:20.292 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:27:20.292 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:20.292 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:20.292 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:20.292 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:20.292 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:27:20.292 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:27:20.292 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:27:20.292 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:20.292 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:20.292 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:27:20.292 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:20.292 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:27:20.292 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:27:20.292 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:27:20.292 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:27:20.292 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:20.292 14:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:20.552 nvme0n1
00:27:20.552 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:20.552 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:20.552 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:20.552 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:20.552 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:20.552 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:20.552 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:20.552 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:20.553 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:20.553 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:20.553 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:20.553 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:20.553 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3
00:27:20.553 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:20.553 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:27:20.553 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:27:20.553 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:27:20.553 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDNhNTNhODA0NzhhYjk1NDU2MmYzZDRmNzcyNWM3ZjRkMTE2YzQ3OWI3MmYxOTgyYJgUHg==:
00:27:20.553 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmMyNjBjZTY1Y2E0YmZmNzdiM2E3MGNhMjM1MjBkOGbfXuvP:
00:27:20.553 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:27:20.553 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:27:20.553 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDNhNTNhODA0NzhhYjk1NDU2MmYzZDRmNzcyNWM3ZjRkMTE2YzQ3OWI3MmYxOTgyYJgUHg==:
00:27:20.553 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmMyNjBjZTY1Y2E0YmZmNzdiM2E3MGNhMjM1MjBkOGbfXuvP: ]]
00:27:20.553 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmMyNjBjZTY1Y2E0YmZmNzdiM2E3MGNhMjM1MjBkOGbfXuvP:
00:27:20.553 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3
00:27:20.553 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:20.553 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:27:20.553 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:27:20.553 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:27:20.553 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:20.553 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:27:20.553 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:20.553 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:20.553 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:20.553 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:20.553 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:27:20.553 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:27:20.553 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:27:20.553 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:20.553 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:20.553 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:27:20.553 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:20.553 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:27:20.553 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:27:20.553 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:27:20.553 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:27:20.553 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:20.553 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:20.813 nvme0n1
00:27:20.813 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:20.813 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:20.813 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:20.813 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:20.813 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:20.813 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:20.813 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:20.813 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:20.813 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:20.813 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:20.813 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:20.813 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:20.813 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4
00:27:20.813 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:20.813 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:27:20.813 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:27:20.813 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:27:20.813 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmVkMzAxMDhjYjNmNjFjOTdjNmVlNDdmNzBmZjNhODQyZGIzYmE5NDA3ZGQ2M2Q1NjRhNzAwMDEzY2EyOTcwZj32zh8=:
00:27:20.813 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:27:20.813 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:27:20.813 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:27:20.813 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmVkMzAxMDhjYjNmNjFjOTdjNmVlNDdmNzBmZjNhODQyZGIzYmE5NDA3ZGQ2M2Q1NjRhNzAwMDEzY2EyOTcwZj32zh8=:
00:27:20.813 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:27:20.813 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4
00:27:20.813 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:20.813 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:27:20.813 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:27:20.813 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:27:20.813 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:20.813 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:27:20.813 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:20.813 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:20.813 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:20.813 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:20.813 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:27:20.813 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:27:20.813 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:27:20.813 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:20.813 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:20.813 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:27:20.813 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:20.813 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:27:20.813 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:27:20.813 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:27:20.813 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:27:20.813 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:20.813 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:21.073 nvme0n1
00:27:21.073 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:21.073 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:21.074 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:21.074 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:21.074 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:21.074 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:21.074 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:21.074 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:21.074 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:21.074 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:21.074 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:21.074 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:27:21.074 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:21.074 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0
00:27:21.074 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:21.074 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:27:21.074 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:27:21.074 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:27:21.074 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDcyZjRhYjJkOTUzNDgxZDhmZTI5YjhkYjI1NWFhZjKbfehk:
00:27:21.074 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmZjMDk4ODY1M2ZjNDlkNDUxMWUxMGU4Njk4ODIyNDY0MTY0MTVkMjhlZWExZGU5NjU5YzQ2ZmYyNzg5ODI3MOE94dY=:
00:27:21.074 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:27:21.074 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:27:21.074 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDcyZjRhYjJkOTUzNDgxZDhmZTI5YjhkYjI1NWFhZjKbfehk:
00:27:21.074 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmZjMDk4ODY1M2ZjNDlkNDUxMWUxMGU4Njk4ODIyNDY0MTY0MTVkMjhlZWExZGU5NjU5YzQ2ZmYyNzg5ODI3MOE94dY=: ]]
00:27:21.074 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmZjMDk4ODY1M2ZjNDlkNDUxMWUxMGU4Njk4ODIyNDY0MTY0MTVkMjhlZWExZGU5NjU5YzQ2ZmYyNzg5ODI3MOE94dY=:
00:27:21.074 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0
00:27:21.074 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:21.074 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:27:21.074 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:27:21.074 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:27:21.074 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:21.074 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:27:21.074 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:21.074 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:21.074 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:21.074 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:21.334 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:27:21.334 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:27:21.334 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:27:21.334 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:21.334 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:21.334 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:27:21.334 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:21.334 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:27:21.334 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:27:21.334 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:27:21.334 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:27:21.334 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:21.334 14:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:21.595 nvme0n1
00:27:21.595 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:21.595 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:21.595 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:21.595 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:21.595 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:21.595 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:21.595 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:21.595 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:21.595 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:21.595 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:21.595 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:21.595 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:21.595 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1
00:27:21.595 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:21.595 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:27:21.595 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:27:21.595 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:27:21.595 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmM5NmM1OWE4OGIwMjkxMWZjNGZhOTEyZTM1ZmZjMmMwNmMzMDEwOTk2YmFlYjY1mS0ZMA==:
00:27:21.595 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODc0MzhjMTZhY2UwYTU2N2E1MDY2NDE0YmJjM2Q2OWFmMTkyOGRjYzAzYTFiN2ViZQpjDA==:
00:27:21.595 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:27:21.595 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:27:21.595 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmM5NmM1OWE4OGIwMjkxMWZjNGZhOTEyZTM1ZmZjMmMwNmMzMDEwOTk2YmFlYjY1mS0ZMA==:
00:27:21.595 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODc0MzhjMTZhY2UwYTU2N2E1MDY2NDE0YmJjM2Q2OWFmMTkyOGRjYzAzYTFiN2ViZQpjDA==: ]]
00:27:21.595 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODc0MzhjMTZhY2UwYTU2N2E1MDY2NDE0YmJjM2Q2OWFmMTkyOGRjYzAzYTFiN2ViZQpjDA==:
00:27:21.595 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1
00:27:21.595 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:21.595 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:27:21.595 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:27:21.595 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:27:21.595 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:21.595 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:27:21.595 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:21.595 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:21.595 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:21.595 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:21.595 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:27:21.595 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:27:21.595 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:27:21.595 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:21.595 14:41:02
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.595 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:21.595 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.595 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:21.595 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:21.595 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:21.595 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:21.595 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.595 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.856 nvme0n1 00:27:21.856 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.856 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.856 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.856 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.856 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.856 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.856 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.856 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.856 14:41:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.856 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.856 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.856 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:21.856 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:27:21.856 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:21.856 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:21.856 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:21.856 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:21.856 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmE5NmE3MmQ2MDNlNjAzODEyOWI1YzU1Mzc3MTdmY2McDOqQ: 00:27:21.856 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjU1NTQ4NDZkOWZkMzJlMmY3NjI5ODZmYWY1ZGFiMWH/l5s/: 00:27:21.856 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:21.856 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:21.856 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmE5NmE3MmQ2MDNlNjAzODEyOWI1YzU1Mzc3MTdmY2McDOqQ: 00:27:21.856 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjU1NTQ4NDZkOWZkMzJlMmY3NjI5ODZmYWY1ZGFiMWH/l5s/: ]] 00:27:21.856 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjU1NTQ4NDZkOWZkMzJlMmY3NjI5ODZmYWY1ZGFiMWH/l5s/: 00:27:21.856 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:27:21.856 14:41:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.856 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:21.856 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:21.856 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:21.856 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.856 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:21.856 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.856 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.856 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.856 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.856 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:21.856 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:21.856 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:21.856 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.856 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.856 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:21.856 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.856 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:21.856 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:21.856 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:21.856 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:21.856 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.856 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.117 nvme0n1 00:27:22.117 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.117 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.117 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.117 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.117 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.117 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.117 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.117 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.117 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.117 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.377 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.377 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:22.377 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe4096 3 00:27:22.377 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.377 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:22.377 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:22.377 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:22.377 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDNhNTNhODA0NzhhYjk1NDU2MmYzZDRmNzcyNWM3ZjRkMTE2YzQ3OWI3MmYxOTgyYJgUHg==: 00:27:22.377 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmMyNjBjZTY1Y2E0YmZmNzdiM2E3MGNhMjM1MjBkOGbfXuvP: 00:27:22.377 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:22.378 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:22.378 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDNhNTNhODA0NzhhYjk1NDU2MmYzZDRmNzcyNWM3ZjRkMTE2YzQ3OWI3MmYxOTgyYJgUHg==: 00:27:22.378 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmMyNjBjZTY1Y2E0YmZmNzdiM2E3MGNhMjM1MjBkOGbfXuvP: ]] 00:27:22.378 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmMyNjBjZTY1Y2E0YmZmNzdiM2E3MGNhMjM1MjBkOGbfXuvP: 00:27:22.378 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:27:22.378 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.378 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:22.378 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:22.378 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:22.378 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.378 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:22.378 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.378 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.378 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.378 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.378 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:22.378 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:22.378 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:22.378 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.378 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.378 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:22.378 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.378 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:22.378 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:22.378 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:22.378 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:22.378 14:41:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.378 14:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.639 nvme0n1 00:27:22.639 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.639 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.639 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.639 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.639 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.639 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.639 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.639 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.639 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.639 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.639 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.639 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:22.639 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:27:22.639 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.639 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:22.639 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:22.639 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:27:22.639 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmVkMzAxMDhjYjNmNjFjOTdjNmVlNDdmNzBmZjNhODQyZGIzYmE5NDA3ZGQ2M2Q1NjRhNzAwMDEzY2EyOTcwZj32zh8=: 00:27:22.639 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:22.639 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:22.639 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:22.639 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmVkMzAxMDhjYjNmNjFjOTdjNmVlNDdmNzBmZjNhODQyZGIzYmE5NDA3ZGQ2M2Q1NjRhNzAwMDEzY2EyOTcwZj32zh8=: 00:27:22.639 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:22.639 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:27:22.639 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.639 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:22.639 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:22.639 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:22.639 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.639 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:22.639 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.639 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.639 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.639 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.639 
14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:22.639 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:22.639 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:22.639 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.639 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.639 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:22.639 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.639 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:22.639 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:22.639 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:22.639 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:22.639 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.639 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.899 nvme0n1 00:27:22.899 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.899 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.899 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.899 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.899 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:22.899 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.899 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.899 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.899 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.899 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.899 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.899 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:22.899 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:22.899 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:27:22.899 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.899 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:22.899 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:22.899 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:22.899 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDcyZjRhYjJkOTUzNDgxZDhmZTI5YjhkYjI1NWFhZjKbfehk: 00:27:22.900 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmZjMDk4ODY1M2ZjNDlkNDUxMWUxMGU4Njk4ODIyNDY0MTY0MTVkMjhlZWExZGU5NjU5YzQ2ZmYyNzg5ODI3MOE94dY=: 00:27:22.900 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:22.900 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:22.900 14:41:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDcyZjRhYjJkOTUzNDgxZDhmZTI5YjhkYjI1NWFhZjKbfehk: 00:27:22.900 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmZjMDk4ODY1M2ZjNDlkNDUxMWUxMGU4Njk4ODIyNDY0MTY0MTVkMjhlZWExZGU5NjU5YzQ2ZmYyNzg5ODI3MOE94dY=: ]] 00:27:22.900 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmZjMDk4ODY1M2ZjNDlkNDUxMWUxMGU4Njk4ODIyNDY0MTY0MTVkMjhlZWExZGU5NjU5YzQ2ZmYyNzg5ODI3MOE94dY=: 00:27:22.900 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:27:22.900 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.900 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:22.900 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:22.900 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:22.900 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.900 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:22.900 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.900 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.900 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.900 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.900 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:22.900 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:22.900 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 
-- # local -A ip_candidates 00:27:22.900 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.900 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.900 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:22.900 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.900 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:22.900 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:22.900 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:22.900 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:22.900 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.900 14:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.470 nvme0n1 00:27:23.470 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.470 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.470 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.470 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.470 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.470 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.470 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:27:23.470 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.470 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.470 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.470 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.470 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.470 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:27:23.470 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.470 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:23.470 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:23.471 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:23.471 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmM5NmM1OWE4OGIwMjkxMWZjNGZhOTEyZTM1ZmZjMmMwNmMzMDEwOTk2YmFlYjY1mS0ZMA==: 00:27:23.471 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODc0MzhjMTZhY2UwYTU2N2E1MDY2NDE0YmJjM2Q2OWFmMTkyOGRjYzAzYTFiN2ViZQpjDA==: 00:27:23.471 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:23.471 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:23.471 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmM5NmM1OWE4OGIwMjkxMWZjNGZhOTEyZTM1ZmZjMmMwNmMzMDEwOTk2YmFlYjY1mS0ZMA==: 00:27:23.471 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODc0MzhjMTZhY2UwYTU2N2E1MDY2NDE0YmJjM2Q2OWFmMTkyOGRjYzAzYTFiN2ViZQpjDA==: ]] 00:27:23.471 14:41:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODc0MzhjMTZhY2UwYTU2N2E1MDY2NDE0YmJjM2Q2OWFmMTkyOGRjYzAzYTFiN2ViZQpjDA==: 00:27:23.471 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:27:23.471 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.471 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:23.471 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:23.471 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:23.471 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.471 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:23.471 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.471 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.471 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.471 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.471 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:23.471 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:23.471 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:23.471 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.471 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.471 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
[[ -z tcp ]] 00:27:23.471 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.471 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:23.471 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:23.471 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:23.471 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:23.471 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.471 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.041 nvme0n1 00:27:24.041 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.041 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.041 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.041 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.041 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.041 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.041 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.041 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.041 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.041 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:27:24.041 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.041 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:24.041 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:27:24.041 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.041 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:24.041 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:24.041 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:24.041 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmE5NmE3MmQ2MDNlNjAzODEyOWI1YzU1Mzc3MTdmY2McDOqQ: 00:27:24.041 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjU1NTQ4NDZkOWZkMzJlMmY3NjI5ODZmYWY1ZGFiMWH/l5s/: 00:27:24.041 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:24.041 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:24.041 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmE5NmE3MmQ2MDNlNjAzODEyOWI1YzU1Mzc3MTdmY2McDOqQ: 00:27:24.041 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjU1NTQ4NDZkOWZkMzJlMmY3NjI5ODZmYWY1ZGFiMWH/l5s/: ]] 00:27:24.041 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjU1NTQ4NDZkOWZkMzJlMmY3NjI5ODZmYWY1ZGFiMWH/l5s/: 00:27:24.041 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:27:24.041 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:24.041 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:24.041 
14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:24.041 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:24.041 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:24.041 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:24.041 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.041 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.041 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.041 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:24.041 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:24.041 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:24.041 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:24.041 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.041 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.042 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:24.042 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.042 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:24.042 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:24.042 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:24.042 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:24.042 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.042 14:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.613 nvme0n1 00:27:24.613 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.613 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.613 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.613 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.613 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.613 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.613 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.613 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.613 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.613 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.613 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.613 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:24.613 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:27:24.613 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.613 14:41:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:24.613 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:24.613 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:24.613 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDNhNTNhODA0NzhhYjk1NDU2MmYzZDRmNzcyNWM3ZjRkMTE2YzQ3OWI3MmYxOTgyYJgUHg==: 00:27:24.613 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmMyNjBjZTY1Y2E0YmZmNzdiM2E3MGNhMjM1MjBkOGbfXuvP: 00:27:24.613 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:24.613 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:24.613 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDNhNTNhODA0NzhhYjk1NDU2MmYzZDRmNzcyNWM3ZjRkMTE2YzQ3OWI3MmYxOTgyYJgUHg==: 00:27:24.613 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmMyNjBjZTY1Y2E0YmZmNzdiM2E3MGNhMjM1MjBkOGbfXuvP: ]] 00:27:24.613 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmMyNjBjZTY1Y2E0YmZmNzdiM2E3MGNhMjM1MjBkOGbfXuvP: 00:27:24.613 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:27:24.613 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:24.613 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:24.613 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:24.613 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:24.613 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:24.613 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:24.613 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.613 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.613 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.613 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:24.613 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:24.613 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:24.613 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:24.613 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.613 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.613 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:24.613 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.613 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:24.613 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:24.613 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:24.613 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:24.613 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.613 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:27:25.184 nvme0n1 00:27:25.184 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.184 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.184 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:25.184 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.184 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.184 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.184 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.184 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:25.185 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.185 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.185 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.185 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:25.185 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:27:25.185 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:25.185 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:25.185 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:25.185 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:25.185 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NmVkMzAxMDhjYjNmNjFjOTdjNmVlNDdmNzBmZjNhODQyZGIzYmE5NDA3ZGQ2M2Q1NjRhNzAwMDEzY2EyOTcwZj32zh8=: 00:27:25.185 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:25.185 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:25.185 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:25.185 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmVkMzAxMDhjYjNmNjFjOTdjNmVlNDdmNzBmZjNhODQyZGIzYmE5NDA3ZGQ2M2Q1NjRhNzAwMDEzY2EyOTcwZj32zh8=: 00:27:25.185 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:25.185 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:27:25.185 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:25.185 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:25.185 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:25.185 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:25.185 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:25.185 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:25.185 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.185 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.185 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.185 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:25.185 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:25.185 
14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:25.185 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:25.185 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:25.185 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:25.185 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:25.185 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:25.185 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:25.185 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:25.185 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:25.185 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:25.185 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.185 14:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.756 nvme0n1 00:27:25.756 14:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.756 14:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.756 14:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:25.756 14:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.756 14:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.756 14:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.757 14:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.757 14:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:25.757 14:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.757 14:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.757 14:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.757 14:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:25.757 14:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:25.757 14:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:27:25.757 14:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:25.757 14:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:25.757 14:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:25.757 14:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:25.757 14:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDcyZjRhYjJkOTUzNDgxZDhmZTI5YjhkYjI1NWFhZjKbfehk: 00:27:25.757 14:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmZjMDk4ODY1M2ZjNDlkNDUxMWUxMGU4Njk4ODIyNDY0MTY0MTVkMjhlZWExZGU5NjU5YzQ2ZmYyNzg5ODI3MOE94dY=: 00:27:25.757 14:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:25.757 14:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:25.757 14:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZDcyZjRhYjJkOTUzNDgxZDhmZTI5YjhkYjI1NWFhZjKbfehk: 00:27:25.757 14:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmZjMDk4ODY1M2ZjNDlkNDUxMWUxMGU4Njk4ODIyNDY0MTY0MTVkMjhlZWExZGU5NjU5YzQ2ZmYyNzg5ODI3MOE94dY=: ]] 00:27:25.757 14:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmZjMDk4ODY1M2ZjNDlkNDUxMWUxMGU4Njk4ODIyNDY0MTY0MTVkMjhlZWExZGU5NjU5YzQ2ZmYyNzg5ODI3MOE94dY=: 00:27:25.757 14:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:27:25.757 14:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:25.757 14:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:25.757 14:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:25.757 14:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:25.757 14:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:25.757 14:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:25.757 14:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.757 14:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.757 14:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.757 14:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:25.757 14:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:25.757 14:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:25.757 14:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:25.757 14:41:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:25.757 14:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:25.757 14:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:25.757 14:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:25.757 14:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:25.757 14:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:25.757 14:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:25.757 14:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:25.757 14:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.757 14:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.699 nvme0n1 00:27:26.699 14:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.699 14:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:26.699 14:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:26.699 14:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.699 14:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.699 14:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.699 14:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:26.699 14:41:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:26.699 14:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.699 14:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.699 14:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.699 14:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:26.699 14:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:27:26.699 14:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:26.699 14:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:26.699 14:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:26.699 14:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:26.699 14:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmM5NmM1OWE4OGIwMjkxMWZjNGZhOTEyZTM1ZmZjMmMwNmMzMDEwOTk2YmFlYjY1mS0ZMA==: 00:27:26.699 14:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODc0MzhjMTZhY2UwYTU2N2E1MDY2NDE0YmJjM2Q2OWFmMTkyOGRjYzAzYTFiN2ViZQpjDA==: 00:27:26.699 14:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:26.699 14:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:26.699 14:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmM5NmM1OWE4OGIwMjkxMWZjNGZhOTEyZTM1ZmZjMmMwNmMzMDEwOTk2YmFlYjY1mS0ZMA==: 00:27:26.699 14:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODc0MzhjMTZhY2UwYTU2N2E1MDY2NDE0YmJjM2Q2OWFmMTkyOGRjYzAzYTFiN2ViZQpjDA==: ]] 00:27:26.699 14:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ODc0MzhjMTZhY2UwYTU2N2E1MDY2NDE0YmJjM2Q2OWFmMTkyOGRjYzAzYTFiN2ViZQpjDA==: 00:27:26.699 14:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:27:26.699 14:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:26.699 14:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:26.699 14:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:26.699 14:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:26.699 14:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:26.699 14:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:26.699 14:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.699 14:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.699 14:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.699 14:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:26.699 14:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:26.699 14:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:26.699 14:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:26.699 14:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.699 14:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.699 14:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:26.699 14:41:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.699 14:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:26.699 14:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:26.699 14:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:26.699 14:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:26.699 14:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.699 14:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.270 nvme0n1 00:27:27.270 14:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.270 14:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:27.270 14:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:27.270 14:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.270 14:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.270 14:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.270 14:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.270 14:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:27.270 14:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.270 14:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.270 14:41:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.270 14:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:27.270 14:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:27:27.531 14:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:27.531 14:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:27.531 14:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:27.531 14:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:27.531 14:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmE5NmE3MmQ2MDNlNjAzODEyOWI1YzU1Mzc3MTdmY2McDOqQ: 00:27:27.531 14:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjU1NTQ4NDZkOWZkMzJlMmY3NjI5ODZmYWY1ZGFiMWH/l5s/: 00:27:27.531 14:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:27.531 14:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:27.531 14:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmE5NmE3MmQ2MDNlNjAzODEyOWI1YzU1Mzc3MTdmY2McDOqQ: 00:27:27.531 14:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjU1NTQ4NDZkOWZkMzJlMmY3NjI5ODZmYWY1ZGFiMWH/l5s/: ]] 00:27:27.531 14:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjU1NTQ4NDZkOWZkMzJlMmY3NjI5ODZmYWY1ZGFiMWH/l5s/: 00:27:27.531 14:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:27:27.531 14:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:27.531 14:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:27.531 14:41:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:27.531 14:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:27.531 14:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:27.531 14:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:27.531 14:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.531 14:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.531 14:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.531 14:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:27.531 14:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:27.531 14:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:27.531 14:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:27.531 14:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:27.531 14:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:27.531 14:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:27.531 14:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:27.531 14:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:27.531 14:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:27.531 14:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:27.531 14:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:27.531 14:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.531 14:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.102 nvme0n1 00:27:28.102 14:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.102 14:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:28.102 14:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.102 14:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:28.102 14:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.102 14:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.102 14:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:28.102 14:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:28.102 14:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.102 14:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.102 14:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.102 14:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:28.102 14:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:27:28.102 14:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:28.102 14:41:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:28.102 14:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:28.102 14:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:28.103 14:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDNhNTNhODA0NzhhYjk1NDU2MmYzZDRmNzcyNWM3ZjRkMTE2YzQ3OWI3MmYxOTgyYJgUHg==: 00:27:28.103 14:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmMyNjBjZTY1Y2E0YmZmNzdiM2E3MGNhMjM1MjBkOGbfXuvP: 00:27:28.103 14:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:28.103 14:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:28.103 14:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDNhNTNhODA0NzhhYjk1NDU2MmYzZDRmNzcyNWM3ZjRkMTE2YzQ3OWI3MmYxOTgyYJgUHg==: 00:27:28.103 14:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmMyNjBjZTY1Y2E0YmZmNzdiM2E3MGNhMjM1MjBkOGbfXuvP: ]] 00:27:28.103 14:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmMyNjBjZTY1Y2E0YmZmNzdiM2E3MGNhMjM1MjBkOGbfXuvP: 00:27:28.103 14:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:27:28.103 14:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:28.103 14:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:28.103 14:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:28.103 14:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:28.103 14:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:28.103 14:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:28.103 14:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.103 14:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.103 14:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.103 14:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:28.103 14:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:28.103 14:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:28.103 14:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:28.103 14:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:28.103 14:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:28.103 14:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:28.103 14:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:28.103 14:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:28.103 14:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:28.103 14:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:28.103 14:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:28.103 14:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.103 14:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:27:29.046 nvme0n1 00:27:29.046 14:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.046 14:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.046 14:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.046 14:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.046 14:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.046 14:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.046 14:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.046 14:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.046 14:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.046 14:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.046 14:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.046 14:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:29.046 14:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:27:29.047 14:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.047 14:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:29.047 14:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:29.047 14:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:29.047 14:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NmVkMzAxMDhjYjNmNjFjOTdjNmVlNDdmNzBmZjNhODQyZGIzYmE5NDA3ZGQ2M2Q1NjRhNzAwMDEzY2EyOTcwZj32zh8=: 00:27:29.047 14:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:29.047 14:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:29.047 14:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:29.047 14:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmVkMzAxMDhjYjNmNjFjOTdjNmVlNDdmNzBmZjNhODQyZGIzYmE5NDA3ZGQ2M2Q1NjRhNzAwMDEzY2EyOTcwZj32zh8=: 00:27:29.047 14:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:29.047 14:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:27:29.047 14:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:29.047 14:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:29.047 14:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:29.047 14:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:29.047 14:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:29.047 14:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:29.047 14:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.047 14:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.047 14:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.047 14:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:29.047 14:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:29.047 
14:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:29.047 14:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:29.047 14:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.047 14:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.047 14:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:29.047 14:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.047 14:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:29.047 14:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:29.047 14:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:29.047 14:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:29.047 14:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.047 14:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.990 nvme0n1 00:27:29.990 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.990 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.990 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.990 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.990 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.990 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.990 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.990 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.990 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.990 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.990 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.990 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:29.990 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.990 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:29.990 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:29.990 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:29.990 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmM5NmM1OWE4OGIwMjkxMWZjNGZhOTEyZTM1ZmZjMmMwNmMzMDEwOTk2YmFlYjY1mS0ZMA==: 00:27:29.990 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODc0MzhjMTZhY2UwYTU2N2E1MDY2NDE0YmJjM2Q2OWFmMTkyOGRjYzAzYTFiN2ViZQpjDA==: 00:27:29.990 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:29.990 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:29.990 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmM5NmM1OWE4OGIwMjkxMWZjNGZhOTEyZTM1ZmZjMmMwNmMzMDEwOTk2YmFlYjY1mS0ZMA==: 00:27:29.990 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODc0MzhjMTZhY2UwYTU2N2E1MDY2NDE0YmJjM2Q2OWFmMTkyOGRjYzAzYTFiN2ViZQpjDA==: ]] 00:27:29.990 
14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODc0MzhjMTZhY2UwYTU2N2E1MDY2NDE0YmJjM2Q2OWFmMTkyOGRjYzAzYTFiN2ViZQpjDA==: 00:27:29.990 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:29.991 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.991 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.991 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.991 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:27:29.991 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:29.991 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:29.991 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:29.991 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.991 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.991 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:29.991 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.991 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:29.991 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:29.991 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:29.991 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 00:27:29.991 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:27:29.991 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:29.991 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:29.991 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:29.991 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:29.991 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:29.991 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:29.991 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.991 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.991 request: 00:27:29.991 { 00:27:29.991 "name": "nvme0", 00:27:29.991 "trtype": "tcp", 00:27:29.991 "traddr": "10.0.0.1", 00:27:29.991 "adrfam": "ipv4", 00:27:29.991 "trsvcid": "4420", 00:27:29.991 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:29.991 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:29.991 "prchk_reftag": false, 00:27:29.991 "prchk_guard": false, 00:27:29.991 "hdgst": false, 00:27:29.991 "ddgst": false, 00:27:29.991 "allow_unrecognized_csi": false, 00:27:29.991 "method": "bdev_nvme_attach_controller", 00:27:29.991 "req_id": 1 00:27:29.991 } 00:27:29.991 Got JSON-RPC error response 00:27:29.991 response: 00:27:29.991 { 00:27:29.991 "code": -5, 00:27:29.991 "message": "Input/output 
error" 00:27:29.991 } 00:27:29.991 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:29.991 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:29.991 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:29.991 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:29.991 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:29.991 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.991 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:27:29.991 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.991 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.991 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.991 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:27:29.991 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:27:29.991 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:29.991 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:29.991 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:29.991 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.991 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.991 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:29.991 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:27:29.991 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:29.991 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:29.991 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:29.991 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:29.991 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:27:29.991 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:29.991 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:29.991 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:29.991 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:29.991 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:29.991 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:29.991 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.991 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.991 request: 00:27:29.991 { 00:27:29.991 "name": "nvme0", 00:27:29.991 "trtype": "tcp", 00:27:29.991 "traddr": "10.0.0.1", 
00:27:29.991 "adrfam": "ipv4", 00:27:29.991 "trsvcid": "4420", 00:27:29.991 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:29.991 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:29.991 "prchk_reftag": false, 00:27:29.991 "prchk_guard": false, 00:27:29.991 "hdgst": false, 00:27:29.991 "ddgst": false, 00:27:29.991 "dhchap_key": "key2", 00:27:29.991 "allow_unrecognized_csi": false, 00:27:29.991 "method": "bdev_nvme_attach_controller", 00:27:29.991 "req_id": 1 00:27:29.991 } 00:27:29.991 Got JSON-RPC error response 00:27:29.991 response: 00:27:29.991 { 00:27:29.991 "code": -5, 00:27:29.991 "message": "Input/output error" 00:27:29.991 } 00:27:29.991 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:29.991 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:29.991 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:29.991 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:29.991 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:29.991 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.991 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:27:29.991 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.991 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.991 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.991 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:27:29.991 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:27:29.991 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:29.991 14:41:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:29.992 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:29.992 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.992 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.992 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:29.992 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.992 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:29.992 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:29.992 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:29.992 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:29.992 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:27:29.992 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:29.992 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:29.992 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:29.992 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:29.992 14:41:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:29.992 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:29.992 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.992 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.253 request: 00:27:30.253 { 00:27:30.253 "name": "nvme0", 00:27:30.253 "trtype": "tcp", 00:27:30.253 "traddr": "10.0.0.1", 00:27:30.253 "adrfam": "ipv4", 00:27:30.253 "trsvcid": "4420", 00:27:30.253 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:30.253 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:30.253 "prchk_reftag": false, 00:27:30.253 "prchk_guard": false, 00:27:30.253 "hdgst": false, 00:27:30.253 "ddgst": false, 00:27:30.253 "dhchap_key": "key1", 00:27:30.253 "dhchap_ctrlr_key": "ckey2", 00:27:30.253 "allow_unrecognized_csi": false, 00:27:30.253 "method": "bdev_nvme_attach_controller", 00:27:30.253 "req_id": 1 00:27:30.253 } 00:27:30.253 Got JSON-RPC error response 00:27:30.253 response: 00:27:30.253 { 00:27:30.253 "code": -5, 00:27:30.253 "message": "Input/output error" 00:27:30.253 } 00:27:30.253 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:30.253 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:30.253 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:30.253 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:30.253 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:30.253 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@128 -- # get_main_ns_ip 00:27:30.253 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:30.253 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:30.254 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:30.254 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.254 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.254 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:30.254 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.254 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:30.254 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:30.254 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:30.254 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:27:30.254 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.254 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.254 nvme0n1 00:27:30.254 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.254 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:30.254 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.254 14:41:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:30.254 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:30.254 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:30.254 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmE5NmE3MmQ2MDNlNjAzODEyOWI1YzU1Mzc3MTdmY2McDOqQ: 00:27:30.254 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjU1NTQ4NDZkOWZkMzJlMmY3NjI5ODZmYWY1ZGFiMWH/l5s/: 00:27:30.254 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:30.254 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:30.254 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmE5NmE3MmQ2MDNlNjAzODEyOWI1YzU1Mzc3MTdmY2McDOqQ: 00:27:30.254 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjU1NTQ4NDZkOWZkMzJlMmY3NjI5ODZmYWY1ZGFiMWH/l5s/: ]] 00:27:30.254 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjU1NTQ4NDZkOWZkMzJlMmY3NjI5ODZmYWY1ZGFiMWH/l5s/: 00:27:30.254 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:30.254 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.254 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.254 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.254 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.254 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:27:30.254 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.254 14:41:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.515 14:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.515 14:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.515 14:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:30.515 14:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:27:30.515 14:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:30.515 14:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:30.515 14:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:30.515 14:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:30.515 14:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:30.515 14:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:30.515 14:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.515 14:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.515 request: 00:27:30.515 { 00:27:30.515 "name": "nvme0", 00:27:30.515 "dhchap_key": "key1", 00:27:30.515 "dhchap_ctrlr_key": "ckey2", 00:27:30.515 "method": "bdev_nvme_set_keys", 00:27:30.515 "req_id": 1 00:27:30.515 } 00:27:30.515 Got JSON-RPC error response 00:27:30.515 response: 00:27:30.516 { 00:27:30.516 "code": -13, 00:27:30.516 "message": "Permission denied" 00:27:30.516 } 00:27:30.516 
14:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:30.516 14:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:30.516 14:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:30.516 14:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:30.516 14:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:30.516 14:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.516 14:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:30.516 14:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.516 14:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.516 14:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.516 14:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:27:30.516 14:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:27:31.458 14:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.458 14:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:31.458 14:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.458 14:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.458 14:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.718 14:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:27:31.718 14:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:27:32.659 14:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.659 14:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:32.659 14:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.659 14:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.659 14:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.659 14:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:27:32.659 14:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:32.659 14:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.659 14:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:32.659 14:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:32.659 14:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:32.659 14:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmM5NmM1OWE4OGIwMjkxMWZjNGZhOTEyZTM1ZmZjMmMwNmMzMDEwOTk2YmFlYjY1mS0ZMA==: 00:27:32.659 14:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODc0MzhjMTZhY2UwYTU2N2E1MDY2NDE0YmJjM2Q2OWFmMTkyOGRjYzAzYTFiN2ViZQpjDA==: 00:27:32.659 14:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:32.659 14:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:32.659 14:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmM5NmM1OWE4OGIwMjkxMWZjNGZhOTEyZTM1ZmZjMmMwNmMzMDEwOTk2YmFlYjY1mS0ZMA==: 00:27:32.659 14:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODc0MzhjMTZhY2UwYTU2N2E1MDY2NDE0YmJjM2Q2OWFmMTkyOGRjYzAzYTFiN2ViZQpjDA==: ]] 00:27:32.659 14:41:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODc0MzhjMTZhY2UwYTU2N2E1MDY2NDE0YmJjM2Q2OWFmMTkyOGRjYzAzYTFiN2ViZQpjDA==: 00:27:32.659 14:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:27:32.659 14:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:32.659 14:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:32.659 14:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:32.659 14:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.659 14:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.659 14:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:32.659 14:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.659 14:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:32.659 14:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:32.659 14:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:32.659 14:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:27:32.659 14:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.659 14:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.919 nvme0n1 00:27:32.919 14:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.919 14:41:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:32.919 14:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.919 14:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:32.919 14:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:32.919 14:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:32.919 14:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmE5NmE3MmQ2MDNlNjAzODEyOWI1YzU1Mzc3MTdmY2McDOqQ: 00:27:32.919 14:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjU1NTQ4NDZkOWZkMzJlMmY3NjI5ODZmYWY1ZGFiMWH/l5s/: 00:27:32.919 14:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:32.919 14:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:32.919 14:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmE5NmE3MmQ2MDNlNjAzODEyOWI1YzU1Mzc3MTdmY2McDOqQ: 00:27:32.919 14:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjU1NTQ4NDZkOWZkMzJlMmY3NjI5ODZmYWY1ZGFiMWH/l5s/: ]] 00:27:32.919 14:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjU1NTQ4NDZkOWZkMzJlMmY3NjI5ODZmYWY1ZGFiMWH/l5s/: 00:27:32.919 14:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:32.919 14:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:27:32.919 14:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:32.919 14:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:32.919 
14:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:32.919 14:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:32.919 14:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:32.919 14:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:32.919 14:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.919 14:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.919 request: 00:27:32.919 { 00:27:32.919 "name": "nvme0", 00:27:32.919 "dhchap_key": "key2", 00:27:32.919 "dhchap_ctrlr_key": "ckey1", 00:27:32.919 "method": "bdev_nvme_set_keys", 00:27:32.919 "req_id": 1 00:27:32.919 } 00:27:32.919 Got JSON-RPC error response 00:27:32.919 response: 00:27:32.919 { 00:27:32.919 "code": -13, 00:27:32.919 "message": "Permission denied" 00:27:32.919 } 00:27:32.919 14:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:32.919 14:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:32.919 14:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:32.919 14:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:32.919 14:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:32.919 14:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.919 14:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:27:32.919 14:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.919 14:41:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.919 14:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.919 14:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:27:32.919 14:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:27:33.859 14:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.859 14:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:27:33.860 14:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.860 14:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.860 14:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.860 14:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:27:33.860 14:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:27:33.860 14:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:27:33.860 14:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:27:33.860 14:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@514 -- # nvmfcleanup 00:27:33.860 14:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:27:33.860 14:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:33.860 14:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:27:33.860 14:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:33.860 14:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:34.120 rmmod nvme_tcp 00:27:34.120 rmmod nvme_fabrics 00:27:34.120 14:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # 
modprobe -v -r nvme-fabrics 00:27:34.120 14:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:27:34.120 14:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:27:34.120 14:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@515 -- # '[' -n 3541051 ']' 00:27:34.120 14:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # killprocess 3541051 00:27:34.120 14:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 3541051 ']' 00:27:34.120 14:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 3541051 00:27:34.120 14:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:27:34.120 14:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:34.120 14:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3541051 00:27:34.120 14:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:34.120 14:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:34.120 14:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3541051' 00:27:34.120 killing process with pid 3541051 00:27:34.120 14:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 3541051 00:27:34.120 14:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 3541051 00:27:34.120 14:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:27:34.120 14:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:27:34.120 14:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:27:34.120 14:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 
00:27:34.120 14:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # iptables-save 00:27:34.121 14:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:27:34.121 14:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # iptables-restore 00:27:34.121 14:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:34.121 14:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:34.121 14:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:34.121 14:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:34.121 14:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:36.872 14:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:36.872 14:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:36.872 14:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:36.872 14:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:27:36.872 14:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:27:36.872 14:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # echo 0 00:27:36.872 14:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:36.872 14:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@715 -- # rmdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:36.872 14:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:36.872 14:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:36.872 14:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:27:36.872 14:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:27:36.872 14:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:40.177 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:40.177 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:40.177 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:40.177 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:40.177 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:40.177 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:40.177 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:40.177 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:40.177 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:40.177 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:40.177 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:40.177 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:40.177 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:40.177 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:40.177 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:40.177 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:40.177 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:27:40.177 14:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Yw4 /tmp/spdk.key-null.zwC /tmp/spdk.key-sha256.ODs /tmp/spdk.key-sha384.ObW /tmp/spdk.key-sha512.LWA 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:27:40.177 14:41:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:43.482 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:27:43.482 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:27:43.482 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:27:43.482 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:27:43.482 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:27:43.482 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:27:43.482 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:27:43.482 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:27:43.482 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:27:43.482 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:27:43.482 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:27:43.482 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:27:43.482 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:27:43.482 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:27:43.482 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:27:43.482 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:27:43.482 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:27:43.482 00:27:43.482 real 1m2.910s 00:27:43.482 user 0m56.864s 00:27:43.482 sys 0m15.560s 00:27:43.482 14:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:43.482 14:41:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.482 ************************************ 00:27:43.482 END TEST nvmf_auth_host 00:27:43.482 ************************************ 00:27:43.482 14:41:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 
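The `/tmp/spdk.key-*` files removed in the cleanup above hold DH-HMAC-CHAP secrets like the `DHHC-1:...` strings echoed earlier in this log. A minimal sketch decoding one of those printed secrets, assuming the standard NVMe in-band authentication secret layout (`DHHC-1:<hh>:<base64(key || crc32_le)>:`); that layout is an assumption from the NVMe-oF auth format, not something this log states:

```python
import base64
import struct
import zlib

# One of the secrets printed earlier in this log (host/auth.sh@45).
secret = "DHHC-1:01:MmE5NmE3MmQ2MDNlNjAzODEyOWI1YzU1Mzc3MTdmY2McDOqQ:"

# Assumed field layout: prefix, hash hint, base64 blob (trailing ':' dropped).
prefix, hh, b64 = secret.rstrip(":").split(":")
raw = base64.b64decode(b64)

# Assumption: the last four bytes carry a little-endian CRC32
# of the preceding key material.
key, stored_crc = raw[:-4], struct.unpack("<I", raw[-4:])[0]
crc_ok = stored_crc == zlib.crc32(key)

print(prefix, hh, len(key), crc_ok)
```

Under these assumptions the blob splits into a 32-byte key plus a 4-byte checksum, which matches the SHA-256 (`hmac(sha256)`) digest configured by `nvmet_auth_set_key` in the traces above.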
00:27:43.482 14:41:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:43.482 14:41:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:43.482 14:41:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:43.482 14:41:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.482 ************************************ 00:27:43.482 START TEST nvmf_digest 00:27:43.482 ************************************ 00:27:43.482 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:43.744 * Looking for test storage... 00:27:43.744 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:43.744 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:43.744 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lcov --version 00:27:43.744 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:43.744 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:43.744 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:43.744 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:43.744 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:43.744 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:27:43.744 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:27:43.744 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:27:43.744 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- 
scripts/common.sh@337 -- # read -ra ver2 00:27:43.744 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:27:43.744 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:27:43.744 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:27:43.744 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:43.744 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:27:43.744 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:27:43.744 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:43.744 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:43.744 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:27:43.744 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:27:43.744 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:43.744 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:27:43.744 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:27:43.744 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:27:43.744 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:27:43.744 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:43.744 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:27:43.744 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:27:43.744 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:43.744 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 
00:27:43.744 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:27:43.744 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:43.744 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:43.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:43.744 --rc genhtml_branch_coverage=1 00:27:43.744 --rc genhtml_function_coverage=1 00:27:43.744 --rc genhtml_legend=1 00:27:43.744 --rc geninfo_all_blocks=1 00:27:43.744 --rc geninfo_unexecuted_blocks=1 00:27:43.744 00:27:43.744 ' 00:27:43.744 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:43.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:43.744 --rc genhtml_branch_coverage=1 00:27:43.744 --rc genhtml_function_coverage=1 00:27:43.744 --rc genhtml_legend=1 00:27:43.744 --rc geninfo_all_blocks=1 00:27:43.744 --rc geninfo_unexecuted_blocks=1 00:27:43.744 00:27:43.744 ' 00:27:43.744 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:43.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:43.744 --rc genhtml_branch_coverage=1 00:27:43.744 --rc genhtml_function_coverage=1 00:27:43.744 --rc genhtml_legend=1 00:27:43.744 --rc geninfo_all_blocks=1 00:27:43.744 --rc geninfo_unexecuted_blocks=1 00:27:43.744 00:27:43.744 ' 00:27:43.744 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:43.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:43.744 --rc genhtml_branch_coverage=1 00:27:43.744 --rc genhtml_function_coverage=1 00:27:43.744 --rc genhtml_legend=1 00:27:43.744 --rc geninfo_all_blocks=1 00:27:43.744 --rc geninfo_unexecuted_blocks=1 00:27:43.744 00:27:43.744 ' 00:27:43.744 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:43.744 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:27:43.744 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:43.744 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:43.744 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:43.744 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:43.744 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:43.744 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:43.744 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:43.744 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:43.744 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:43.744 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:43.744 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:43.744 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:43.744 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:43.744 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:43.744 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:43.744 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:43.744 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:43.744 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:27:43.744 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:43.744 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:43.744 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:43.744 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.744 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.745 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.745 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:27:43.745 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.745 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:27:43.745 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:43.745 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:43.745 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:43.745 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:43.745 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:27:43.745 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:43.745 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:43.745 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:43.745 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:43.745 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:43.745 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:27:43.745 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:27:43.745 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:27:43.745 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:27:43.745 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:27:43.745 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:27:43.745 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:43.745 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # prepare_net_devs 00:27:43.745 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@436 -- # local -g is_hw=no 00:27:43.745 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # remove_spdk_ns 00:27:43.745 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:43.745 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:43.745 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:43.745 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:27:43.745 14:41:24 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:27:43.745 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:27:43.745 14:41:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:51.891 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:51.891 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:27:51.891 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:51.891 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:51.891 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:51.891 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:51.891 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:51.891 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:27:51.891 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:51.891 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:27:51.891 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:27:51.891 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:27:51.891 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:27:51.891 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:27:51.891 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:27:51.891 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:51.891 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:51.891 14:41:31 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:51.891 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:51.891 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:51.891 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:51.891 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:51.891 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:51.891 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:51.891 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:51.891 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- 
# echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:51.892 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:51.892 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:51.892 Found net devices under 0000:31:00.0: cvl_0_0 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:51.892 Found net devices under 0000:31:00.1: cvl_0_1 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@440 -- # is_hw=yes 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:51.892 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:51.892 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.657 ms 00:27:51.892 00:27:51.892 --- 10.0.0.2 ping statistics --- 00:27:51.892 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:51.892 rtt min/avg/max/mdev = 0.657/0.657/0.657/0.000 ms 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:51.892 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:51.892 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:27:51.892 00:27:51.892 --- 10.0.0.1 ping statistics --- 00:27:51.892 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:51.892 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # return 0 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:51.892 ************************************ 00:27:51.892 START TEST nvmf_digest_clean 00:27:51.892 ************************************ 00:27:51.892 
14:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # nvmfpid=3558625 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # waitforlisten 3558625 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 3558625 ']' 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:51.892 14:41:31 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:51.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:51.892 14:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:51.893 [2024-10-14 14:41:32.052112] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:27:51.893 [2024-10-14 14:41:32.052169] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:51.893 [2024-10-14 14:41:32.123988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:51.893 [2024-10-14 14:41:32.166922] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:51.893 [2024-10-14 14:41:32.166957] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:51.893 [2024-10-14 14:41:32.166965] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:51.893 [2024-10-14 14:41:32.166972] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:51.893 [2024-10-14 14:41:32.166977] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:51.893 [2024-10-14 14:41:32.167621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:52.154 14:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:52.154 14:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:27:52.154 14:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:27:52.154 14:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:52.154 14:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:52.154 14:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:52.154 14:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:27:52.154 14:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:27:52.154 14:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:27:52.154 14:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.154 14:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:52.415 null0 00:27:52.415 [2024-10-14 14:41:32.943691] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:52.415 [2024-10-14 14:41:32.967887] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:52.415 14:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.415 14:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:27:52.415 14:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:52.415 14:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:52.415 14:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:52.415 14:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:27:52.415 14:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:27:52.415 14:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:52.415 14:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3558904 00:27:52.415 14:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3558904 /var/tmp/bperf.sock 00:27:52.415 14:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 3558904 ']' 00:27:52.415 14:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:52.415 14:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:52.415 14:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:52.415 14:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:52.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:27:52.415 14:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:52.415 14:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:52.415 [2024-10-14 14:41:33.036457] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:27:52.415 [2024-10-14 14:41:33.036504] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3558904 ] 00:27:52.415 [2024-10-14 14:41:33.114361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:52.676 [2024-10-14 14:41:33.150990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:53.249 14:41:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:53.249 14:41:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:27:53.249 14:41:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:53.249 14:41:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:53.249 14:41:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:53.510 14:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:53.510 14:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:53.510 nvme0n1 00:27:53.772 14:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:53.772 14:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:53.772 Running I/O for 2 seconds... 00:27:55.657 19656.00 IOPS, 76.78 MiB/s [2024-10-14T12:41:36.645Z] 19727.50 IOPS, 77.06 MiB/s 00:27:55.918 Latency(us) 00:27:55.918 [2024-10-14T12:41:36.645Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:55.918 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:55.918 nvme0n1 : 2.05 19344.33 75.56 0.00 0.00 6478.24 2717.01 45875.20 00:27:55.918 [2024-10-14T12:41:36.645Z] =================================================================================================================== 00:27:55.918 [2024-10-14T12:41:36.645Z] Total : 19344.33 75.56 0.00 0.00 6478.24 2717.01 45875.20 00:27:55.918 { 00:27:55.918 "results": [ 00:27:55.918 { 00:27:55.918 "job": "nvme0n1", 00:27:55.918 "core_mask": "0x2", 00:27:55.918 "workload": "randread", 00:27:55.918 "status": "finished", 00:27:55.918 "queue_depth": 128, 00:27:55.918 "io_size": 4096, 00:27:55.918 "runtime": 2.046233, 00:27:55.918 "iops": 19344.326867956876, 00:27:55.918 "mibps": 75.56377682795654, 00:27:55.918 "io_failed": 0, 00:27:55.918 "io_timeout": 0, 00:27:55.918 "avg_latency_us": 6478.24452045912, 00:27:55.918 "min_latency_us": 2717.0133333333333, 00:27:55.918 "max_latency_us": 45875.2 00:27:55.918 } 00:27:55.918 ], 00:27:55.918 "core_count": 1 00:27:55.918 } 00:27:55.918 14:41:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:55.918 14:41:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 
00:27:55.918 14:41:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:55.918 14:41:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:55.918 | select(.opcode=="crc32c") 00:27:55.918 | "\(.module_name) \(.executed)"' 00:27:55.918 14:41:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:55.918 14:41:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:55.918 14:41:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:55.918 14:41:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:55.918 14:41:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:55.918 14:41:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3558904 00:27:55.918 14:41:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 3558904 ']' 00:27:55.918 14:41:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 3558904 00:27:55.918 14:41:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:27:55.918 14:41:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:55.918 14:41:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3558904 00:27:56.180 14:41:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:56.180 14:41:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- 
# '[' reactor_1 = sudo ']' 00:27:56.180 14:41:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3558904' 00:27:56.180 killing process with pid 3558904 00:27:56.180 14:41:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 3558904 00:27:56.180 Received shutdown signal, test time was about 2.000000 seconds 00:27:56.180 00:27:56.180 Latency(us) 00:27:56.180 [2024-10-14T12:41:36.907Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:56.180 [2024-10-14T12:41:36.907Z] =================================================================================================================== 00:27:56.180 [2024-10-14T12:41:36.907Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:56.180 14:41:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 3558904 00:27:56.180 14:41:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:27:56.180 14:41:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:56.180 14:41:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:56.180 14:41:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:56.180 14:41:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:27:56.180 14:41:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:27:56.180 14:41:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:56.180 14:41:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3559658 00:27:56.180 14:41:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3559658 
/var/tmp/bperf.sock 00:27:56.180 14:41:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 3559658 ']' 00:27:56.180 14:41:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:27:56.180 14:41:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:56.180 14:41:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:56.180 14:41:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:56.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:56.180 14:41:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:56.180 14:41:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:56.180 [2024-10-14 14:41:36.819427] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:27:56.180 [2024-10-14 14:41:36.819482] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3559658 ] 00:27:56.180 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:56.180 Zero copy mechanism will not be used. 
00:27:56.180 [2024-10-14 14:41:36.896722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:56.441 [2024-10-14 14:41:36.925911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:57.012 14:41:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:57.012 14:41:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:27:57.012 14:41:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:57.012 14:41:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:57.012 14:41:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:57.273 14:41:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:57.273 14:41:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:57.534 nvme0n1 00:27:57.534 14:41:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:57.534 14:41:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:57.534 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:57.534 Zero copy mechanism will not be used. 00:27:57.534 Running I/O for 2 seconds... 
00:27:59.861 3341.00 IOPS, 417.62 MiB/s [2024-10-14T12:41:40.588Z] 3442.50 IOPS, 430.31 MiB/s 00:27:59.861 Latency(us) 00:27:59.861 [2024-10-14T12:41:40.588Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:59.861 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:27:59.861 nvme0n1 : 2.00 3445.73 430.72 0.00 0.00 4640.93 665.60 7427.41 00:27:59.861 [2024-10-14T12:41:40.588Z] =================================================================================================================== 00:27:59.861 [2024-10-14T12:41:40.588Z] Total : 3445.73 430.72 0.00 0.00 4640.93 665.60 7427.41 00:27:59.861 { 00:27:59.861 "results": [ 00:27:59.861 { 00:27:59.861 "job": "nvme0n1", 00:27:59.861 "core_mask": "0x2", 00:27:59.861 "workload": "randread", 00:27:59.861 "status": "finished", 00:27:59.861 "queue_depth": 16, 00:27:59.861 "io_size": 131072, 00:27:59.861 "runtime": 2.002766, 00:27:59.861 "iops": 3445.734549118569, 00:27:59.861 "mibps": 430.7168186398211, 00:27:59.861 "io_failed": 0, 00:27:59.861 "io_timeout": 0, 00:27:59.861 "avg_latency_us": 4640.927216345457, 00:27:59.861 "min_latency_us": 665.6, 00:27:59.861 "max_latency_us": 7427.413333333333 00:27:59.861 } 00:27:59.861 ], 00:27:59.861 "core_count": 1 00:27:59.861 } 00:27:59.861 14:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:59.861 14:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:59.861 14:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:59.861 14:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:59.861 | select(.opcode=="crc32c") 00:27:59.861 | "\(.module_name) \(.executed)"' 00:27:59.861 14:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:59.861 14:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:59.861 14:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:59.861 14:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:59.861 14:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:59.861 14:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3559658 00:27:59.861 14:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 3559658 ']' 00:27:59.861 14:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 3559658 00:27:59.861 14:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:27:59.861 14:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:59.861 14:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3559658 00:27:59.861 14:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:59.861 14:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:59.862 14:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3559658' 00:27:59.862 killing process with pid 3559658 00:27:59.862 14:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 3559658 00:27:59.862 Received shutdown signal, test time was about 2.000000 seconds 
00:27:59.862 00:27:59.862 Latency(us) 00:27:59.862 [2024-10-14T12:41:40.589Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:59.862 [2024-10-14T12:41:40.589Z] =================================================================================================================== 00:27:59.862 [2024-10-14T12:41:40.589Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:59.862 14:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 3559658 00:28:00.123 14:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:28:00.123 14:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:00.123 14:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:00.123 14:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:00.123 14:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:00.123 14:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:00.123 14:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:00.123 14:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3560338 00:28:00.123 14:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3560338 /var/tmp/bperf.sock 00:28:00.123 14:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 3560338 ']' 00:28:00.123 14:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:00.123 14:41:40 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:00.123 14:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:00.123 14:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:00.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:00.123 14:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:00.123 14:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:00.123 [2024-10-14 14:41:40.654991] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:28:00.123 [2024-10-14 14:41:40.655050] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3560338 ] 00:28:00.123 [2024-10-14 14:41:40.733459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:00.123 [2024-10-14 14:41:40.763809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:01.065 14:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:01.065 14:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:28:01.065 14:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:01.065 14:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:01.065 14:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:01.065 14:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:01.065 14:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:01.325 nvme0n1 00:28:01.325 14:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:01.325 14:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:01.325 Running I/O for 2 seconds... 
00:28:03.653 21655.00 IOPS, 84.59 MiB/s [2024-10-14T12:41:44.380Z] 21676.00 IOPS, 84.67 MiB/s 00:28:03.653 Latency(us) 00:28:03.653 [2024-10-14T12:41:44.380Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:03.653 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:03.653 nvme0n1 : 2.01 21694.11 84.74 0.00 0.00 5891.89 1966.08 10485.76 00:28:03.653 [2024-10-14T12:41:44.380Z] =================================================================================================================== 00:28:03.653 [2024-10-14T12:41:44.380Z] Total : 21694.11 84.74 0.00 0.00 5891.89 1966.08 10485.76 00:28:03.653 { 00:28:03.653 "results": [ 00:28:03.653 { 00:28:03.653 "job": "nvme0n1", 00:28:03.653 "core_mask": "0x2", 00:28:03.653 "workload": "randwrite", 00:28:03.653 "status": "finished", 00:28:03.653 "queue_depth": 128, 00:28:03.653 "io_size": 4096, 00:28:03.653 "runtime": 2.007181, 00:28:03.653 "iops": 21694.10730771166, 00:28:03.653 "mibps": 84.74260667074867, 00:28:03.653 "io_failed": 0, 00:28:03.653 "io_timeout": 0, 00:28:03.653 "avg_latency_us": 5891.893068773348, 00:28:03.653 "min_latency_us": 1966.08, 00:28:03.653 "max_latency_us": 10485.76 00:28:03.653 } 00:28:03.653 ], 00:28:03.653 "core_count": 1 00:28:03.653 } 00:28:03.653 14:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:03.653 14:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:03.653 14:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:03.653 14:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:03.653 | select(.opcode=="crc32c") 00:28:03.653 | "\(.module_name) \(.executed)"' 00:28:03.653 14:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:03.653 14:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:03.653 14:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:03.653 14:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:03.653 14:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:03.653 14:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3560338 00:28:03.653 14:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 3560338 ']' 00:28:03.653 14:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 3560338 00:28:03.653 14:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:28:03.653 14:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:03.653 14:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3560338 00:28:03.653 14:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:03.653 14:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:03.653 14:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3560338' 00:28:03.653 killing process with pid 3560338 00:28:03.653 14:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 3560338 00:28:03.653 Received shutdown signal, test time was about 2.000000 seconds 
00:28:03.653 00:28:03.653 Latency(us) 00:28:03.653 [2024-10-14T12:41:44.380Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:03.653 [2024-10-14T12:41:44.380Z] =================================================================================================================== 00:28:03.653 [2024-10-14T12:41:44.381Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:03.654 14:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 3560338 00:28:03.654 14:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:28:03.654 14:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:03.654 14:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:03.654 14:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:03.654 14:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:03.654 14:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:03.654 14:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:03.654 14:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3561024 00:28:03.654 14:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3561024 /var/tmp/bperf.sock 00:28:03.654 14:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 3561024 ']' 00:28:03.654 14:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:03.654 14:41:44 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:03.654 14:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:03.654 14:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:03.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:03.654 14:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:03.654 14:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:03.915 [2024-10-14 14:41:44.427229] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:28:03.915 [2024-10-14 14:41:44.427284] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3561024 ] 00:28:03.915 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:03.915 Zero copy mechanism will not be used. 
00:28:03.915 [2024-10-14 14:41:44.503575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:03.915 [2024-10-14 14:41:44.532887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:04.857 14:41:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:04.857 14:41:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:28:04.857 14:41:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:04.857 14:41:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:04.857 14:41:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:04.857 14:41:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:04.857 14:41:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:05.118 nvme0n1 00:28:05.378 14:41:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:05.378 14:41:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:05.378 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:05.378 Zero copy mechanism will not be used. 00:28:05.378 Running I/O for 2 seconds... 
00:28:07.262 3277.00 IOPS, 409.62 MiB/s [2024-10-14T12:41:47.989Z] 3713.00 IOPS, 464.12 MiB/s 00:28:07.262 Latency(us) 00:28:07.262 [2024-10-14T12:41:47.989Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:07.262 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:07.262 nvme0n1 : 2.00 3711.89 463.99 0.00 0.00 4303.51 1645.23 6526.29 00:28:07.262 [2024-10-14T12:41:47.989Z] =================================================================================================================== 00:28:07.262 [2024-10-14T12:41:47.989Z] Total : 3711.89 463.99 0.00 0.00 4303.51 1645.23 6526.29 00:28:07.262 { 00:28:07.262 "results": [ 00:28:07.262 { 00:28:07.262 "job": "nvme0n1", 00:28:07.262 "core_mask": "0x2", 00:28:07.262 "workload": "randwrite", 00:28:07.262 "status": "finished", 00:28:07.262 "queue_depth": 16, 00:28:07.262 "io_size": 131072, 00:28:07.262 "runtime": 2.004906, 00:28:07.262 "iops": 3711.89472224633, 00:28:07.262 "mibps": 463.9868402807912, 00:28:07.262 "io_failed": 0, 00:28:07.262 "io_timeout": 0, 00:28:07.262 "avg_latency_us": 4303.514338439487, 00:28:07.262 "min_latency_us": 1645.2266666666667, 00:28:07.262 "max_latency_us": 6526.293333333333 00:28:07.262 } 00:28:07.262 ], 00:28:07.262 "core_count": 1 00:28:07.262 } 00:28:07.262 14:41:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:07.262 14:41:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:07.262 14:41:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:07.262 14:41:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:07.262 | select(.opcode=="crc32c") 00:28:07.262 | "\(.module_name) \(.executed)"' 00:28:07.263 14:41:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:07.523 14:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:07.523 14:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:07.523 14:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:07.523 14:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:07.523 14:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3561024 00:28:07.523 14:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 3561024 ']' 00:28:07.523 14:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 3561024 00:28:07.523 14:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:28:07.523 14:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:07.523 14:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3561024 00:28:07.523 14:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:07.523 14:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:07.523 14:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3561024' 00:28:07.523 killing process with pid 3561024 00:28:07.523 14:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 3561024 00:28:07.523 Received shutdown signal, test time was about 2.000000 seconds 
00:28:07.523 00:28:07.523 Latency(us) 00:28:07.523 [2024-10-14T12:41:48.250Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:07.523 [2024-10-14T12:41:48.250Z] =================================================================================================================== 00:28:07.523 [2024-10-14T12:41:48.250Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:07.523 14:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 3561024 00:28:07.784 14:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3558625 00:28:07.784 14:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 3558625 ']' 00:28:07.784 14:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 3558625 00:28:07.784 14:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:28:07.784 14:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:07.784 14:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3558625 00:28:07.784 14:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:07.784 14:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:07.784 14:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3558625' 00:28:07.784 killing process with pid 3558625 00:28:07.784 14:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 3558625 00:28:07.784 14:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 3558625 00:28:07.784 00:28:07.784 
real 0m16.508s 00:28:07.784 user 0m32.820s 00:28:07.784 sys 0m3.418s 00:28:07.784 14:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:07.784 14:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:07.784 ************************************ 00:28:07.784 END TEST nvmf_digest_clean 00:28:07.784 ************************************ 00:28:08.046 14:41:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:28:08.046 14:41:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:08.046 14:41:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:08.046 14:41:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:08.046 ************************************ 00:28:08.046 START TEST nvmf_digest_error 00:28:08.046 ************************************ 00:28:08.046 14:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:28:08.046 14:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:28:08.046 14:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:08.046 14:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:08.046 14:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:08.046 14:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # nvmfpid=3561857 00:28:08.046 14:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # waitforlisten 3561857 00:28:08.046 14:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@506 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:08.046 14:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 3561857 ']' 00:28:08.046 14:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:08.046 14:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:08.046 14:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:08.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:08.046 14:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:08.046 14:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:08.046 [2024-10-14 14:41:48.623820] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:28:08.046 [2024-10-14 14:41:48.623872] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:08.046 [2024-10-14 14:41:48.692952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:08.046 [2024-10-14 14:41:48.729430] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:08.046 [2024-10-14 14:41:48.729463] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:08.046 [2024-10-14 14:41:48.729475] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:08.046 [2024-10-14 14:41:48.729482] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:08.046 [2024-10-14 14:41:48.729487] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:08.046 [2024-10-14 14:41:48.730084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:08.046 14:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:08.046 14:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:28:08.046 14:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:08.046 14:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:08.046 14:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:08.307 14:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:08.307 14:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:28:08.307 14:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.307 14:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:08.307 [2024-10-14 14:41:48.794520] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:28:08.307 14:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.307 14:41:48 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:28:08.308 14:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:28:08.308 14:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.308 14:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:08.308 null0 00:28:08.308 [2024-10-14 14:41:48.876358] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:08.308 [2024-10-14 14:41:48.900563] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:08.308 14:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.308 14:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:28:08.308 14:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:08.308 14:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:08.308 14:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:08.308 14:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:08.308 14:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3562026 00:28:08.308 14:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3562026 /var/tmp/bperf.sock 00:28:08.308 14:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 3562026 ']' 00:28:08.308 14:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 
00:28:08.308 14:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:08.308 14:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:08.308 14:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:08.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:08.308 14:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:08.308 14:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:08.308 [2024-10-14 14:41:48.955460] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:28:08.308 [2024-10-14 14:41:48.955507] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3562026 ] 00:28:08.308 [2024-10-14 14:41:49.030515] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:08.569 [2024-10-14 14:41:49.060585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:08.569 14:41:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:08.569 14:41:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:28:08.569 14:41:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:08.570 14:41:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:08.830 14:41:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:08.830 14:41:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.830 14:41:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:08.830 14:41:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.830 14:41:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:08.830 14:41:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:09.091 nvme0n1 00:28:09.091 14:41:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:09.091 14:41:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.091 14:41:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:09.091 14:41:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.091 14:41:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:09.091 14:41:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:09.091 Running I/O for 2 seconds... 00:28:09.353 [2024-10-14 14:41:49.848397] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:09.353 [2024-10-14 14:41:49.848426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.353 [2024-10-14 14:41:49.848435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.353 [2024-10-14 14:41:49.860148] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:09.353 [2024-10-14 14:41:49.860168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:17192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.353 [2024-10-14 14:41:49.860176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.353 [2024-10-14 14:41:49.872284] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:09.353 [2024-10-14 14:41:49.872301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:25288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.353 [2024-10-14 14:41:49.872308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.353 [2024-10-14 14:41:49.885149] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:09.353 [2024-10-14 14:41:49.885167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:9184 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.353 [2024-10-14 14:41:49.885174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.353 [2024-10-14 14:41:49.898497] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:09.353 [2024-10-14 14:41:49.898515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:24549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.353 [2024-10-14 14:41:49.898522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.353 [2024-10-14 14:41:49.910603] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:09.353 [2024-10-14 14:41:49.910621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:5776 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.353 [2024-10-14 14:41:49.910628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.353 [2024-10-14 14:41:49.924056] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:09.354 [2024-10-14 14:41:49.924075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:21463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.354 [2024-10-14 14:41:49.924082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.354 [2024-10-14 14:41:49.935877] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:09.354 [2024-10-14 14:41:49.935895] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:14105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.354 [2024-10-14 14:41:49.935902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.354 [2024-10-14 14:41:49.947362] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:09.354 [2024-10-14 14:41:49.947378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:25542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.354 [2024-10-14 14:41:49.947385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.354 [2024-10-14 14:41:49.959942] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:09.354 [2024-10-14 14:41:49.959960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:23302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.354 [2024-10-14 14:41:49.959967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.354 [2024-10-14 14:41:49.972354] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:09.354 [2024-10-14 14:41:49.972373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.354 [2024-10-14 14:41:49.972387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.354 [2024-10-14 14:41:49.984665] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x19391b0) 00:28:09.354 [2024-10-14 14:41:49.984682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:14463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.354 [2024-10-14 14:41:49.984689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.354 [2024-10-14 14:41:49.996530] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:09.354 [2024-10-14 14:41:49.996547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.354 [2024-10-14 14:41:49.996554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.354 [2024-10-14 14:41:50.011603] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:09.354 [2024-10-14 14:41:50.011623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:21561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.354 [2024-10-14 14:41:50.011629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.354 [2024-10-14 14:41:50.026799] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:09.354 [2024-10-14 14:41:50.026818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:18721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.354 [2024-10-14 14:41:50.026825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.354 [2024-10-14 14:41:50.039946] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:09.354 [2024-10-14 14:41:50.039964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:7773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.354 [2024-10-14 14:41:50.039971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.354 [2024-10-14 14:41:50.049784] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:09.354 [2024-10-14 14:41:50.049803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:19047 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.354 [2024-10-14 14:41:50.049811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.354 [2024-10-14 14:41:50.062989] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:09.354 [2024-10-14 14:41:50.063008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:9864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.354 [2024-10-14 14:41:50.063015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.354 [2024-10-14 14:41:50.076252] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:09.354 [2024-10-14 14:41:50.076271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:22435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.354 [2024-10-14 14:41:50.076278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:28:09.616 [2024-10-14 14:41:50.089084] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:09.616 [2024-10-14 14:41:50.089102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.616 [2024-10-14 14:41:50.089109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.616 [2024-10-14 14:41:50.101889] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:09.616 [2024-10-14 14:41:50.101907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:3359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.616 [2024-10-14 14:41:50.101914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.616 [2024-10-14 14:41:50.113764] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:09.616 [2024-10-14 14:41:50.113781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.616 [2024-10-14 14:41:50.113788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.616 [2024-10-14 14:41:50.125535] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:09.616 [2024-10-14 14:41:50.125553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:12939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.616 [2024-10-14 14:41:50.125560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.616 [2024-10-14 14:41:50.138943] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:09.616 [2024-10-14 14:41:50.138961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:12556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.616 [2024-10-14 14:41:50.138967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.616 [2024-10-14 14:41:50.150902] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:09.616 [2024-10-14 14:41:50.150919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:4967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.616 [2024-10-14 14:41:50.150926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.616 [2024-10-14 14:41:50.162097] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:09.616 [2024-10-14 14:41:50.162117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:23403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.616 [2024-10-14 14:41:50.162125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.616 [2024-10-14 14:41:50.176424] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:09.616 [2024-10-14 14:41:50.176443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:2679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.616 [2024-10-14 14:41:50.176450] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.616 [2024-10-14 14:41:50.189235] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:09.616 [2024-10-14 14:41:50.189254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:17628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.616 [2024-10-14 14:41:50.189264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.616 [2024-10-14 14:41:50.202059] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:09.616 [2024-10-14 14:41:50.202081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:23602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.616 [2024-10-14 14:41:50.202087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.616 [2024-10-14 14:41:50.212889] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:09.616 [2024-10-14 14:41:50.212907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:6188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.616 [2024-10-14 14:41:50.212914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.616 [2024-10-14 14:41:50.226258] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:09.616 [2024-10-14 14:41:50.226277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:20529 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:09.616 [2024-10-14 14:41:50.226283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.616 [2024-10-14 14:41:50.239199] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:09.616 [2024-10-14 14:41:50.239217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:18208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.616 [2024-10-14 14:41:50.239224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.616 [2024-10-14 14:41:50.251074] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:09.616 [2024-10-14 14:41:50.251092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.616 [2024-10-14 14:41:50.251099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.616 [2024-10-14 14:41:50.262632] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:09.616 [2024-10-14 14:41:50.262650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:24168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.616 [2024-10-14 14:41:50.262657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.616 [2024-10-14 14:41:50.276022] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:09.616 [2024-10-14 14:41:50.276040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:107 nsid:1 lba:1717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.617 [2024-10-14 14:41:50.276047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.617 [2024-10-14 14:41:50.289143] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:09.617 [2024-10-14 14:41:50.289162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:24721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.617 [2024-10-14 14:41:50.289170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.617 [2024-10-14 14:41:50.299570] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:09.617 [2024-10-14 14:41:50.299592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:8972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.617 [2024-10-14 14:41:50.299598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.617 [2024-10-14 14:41:50.313205] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:09.617 [2024-10-14 14:41:50.313224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:3055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.617 [2024-10-14 14:41:50.313230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.617 [2024-10-14 14:41:50.326653] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:09.617 [2024-10-14 
14:41:50.326671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:22830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.617 [2024-10-14 14:41:50.326678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.617 [2024-10-14 14:41:50.339851] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:09.617 [2024-10-14 14:41:50.339869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:22421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.617 [2024-10-14 14:41:50.339876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.879 [2024-10-14 14:41:50.353092] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:09.879 [2024-10-14 14:41:50.353110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:2804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.879 [2024-10-14 14:41:50.353116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.879 [2024-10-14 14:41:50.363620] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:09.879 [2024-10-14 14:41:50.363639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.879 [2024-10-14 14:41:50.363646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.879 [2024-10-14 14:41:50.375888] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x19391b0) 00:28:09.879 [2024-10-14 14:41:50.375907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:13561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.879 [2024-10-14 14:41:50.375914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.879 [2024-10-14 14:41:50.388624] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:09.879 [2024-10-14 14:41:50.388642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:1604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.879 [2024-10-14 14:41:50.388649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.879 [2024-10-14 14:41:50.402017] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:09.879 [2024-10-14 14:41:50.402036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.879 [2024-10-14 14:41:50.402043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.879 [2024-10-14 14:41:50.414588] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:09.879 [2024-10-14 14:41:50.414606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:19848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.879 [2024-10-14 14:41:50.414613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.879 [2024-10-14 14:41:50.428196] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:09.879 [2024-10-14 14:41:50.428214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:18335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.879 [2024-10-14 14:41:50.428221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.879 [2024-10-14 14:41:50.439662] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:09.879 [2024-10-14 14:41:50.439679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:9319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.879 [2024-10-14 14:41:50.439687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.879 [2024-10-14 14:41:50.450186] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:09.879 [2024-10-14 14:41:50.450203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:24632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.879 [2024-10-14 14:41:50.450210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.879 [2024-10-14 14:41:50.464209] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:09.879 [2024-10-14 14:41:50.464227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.879 [2024-10-14 14:41:50.464234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:28:09.879 [2024-10-14 14:41:50.477095] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:09.879 [2024-10-14 14:41:50.477112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:13534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.879 [2024-10-14 14:41:50.477119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.879 [2024-10-14 14:41:50.488562] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:09.879 [2024-10-14 14:41:50.488580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:10612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.879 [2024-10-14 14:41:50.488587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.879 [2024-10-14 14:41:50.500750] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:09.879 [2024-10-14 14:41:50.500767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:24716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.879 [2024-10-14 14:41:50.500774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.879 [2024-10-14 14:41:50.513166] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:09.879 [2024-10-14 14:41:50.513184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:10548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.879 [2024-10-14 14:41:50.513194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.879 [2024-10-14 14:41:50.526706] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:09.879 [2024-10-14 14:41:50.526725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.879 [2024-10-14 14:41:50.526733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.879 [2024-10-14 14:41:50.539846] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:09.879 [2024-10-14 14:41:50.539864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3731 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.879 [2024-10-14 14:41:50.539870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.879 [2024-10-14 14:41:50.551767] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:09.879 [2024-10-14 14:41:50.551784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:10571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.879 [2024-10-14 14:41:50.551791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.879 [2024-10-14 14:41:50.563782] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:09.879 [2024-10-14 14:41:50.563801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:20581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.879 [2024-10-14 
14:41:50.563807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.879 [2024-10-14 14:41:50.576476] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:09.879 [2024-10-14 14:41:50.576494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.879 [2024-10-14 14:41:50.576501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.879 [2024-10-14 14:41:50.588960] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:09.879 [2024-10-14 14:41:50.588978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:19610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.879 [2024-10-14 14:41:50.588985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.879 [2024-10-14 14:41:50.601193] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:09.879 [2024-10-14 14:41:50.601211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:17128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.879 [2024-10-14 14:41:50.601218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.141 [2024-10-14 14:41:50.614226] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:10.141 [2024-10-14 14:41:50.614244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:8749 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.141 [2024-10-14 14:41:50.614251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.141 [2024-10-14 14:41:50.625960] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:10.141 [2024-10-14 14:41:50.625981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:76 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.141 [2024-10-14 14:41:50.625988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.141 [2024-10-14 14:41:50.639485] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:10.141 [2024-10-14 14:41:50.639503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:5651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.141 [2024-10-14 14:41:50.639510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.141 [2024-10-14 14:41:50.652306] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:10.141 [2024-10-14 14:41:50.652324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:5385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.141 [2024-10-14 14:41:50.652331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.141 [2024-10-14 14:41:50.662626] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:10.141 [2024-10-14 14:41:50.662645] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:20269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.141 [2024-10-14 14:41:50.662651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.141 [2024-10-14 14:41:50.675968] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:10.141 [2024-10-14 14:41:50.675986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:14077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.141 [2024-10-14 14:41:50.675992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.141 [2024-10-14 14:41:50.690020] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:10.141 [2024-10-14 14:41:50.690038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:17716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.141 [2024-10-14 14:41:50.690045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.141 [2024-10-14 14:41:50.700613] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:10.141 [2024-10-14 14:41:50.700630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:24703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.141 [2024-10-14 14:41:50.700637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.141 [2024-10-14 14:41:50.712857] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x19391b0) 00:28:10.141 [2024-10-14 14:41:50.712875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:10929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.141 [2024-10-14 14:41:50.712882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.141 [2024-10-14 14:41:50.725548] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:10.141 [2024-10-14 14:41:50.725566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:24001 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.141 [2024-10-14 14:41:50.725575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.141 [2024-10-14 14:41:50.738658] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:10.141 [2024-10-14 14:41:50.738676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:7653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.141 [2024-10-14 14:41:50.738683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.141 [2024-10-14 14:41:50.751044] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:10.141 [2024-10-14 14:41:50.751067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:3405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.141 [2024-10-14 14:41:50.751074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.141 [2024-10-14 14:41:50.766276] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:10.141 [2024-10-14 14:41:50.766294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.141 [2024-10-14 14:41:50.766301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.141 [2024-10-14 14:41:50.778747] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:10.141 [2024-10-14 14:41:50.778764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.141 [2024-10-14 14:41:50.778771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.141 [2024-10-14 14:41:50.788771] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:10.141 [2024-10-14 14:41:50.788790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:3005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.141 [2024-10-14 14:41:50.788796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.141 [2024-10-14 14:41:50.803374] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:10.141 [2024-10-14 14:41:50.803392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:1193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.141 [2024-10-14 14:41:50.803399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:28:10.141 [2024-10-14 14:41:50.815392] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:10.141 [2024-10-14 14:41:50.815409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:11259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.141 [2024-10-14 14:41:50.815415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.141 20047.00 IOPS, 78.31 MiB/s [2024-10-14T12:41:50.868Z] [2024-10-14 14:41:50.827009] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:10.141 [2024-10-14 14:41:50.827026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:12056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.141 [2024-10-14 14:41:50.827033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.141 [2024-10-14 14:41:50.839227] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:10.141 [2024-10-14 14:41:50.839248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:1028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.141 [2024-10-14 14:41:50.839255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.141 [2024-10-14 14:41:50.852700] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:10.141 [2024-10-14 14:41:50.852718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:1780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.141 [2024-10-14 14:41:50.852725] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.141 [2024-10-14 14:41:50.867195] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:10.141 [2024-10-14 14:41:50.867213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:6095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.142 [2024-10-14 14:41:50.867220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.404 [2024-10-14 14:41:50.878432] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:10.404 [2024-10-14 14:41:50.878450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:15860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.404 [2024-10-14 14:41:50.878457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.404 [2024-10-14 14:41:50.891376] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:10.404 [2024-10-14 14:41:50.891394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:10160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.404 [2024-10-14 14:41:50.891400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.404 [2024-10-14 14:41:50.903145] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:10.404 [2024-10-14 14:41:50.903163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:6595 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:10.404 [2024-10-14 14:41:50.903169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.404 [2024-10-14 14:41:50.916744] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:10.404 [2024-10-14 14:41:50.916762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.404 [2024-10-14 14:41:50.916768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.404 [2024-10-14 14:41:50.929439] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:10.404 [2024-10-14 14:41:50.929457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:24944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.404 [2024-10-14 14:41:50.929463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.404 [2024-10-14 14:41:50.941983] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:10.404 [2024-10-14 14:41:50.942000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:3436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.404 [2024-10-14 14:41:50.942007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.404 [2024-10-14 14:41:50.952049] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:10.404 [2024-10-14 14:41:50.952071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:53 nsid:1 lba:6617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.404 [2024-10-14 14:41:50.952078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.404 [2024-10-14 14:41:50.965153] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:10.404 [2024-10-14 14:41:50.965171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:23075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.404 [2024-10-14 14:41:50.965177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.404 [2024-10-14 14:41:50.979087] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:10.404 [2024-10-14 14:41:50.979105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:21798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.404 [2024-10-14 14:41:50.979111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.404 [2024-10-14 14:41:50.992095] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:10.404 [2024-10-14 14:41:50.992113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:16268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.404 [2024-10-14 14:41:50.992119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.404 [2024-10-14 14:41:51.004760] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:10.404 [2024-10-14 
14:41:51.004778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:9514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.404 [2024-10-14 14:41:51.004785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.404 [2024-10-14 14:41:51.017376] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:10.404 [2024-10-14 14:41:51.017395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:2258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.404 [2024-10-14 14:41:51.017401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.404 [2024-10-14 14:41:51.029527] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:10.404 [2024-10-14 14:41:51.029544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.404 [2024-10-14 14:41:51.029551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.404 [2024-10-14 14:41:51.042505] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:10.404 [2024-10-14 14:41:51.042522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:23065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.404 [2024-10-14 14:41:51.042529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.404 [2024-10-14 14:41:51.054395] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x19391b0) 00:28:10.404 [2024-10-14 14:41:51.054412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:16602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.404 [2024-10-14 14:41:51.054421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.404 [2024-10-14 14:41:51.065412] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:10.404 [2024-10-14 14:41:51.065430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.404 [2024-10-14 14:41:51.065436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.404 [2024-10-14 14:41:51.078337] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:10.404 [2024-10-14 14:41:51.078354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:15009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.404 [2024-10-14 14:41:51.078361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.404 [2024-10-14 14:41:51.092560] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:10.404 [2024-10-14 14:41:51.092577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:2251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.404 [2024-10-14 14:41:51.092584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.404 [2024-10-14 14:41:51.104792] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:10.404 [2024-10-14 14:41:51.104810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:24379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.404 [2024-10-14 14:41:51.104816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.404 [2024-10-14 14:41:51.117538] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:10.404 [2024-10-14 14:41:51.117555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.404 [2024-10-14 14:41:51.117562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.404 [2024-10-14 14:41:51.129402] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:10.404 [2024-10-14 14:41:51.129420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:11405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.404 [2024-10-14 14:41:51.129427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.666 [2024-10-14 14:41:51.143716] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:10.666 [2024-10-14 14:41:51.143734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:17672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.666 [2024-10-14 14:41:51.143741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:28:10.666 [2024-10-14 14:41:51.157035] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:10.666 [2024-10-14 14:41:51.157052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:2513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.666 [2024-10-14 14:41:51.157059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.666 [2024-10-14 14:41:51.170090] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:10.666 [2024-10-14 14:41:51.170108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:18043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.666 [2024-10-14 14:41:51.170114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.666 [2024-10-14 14:41:51.182333] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:10.666 [2024-10-14 14:41:51.182352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:19202 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.666 [2024-10-14 14:41:51.182359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.666 [2024-10-14 14:41:51.193001] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:10.666 [2024-10-14 14:41:51.193018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.667 [2024-10-14 14:41:51.193025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.667 [2024-10-14 14:41:51.207570] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:10.667 [2024-10-14 14:41:51.207588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:12112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.667 [2024-10-14 14:41:51.207595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.667 [2024-10-14 14:41:51.221991] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:10.667 [2024-10-14 14:41:51.222008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:4753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.667 [2024-10-14 14:41:51.222015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.667 [2024-10-14 14:41:51.232451] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:10.667 [2024-10-14 14:41:51.232468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:13964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.667 [2024-10-14 14:41:51.232474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.667 [2024-10-14 14:41:51.245800] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:10.667 [2024-10-14 14:41:51.245817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:15411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.667 [2024-10-14 
14:41:51.245824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.667 [2024-10-14 14:41:51.259146] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:10.667 [2024-10-14 14:41:51.259164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:5531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.667 [2024-10-14 14:41:51.259171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.667 [2024-10-14 14:41:51.272527] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:10.667 [2024-10-14 14:41:51.272545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:21240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.667 [2024-10-14 14:41:51.272555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.667 [2024-10-14 14:41:51.283183] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:10.667 [2024-10-14 14:41:51.283200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:4582 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.667 [2024-10-14 14:41:51.283206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.667 [2024-10-14 14:41:51.297556] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:10.667 [2024-10-14 14:41:51.297573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1121 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.667 [2024-10-14 14:41:51.297580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.667 [2024-10-14 14:41:51.310752] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:10.667 [2024-10-14 14:41:51.310768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.667 [2024-10-14 14:41:51.310774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.667 [2024-10-14 14:41:51.323378] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:10.667 [2024-10-14 14:41:51.323395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:19262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.667 [2024-10-14 14:41:51.323401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.667 [2024-10-14 14:41:51.335588] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:10.667 [2024-10-14 14:41:51.335605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:18426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.667 [2024-10-14 14:41:51.335611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.667 [2024-10-14 14:41:51.348728] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:10.667 [2024-10-14 14:41:51.348745] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.667 [2024-10-14 14:41:51.348751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.667 [2024-10-14 14:41:51.360607] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:10.667 [2024-10-14 14:41:51.360624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:20499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.667 [2024-10-14 14:41:51.360630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.667 [2024-10-14 14:41:51.374122] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:10.667 [2024-10-14 14:41:51.374139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.667 [2024-10-14 14:41:51.374146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.667 [2024-10-14 14:41:51.386091] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:10.667 [2024-10-14 14:41:51.386111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:6629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.667 [2024-10-14 14:41:51.386117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.929 [2024-10-14 14:41:51.398447] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x19391b0) 00:28:10.929 [2024-10-14 14:41:51.398464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11440 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.929 [2024-10-14 14:41:51.398470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.929 [2024-10-14 14:41:51.410110] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:10.929 [2024-10-14 14:41:51.410127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:16888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.929 [2024-10-14 14:41:51.410134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.929 [2024-10-14 14:41:51.423020] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:10.929 [2024-10-14 14:41:51.423037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.929 [2024-10-14 14:41:51.423043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.929 [2024-10-14 14:41:51.436549] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:10.929 [2024-10-14 14:41:51.436565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:8436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.929 [2024-10-14 14:41:51.436572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.929 [2024-10-14 14:41:51.447332] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:10.929 [2024-10-14 14:41:51.447348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:16946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.929 [2024-10-14 14:41:51.447355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.929 [2024-10-14 14:41:51.459319] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:10.929 [2024-10-14 14:41:51.459336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:15383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.929 [2024-10-14 14:41:51.459343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.929 [2024-10-14 14:41:51.473191] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:10.929 [2024-10-14 14:41:51.473208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:5436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.929 [2024-10-14 14:41:51.473215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.929 [2024-10-14 14:41:51.485154] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:10.929 [2024-10-14 14:41:51.485171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:13472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.929 [2024-10-14 14:41:51.485178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:28:10.929 [2024-10-14 14:41:51.497415] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:10.929 [2024-10-14 14:41:51.497432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:3667 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.929 [2024-10-14 14:41:51.497438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.929 [2024-10-14 14:41:51.510570] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:10.929 [2024-10-14 14:41:51.510587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:7747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.929 [2024-10-14 14:41:51.510595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.929 [2024-10-14 14:41:51.522562] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:10.929 [2024-10-14 14:41:51.522579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:24670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.929 [2024-10-14 14:41:51.522586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.929 [2024-10-14 14:41:51.533939] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:10.929 [2024-10-14 14:41:51.533956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:15510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.929 [2024-10-14 14:41:51.533963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.929 [2024-10-14 14:41:51.548517] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:10.929 [2024-10-14 14:41:51.548534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:20477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.929 [2024-10-14 14:41:51.548541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.929 [2024-10-14 14:41:51.560845] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:10.929 [2024-10-14 14:41:51.560862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:4992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.929 [2024-10-14 14:41:51.560868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.929 [2024-10-14 14:41:51.574460] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:10.929 [2024-10-14 14:41:51.574477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:16253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.929 [2024-10-14 14:41:51.574484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.929 [2024-10-14 14:41:51.587524] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:10.929 [2024-10-14 14:41:51.587540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:17071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.929 [2024-10-14 
14:41:51.587546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.929 [2024-10-14 14:41:51.598878] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:10.929 [2024-10-14 14:41:51.598894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:11718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.929 [2024-10-14 14:41:51.598907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.929 [2024-10-14 14:41:51.612373] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:10.929 [2024-10-14 14:41:51.612390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:22530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.929 [2024-10-14 14:41:51.612397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.929 [2024-10-14 14:41:51.625739] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:10.929 [2024-10-14 14:41:51.625756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:8266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.929 [2024-10-14 14:41:51.625762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.929 [2024-10-14 14:41:51.638659] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:10.929 [2024-10-14 14:41:51.638675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4347 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.929 [2024-10-14 14:41:51.638681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:10.929 [2024-10-14 14:41:51.649259] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:10.929 [2024-10-14 14:41:51.649276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:10.929 [2024-10-14 14:41:51.649283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.191 [2024-10-14 14:41:51.662174] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:11.191 [2024-10-14 14:41:51.662191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:19244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.191 [2024-10-14 14:41:51.662198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.191 [2024-10-14 14:41:51.675772] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:11.191 [2024-10-14 14:41:51.675789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:6434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.191 [2024-10-14 14:41:51.675795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.191 [2024-10-14 14:41:51.688572] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:11.192 [2024-10-14 14:41:51.688589] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:23474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.192 [2024-10-14 14:41:51.688596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.192 [2024-10-14 14:41:51.701238] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:11.192 [2024-10-14 14:41:51.701255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.192 [2024-10-14 14:41:51.701262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.192 [2024-10-14 14:41:51.713786] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:11.192 [2024-10-14 14:41:51.713807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:24979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.192 [2024-10-14 14:41:51.713813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.192 [2024-10-14 14:41:51.726292] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:11.192 [2024-10-14 14:41:51.726309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:14103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.192 [2024-10-14 14:41:51.726315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.192 [2024-10-14 14:41:51.736942] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x19391b0) 00:28:11.192 [2024-10-14 14:41:51.736959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:6975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.192 [2024-10-14 14:41:51.736965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.192 [2024-10-14 14:41:51.750232] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:11.192 [2024-10-14 14:41:51.750248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:4887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.192 [2024-10-14 14:41:51.750254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.192 [2024-10-14 14:41:51.762483] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:11.192 [2024-10-14 14:41:51.762499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:1784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.192 [2024-10-14 14:41:51.762506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.192 [2024-10-14 14:41:51.775997] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:11.192 [2024-10-14 14:41:51.776013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:24552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.192 [2024-10-14 14:41:51.776020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.192 [2024-10-14 14:41:51.787014] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:11.192 [2024-10-14 14:41:51.787030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:25153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.192 [2024-10-14 14:41:51.787036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.192 [2024-10-14 14:41:51.798845] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:11.192 [2024-10-14 14:41:51.798862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:8968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.192 [2024-10-14 14:41:51.798869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.192 [2024-10-14 14:41:51.812089] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:11.192 [2024-10-14 14:41:51.812106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:6224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.192 [2024-10-14 14:41:51.812116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.192 [2024-10-14 14:41:51.825040] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19391b0) 00:28:11.192 [2024-10-14 14:41:51.825057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:23555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.192 [2024-10-14 14:41:51.825068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:28:11.192 20159.00 IOPS, 78.75 MiB/s 00:28:11.192 Latency(us) 00:28:11.192 [2024-10-14T12:41:51.919Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:11.192 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:11.192 nvme0n1 : 2.00 20185.11 78.85 0.00 0.00 6335.93 2225.49 21626.88 00:28:11.192 [2024-10-14T12:41:51.919Z] =================================================================================================================== 00:28:11.192 [2024-10-14T12:41:51.919Z] Total : 20185.11 78.85 0.00 0.00 6335.93 2225.49 21626.88 00:28:11.192 { 00:28:11.192 "results": [ 00:28:11.192 { 00:28:11.192 "job": "nvme0n1", 00:28:11.192 "core_mask": "0x2", 00:28:11.192 "workload": "randread", 00:28:11.192 "status": "finished", 00:28:11.192 "queue_depth": 128, 00:28:11.192 "io_size": 4096, 00:28:11.192 "runtime": 2.003754, 00:28:11.192 "iops": 20185.11254375537, 00:28:11.192 "mibps": 78.84809587404442, 00:28:11.192 "io_failed": 0, 00:28:11.192 "io_timeout": 0, 00:28:11.192 "avg_latency_us": 6335.928983830292, 00:28:11.192 "min_latency_us": 2225.4933333333333, 00:28:11.192 "max_latency_us": 21626.88 00:28:11.192 } 00:28:11.192 ], 00:28:11.192 "core_count": 1 00:28:11.192 } 00:28:11.192 14:41:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:11.192 14:41:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:11.192 14:41:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:11.192 14:41:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:11.192 | .driver_specific 00:28:11.192 | .nvme_error 00:28:11.192 | .status_code 00:28:11.192 | .command_transient_transport_error' 
00:28:11.453 14:41:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 158 > 0 )) 00:28:11.453 14:41:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3562026 00:28:11.453 14:41:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 3562026 ']' 00:28:11.453 14:41:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 3562026 00:28:11.453 14:41:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:28:11.453 14:41:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:11.453 14:41:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3562026 00:28:11.453 14:41:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:11.453 14:41:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:11.453 14:41:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3562026' 00:28:11.453 killing process with pid 3562026 00:28:11.453 14:41:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 3562026 00:28:11.453 Received shutdown signal, test time was about 2.000000 seconds 00:28:11.453 00:28:11.453 Latency(us) 00:28:11.453 [2024-10-14T12:41:52.180Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:11.453 [2024-10-14T12:41:52.180Z] =================================================================================================================== 00:28:11.453 [2024-10-14T12:41:52.180Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:11.453 14:41:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@974 -- # wait 3562026 00:28:11.453 14:41:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:28:11.453 14:41:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:11.453 14:41:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:11.453 14:41:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:28:11.453 14:41:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:28:11.453 14:41:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3562588 00:28:11.453 14:41:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3562588 /var/tmp/bperf.sock 00:28:11.715 14:41:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 3562588 ']' 00:28:11.715 14:41:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:28:11.715 14:41:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:11.715 14:41:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:11.715 14:41:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:11.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:28:11.716 14:41:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:11.716 14:41:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:11.716 [2024-10-14 14:41:52.232485] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:28:11.716 [2024-10-14 14:41:52.232540] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3562588 ] 00:28:11.716 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:11.716 Zero copy mechanism will not be used. 00:28:11.716 [2024-10-14 14:41:52.310611] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:11.716 [2024-10-14 14:41:52.340036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:12.659 14:41:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:12.659 14:41:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:28:12.659 14:41:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:12.659 14:41:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:12.659 14:41:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:12.659 14:41:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.659 14:41:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@10 -- # set +x 00:28:12.659 14:41:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.659 14:41:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:12.659 14:41:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:12.920 nvme0n1 00:28:12.920 14:41:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:28:12.920 14:41:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.920 14:41:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:12.920 14:41:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.920 14:41:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:12.920 14:41:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:13.181 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:13.181 Zero copy mechanism will not be used. 00:28:13.181 Running I/O for 2 seconds... 
00:28:13.181 [2024-10-14 14:41:53.695218] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:13.181 [2024-10-14 14:41:53.695249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.182 [2024-10-14 14:41:53.695259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:13.182 [2024-10-14 14:41:53.702837] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:13.182 [2024-10-14 14:41:53.702858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.182 [2024-10-14 14:41:53.702866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:13.182 [2024-10-14 14:41:53.712970] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:13.182 [2024-10-14 14:41:53.712990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.182 [2024-10-14 14:41:53.712997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:13.182 [2024-10-14 14:41:53.722248] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:13.182 [2024-10-14 14:41:53.722267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.182 [2024-10-14 14:41:53.722273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.182 [2024-10-14 14:41:53.733210] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:13.182 [2024-10-14 14:41:53.733229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.182 [2024-10-14 14:41:53.733236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:13.182 [2024-10-14 14:41:53.743284] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:13.182 [2024-10-14 14:41:53.743302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.182 [2024-10-14 14:41:53.743309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:13.182 [2024-10-14 14:41:53.752906] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:13.182 [2024-10-14 14:41:53.752928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.182 [2024-10-14 14:41:53.752935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:13.182 [2024-10-14 14:41:53.758355] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:13.182 [2024-10-14 14:41:53.758373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.182 [2024-10-14 14:41:53.758379] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.182 [2024-10-14 14:41:53.768559] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:13.182 [2024-10-14 14:41:53.768577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.182 [2024-10-14 14:41:53.768584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:13.182 [2024-10-14 14:41:53.778933] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:13.182 [2024-10-14 14:41:53.778952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.182 [2024-10-14 14:41:53.778959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:13.182 [2024-10-14 14:41:53.787331] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:13.182 [2024-10-14 14:41:53.787349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.182 [2024-10-14 14:41:53.787355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:13.182 [2024-10-14 14:41:53.798212] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:13.182 [2024-10-14 14:41:53.798229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0
00:28:13.182 [2024-10-14 14:41:53.798236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:13.182 [2024-10-14 14:41:53.807357] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0)
00:28:13.182 [2024-10-14 14:41:53.807375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.182 [2024-10-14 14:41:53.807381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:13.182 [2024-10-14 14:41:53.817144] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0)
00:28:13.182 [2024-10-14 14:41:53.817163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.182 [2024-10-14 14:41:53.817169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:13.182 [2024-10-14 14:41:53.824391] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0)
00:28:13.182 [2024-10-14 14:41:53.824410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.182 [2024-10-14 14:41:53.824416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:13.182 [2024-10-14 14:41:53.833103] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0)
00:28:13.182 [2024-10-14 14:41:53.833122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.182 [2024-10-14 14:41:53.833128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:13.182 [2024-10-14 14:41:53.839267] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0)
00:28:13.182 [2024-10-14 14:41:53.839285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.182 [2024-10-14 14:41:53.839292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:13.182 [2024-10-14 14:41:53.849610] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0)
00:28:13.182 [2024-10-14 14:41:53.849628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.182 [2024-10-14 14:41:53.849635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:13.182 [2024-10-14 14:41:53.858962] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0)
00:28:13.182 [2024-10-14 14:41:53.858980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.182 [2024-10-14 14:41:53.858986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:13.182 [2024-10-14 14:41:53.865748] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0)
00:28:13.182 [2024-10-14 14:41:53.865766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.182 [2024-10-14 14:41:53.865772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:13.182 [2024-10-14 14:41:53.875419] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0)
00:28:13.182 [2024-10-14 14:41:53.875438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.182 [2024-10-14 14:41:53.875444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:13.182 [2024-10-14 14:41:53.886521] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0)
00:28:13.182 [2024-10-14 14:41:53.886539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.182 [2024-10-14 14:41:53.886546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:13.182 [2024-10-14 14:41:53.898009] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0)
00:28:13.182 [2024-10-14 14:41:53.898028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.182 [2024-10-14 14:41:53.898034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:13.182 [2024-10-14 14:41:53.908412] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0)
00:28:13.182 [2024-10-14 14:41:53.908430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.182 [2024-10-14 14:41:53.908440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:13.445 [2024-10-14 14:41:53.913547] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0)
00:28:13.445 [2024-10-14 14:41:53.913566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.445 [2024-10-14 14:41:53.913572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:13.445 [2024-10-14 14:41:53.919411] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0)
00:28:13.445 [2024-10-14 14:41:53.919429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.445 [2024-10-14 14:41:53.919435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:13.445 [2024-10-14 14:41:53.927655] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0)
00:28:13.445 [2024-10-14 14:41:53.927673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.445 [2024-10-14 14:41:53.927679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:13.445 [2024-10-14 14:41:53.934069] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0)
00:28:13.445 [2024-10-14 14:41:53.934087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.445 [2024-10-14 14:41:53.934093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:13.445 [2024-10-14 14:41:53.944754] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0)
00:28:13.445 [2024-10-14 14:41:53.944772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.445 [2024-10-14 14:41:53.944778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:13.445 [2024-10-14 14:41:53.955334] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0)
00:28:13.445 [2024-10-14 14:41:53.955352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.445 [2024-10-14 14:41:53.955359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:13.445 [2024-10-14 14:41:53.965785] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0)
00:28:13.445 [2024-10-14 14:41:53.965803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.445 [2024-10-14 14:41:53.965810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:13.445 [2024-10-14 14:41:53.976949] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0)
00:28:13.445 [2024-10-14 14:41:53.976967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.445 [2024-10-14 14:41:53.976973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:13.445 [2024-10-14 14:41:53.986109] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0)
00:28:13.445 [2024-10-14 14:41:53.986131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.445 [2024-10-14 14:41:53.986137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:13.445 [2024-10-14 14:41:53.997022] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0)
00:28:13.445 [2024-10-14 14:41:53.997041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.445 [2024-10-14 14:41:53.997047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:13.445 [2024-10-14 14:41:54.002263] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0)
00:28:13.445 [2024-10-14 14:41:54.002281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.445 [2024-10-14 14:41:54.002288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:13.445 [2024-10-14 14:41:54.010144] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0)
00:28:13.445 [2024-10-14 14:41:54.010162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.445 [2024-10-14 14:41:54.010169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:13.445 [2024-10-14 14:41:54.015226] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0)
00:28:13.445 [2024-10-14 14:41:54.015244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.445 [2024-10-14 14:41:54.015251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:13.445 [2024-10-14 14:41:54.022387] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0)
00:28:13.445 [2024-10-14 14:41:54.022406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.445 [2024-10-14 14:41:54.022412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:13.445 [2024-10-14 14:41:54.030294] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0)
00:28:13.445 [2024-10-14 14:41:54.030312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.445 [2024-10-14 14:41:54.030319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:13.445 [2024-10-14 14:41:54.036346] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0)
00:28:13.445 [2024-10-14 14:41:54.036364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.445 [2024-10-14 14:41:54.036370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:13.445 [2024-10-14 14:41:54.043670] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0)
00:28:13.445 [2024-10-14 14:41:54.043688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.445 [2024-10-14 14:41:54.043695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:13.445 [2024-10-14 14:41:54.049149] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0)
00:28:13.445 [2024-10-14 14:41:54.049167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.445 [2024-10-14 14:41:54.049174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:13.445 [2024-10-14 14:41:54.058588] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0)
00:28:13.445 [2024-10-14 14:41:54.058606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.445 [2024-10-14 14:41:54.058612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:13.445 [2024-10-14 14:41:54.063588] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0)
00:28:13.445 [2024-10-14 14:41:54.063606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.445 [2024-10-14 14:41:54.063612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:13.445 [2024-10-14 14:41:54.072246] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0)
00:28:13.445 [2024-10-14 14:41:54.072264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.445 [2024-10-14 14:41:54.072270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:13.445 [2024-10-14 14:41:54.083813] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0)
00:28:13.445 [2024-10-14 14:41:54.083831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.445 [2024-10-14 14:41:54.083837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:13.445 [2024-10-14 14:41:54.089412] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0)
00:28:13.445 [2024-10-14 14:41:54.089430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.445 [2024-10-14 14:41:54.089437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:13.445 [2024-10-14 14:41:54.096708] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0)
00:28:13.446 [2024-10-14 14:41:54.096727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.446 [2024-10-14 14:41:54.096733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:13.446 [2024-10-14 14:41:54.106524] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0)
00:28:13.446 [2024-10-14 14:41:54.106543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.446 [2024-10-14 14:41:54.106549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:13.446 [2024-10-14 14:41:54.116501] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0)
00:28:13.446 [2024-10-14 14:41:54.116519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.446 [2024-10-14 14:41:54.116528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:13.446 [2024-10-14 14:41:54.127170] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0)
00:28:13.446 [2024-10-14 14:41:54.127188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.446 [2024-10-14 14:41:54.127194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:13.446 [2024-10-14 14:41:54.136143] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0)
00:28:13.446 [2024-10-14 14:41:54.136162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.446 [2024-10-14 14:41:54.136168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:13.446 [2024-10-14 14:41:54.143729] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0)
00:28:13.446 [2024-10-14 14:41:54.143747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.446 [2024-10-14 14:41:54.143753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:13.446 [2024-10-14 14:41:54.151471] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0)
00:28:13.446 [2024-10-14 14:41:54.151489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.446 [2024-10-14 14:41:54.151495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:13.446 [2024-10-14 14:41:54.158833] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0)
00:28:13.446 [2024-10-14 14:41:54.158851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.446 [2024-10-14 14:41:54.158858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:13.446 [2024-10-14 14:41:54.165729] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0)
00:28:13.446 [2024-10-14 14:41:54.165747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.446 [2024-10-14 14:41:54.165754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:13.446 [2024-10-14 14:41:54.173227] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0)
00:28:13.446 [2024-10-14 14:41:54.173246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.446 [2024-10-14 14:41:54.173252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:13.708 [2024-10-14 14:41:54.181231] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0)
00:28:13.708 [2024-10-14 14:41:54.181249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.708 [2024-10-14 14:41:54.181256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:13.708 [2024-10-14 14:41:54.191353] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0)
00:28:13.708 [2024-10-14 14:41:54.191371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.708 [2024-10-14 14:41:54.191378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:13.708 [2024-10-14 14:41:54.197026] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0)
00:28:13.708 [2024-10-14 14:41:54.197045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.708 [2024-10-14 14:41:54.197051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:13.708 [2024-10-14 14:41:54.203621] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0)
00:28:13.708 [2024-10-14 14:41:54.203640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.708 [2024-10-14 14:41:54.203646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:13.708 [2024-10-14 14:41:54.212103] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0)
00:28:13.708 [2024-10-14 14:41:54.212121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.708 [2024-10-14 14:41:54.212127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:13.708 [2024-10-14 14:41:54.217398] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0)
00:28:13.708 [2024-10-14 14:41:54.217417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.708 [2024-10-14 14:41:54.217424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:13.708 [2024-10-14 14:41:54.225351] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0)
00:28:13.708 [2024-10-14 14:41:54.225369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.708 [2024-10-14 14:41:54.225375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:13.708 [2024-10-14 14:41:54.236883] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0)
00:28:13.708 [2024-10-14 14:41:54.236901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.708 [2024-10-14 14:41:54.236907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:13.708 [2024-10-14 14:41:54.248010] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0)
00:28:13.708 [2024-10-14 14:41:54.248029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.708 [2024-10-14 14:41:54.248035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:13.708 [2024-10-14 14:41:54.259321] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0)
00:28:13.708 [2024-10-14 14:41:54.259339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.708 [2024-10-14 14:41:54.259349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:13.708 [2024-10-14 14:41:54.265797] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0)
00:28:13.708 [2024-10-14 14:41:54.265815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.708 [2024-10-14 14:41:54.265822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:13.708 [2024-10-14 14:41:54.270969] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0)
00:28:13.708 [2024-10-14 14:41:54.270987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.708 [2024-10-14 14:41:54.270994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:13.708 [2024-10-14 14:41:54.278023] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0)
00:28:13.708 [2024-10-14 14:41:54.278042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.708 [2024-10-14 14:41:54.278048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:13.708 [2024-10-14 14:41:54.287512] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0)
00:28:13.708 [2024-10-14 14:41:54.287531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.708 [2024-10-14 14:41:54.287538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:13.708 [2024-10-14 14:41:54.292460] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0)
00:28:13.708 [2024-10-14 14:41:54.292478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.709 [2024-10-14 14:41:54.292484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:13.709 [2024-10-14 14:41:54.299596] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0)
00:28:13.709 [2024-10-14 14:41:54.299614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.709 [2024-10-14 14:41:54.299620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:13.709 [2024-10-14 14:41:54.308409] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0)
00:28:13.709 [2024-10-14 14:41:54.308427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.709 [2024-10-14 14:41:54.308433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:13.709 [2024-10-14 14:41:54.317550] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0)
00:28:13.709 [2024-10-14 14:41:54.317569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.709 [2024-10-14 14:41:54.317575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:13.709 [2024-10-14 14:41:54.326896] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0)
00:28:13.709 [2024-10-14 14:41:54.326917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.709 [2024-10-14 14:41:54.326924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:13.709 [2024-10-14 14:41:54.337808] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0)
00:28:13.709 [2024-10-14 14:41:54.337827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.709 [2024-10-14 14:41:54.337833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:13.709 [2024-10-14 14:41:54.345741] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0)
00:28:13.709 [2024-10-14 14:41:54.345759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.709 [2024-10-14 14:41:54.345766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:13.709 [2024-10-14 14:41:54.352288] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0)
00:28:13.709 [2024-10-14 14:41:54.352306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.709 [2024-10-14 14:41:54.352312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:13.709 [2024-10-14 14:41:54.361255] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0)
00:28:13.709 [2024-10-14 14:41:54.361273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.709 [2024-10-14 14:41:54.361279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:13.709 [2024-10-14 14:41:54.371783] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0)
00:28:13.709 [2024-10-14 14:41:54.371801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.709 [2024-10-14 14:41:54.371807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:13.709 [2024-10-14 14:41:54.377350] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0)
00:28:13.709 [2024-10-14 14:41:54.377368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.709 [2024-10-14 14:41:54.377374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:13.709 [2024-10-14 14:41:54.383246] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0)
00:28:13.709 [2024-10-14 14:41:54.383264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.709 [2024-10-14 14:41:54.383270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:13.709 [2024-10-14 14:41:54.394415] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0)
00:28:13.709 [2024-10-14 14:41:54.394432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.709 [2024-10-14 14:41:54.394439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:13.709 [2024-10-14 14:41:54.400863] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0)
00:28:13.709 [2024-10-14 14:41:54.400881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.709 [2024-10-14 14:41:54.400887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:13.709 [2024-10-14 14:41:54.407131] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0)
00:28:13.709 [2024-10-14 14:41:54.407148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.709 [2024-10-14 14:41:54.407154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:13.709 [2024-10-14 14:41:54.417598] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0)
00:28:13.709 [2024-10-14 14:41:54.417616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.709 [2024-10-14 14:41:54.417622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:13.709 [2024-10-14 14:41:54.425446] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0)
00:28:13.709 [2024-10-14 14:41:54.425464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.709 [2024-10-14 14:41:54.425470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:13.709 [2024-10-14 14:41:54.432721] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0)
00:28:13.709 [2024-10-14 14:41:54.432739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.709 [2024-10-14 14:41:54.432746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:13.970 [2024-10-14 14:41:54.438135] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0)
00:28:13.970 [2024-10-14 14:41:54.438153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.971 [2024-10-14 14:41:54.438159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:13.971 [2024-10-14 14:41:54.447330] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0)
00:28:13.971 [2024-10-14 14:41:54.447347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.971 [2024-10-14 14:41:54.447353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:13.971 [2024-10-14 14:41:54.456864] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0)
00:28:13.971 [2024-10-14 14:41:54.456882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.971 [2024-10-14 14:41:54.456889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:13.971 [2024-10-14 14:41:54.466866] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0)
00:28:13.971 [2024-10-14 14:41:54.466884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.971 [2024-10-14 14:41:54.466894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0
m:0 dnr:0 00:28:13.971 [2024-10-14 14:41:54.478401] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:13.971 [2024-10-14 14:41:54.478419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.971 [2024-10-14 14:41:54.478425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:13.971 [2024-10-14 14:41:54.489325] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:13.971 [2024-10-14 14:41:54.489343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.971 [2024-10-14 14:41:54.489349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:13.971 [2024-10-14 14:41:54.495929] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:13.971 [2024-10-14 14:41:54.495947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.971 [2024-10-14 14:41:54.495953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.971 [2024-10-14 14:41:54.505609] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:13.971 [2024-10-14 14:41:54.505627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.971 [2024-10-14 14:41:54.505633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:13.971 [2024-10-14 14:41:54.513652] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:13.971 [2024-10-14 14:41:54.513670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.971 [2024-10-14 14:41:54.513676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:13.971 [2024-10-14 14:41:54.521328] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:13.971 [2024-10-14 14:41:54.521345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.971 [2024-10-14 14:41:54.521352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:13.971 [2024-10-14 14:41:54.528903] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:13.971 [2024-10-14 14:41:54.528920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.971 [2024-10-14 14:41:54.528927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.971 [2024-10-14 14:41:54.537767] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:13.971 [2024-10-14 14:41:54.537785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.971 [2024-10-14 14:41:54.537791] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:13.971 [2024-10-14 14:41:54.545957] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:13.971 [2024-10-14 14:41:54.545978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.971 [2024-10-14 14:41:54.545984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:13.971 [2024-10-14 14:41:54.555525] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:13.971 [2024-10-14 14:41:54.555543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.971 [2024-10-14 14:41:54.555549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:13.971 [2024-10-14 14:41:54.560632] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:13.971 [2024-10-14 14:41:54.560650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.971 [2024-10-14 14:41:54.560657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.971 [2024-10-14 14:41:54.568714] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:13.971 [2024-10-14 14:41:54.568732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:13.971 [2024-10-14 14:41:54.568738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:13.971 [2024-10-14 14:41:54.573872] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:13.971 [2024-10-14 14:41:54.573890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.971 [2024-10-14 14:41:54.573896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:13.971 [2024-10-14 14:41:54.581690] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:13.971 [2024-10-14 14:41:54.581708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.971 [2024-10-14 14:41:54.581714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:13.971 [2024-10-14 14:41:54.593185] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:13.971 [2024-10-14 14:41:54.593203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.971 [2024-10-14 14:41:54.593209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.971 [2024-10-14 14:41:54.601841] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:13.971 [2024-10-14 14:41:54.601858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 
lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.971 [2024-10-14 14:41:54.601865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:13.971 [2024-10-14 14:41:54.609732] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:13.971 [2024-10-14 14:41:54.609749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.971 [2024-10-14 14:41:54.609756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:13.971 [2024-10-14 14:41:54.615876] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:13.971 [2024-10-14 14:41:54.615893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.971 [2024-10-14 14:41:54.615900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:13.971 [2024-10-14 14:41:54.620193] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:13.971 [2024-10-14 14:41:54.620209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.971 [2024-10-14 14:41:54.620215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.971 [2024-10-14 14:41:54.625482] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:13.971 [2024-10-14 14:41:54.625499] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.971 [2024-10-14 14:41:54.625505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:13.971 [2024-10-14 14:41:54.634415] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:13.971 [2024-10-14 14:41:54.634432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.971 [2024-10-14 14:41:54.634438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:13.971 [2024-10-14 14:41:54.644550] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:13.971 [2024-10-14 14:41:54.644567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.971 [2024-10-14 14:41:54.644574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:13.971 [2024-10-14 14:41:54.655217] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:13.971 [2024-10-14 14:41:54.655234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.971 [2024-10-14 14:41:54.655241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.971 [2024-10-14 14:41:54.662349] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 
00:28:13.971 [2024-10-14 14:41:54.662366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.971 [2024-10-14 14:41:54.662373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:13.971 [2024-10-14 14:41:54.671395] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:13.971 [2024-10-14 14:41:54.671413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.972 [2024-10-14 14:41:54.671419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:13.972 3674.00 IOPS, 459.25 MiB/s [2024-10-14T12:41:54.699Z] [2024-10-14 14:41:54.682168] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:13.972 [2024-10-14 14:41:54.682185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.972 [2024-10-14 14:41:54.682195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:13.972 [2024-10-14 14:41:54.693760] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:13.972 [2024-10-14 14:41:54.693777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.972 [2024-10-14 14:41:54.693783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.233 [2024-10-14 14:41:54.705152] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.233 [2024-10-14 14:41:54.705169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.233 [2024-10-14 14:41:54.705175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.233 [2024-10-14 14:41:54.714648] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.233 [2024-10-14 14:41:54.714665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.234 [2024-10-14 14:41:54.714672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.234 [2024-10-14 14:41:54.721770] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.234 [2024-10-14 14:41:54.721787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.234 [2024-10-14 14:41:54.721793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.234 [2024-10-14 14:41:54.732788] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.234 [2024-10-14 14:41:54.732805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.234 [2024-10-14 14:41:54.732811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:28:14.234 [2024-10-14 14:41:54.744284] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.234 [2024-10-14 14:41:54.744300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.234 [2024-10-14 14:41:54.744307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.234 [2024-10-14 14:41:54.754796] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.234 [2024-10-14 14:41:54.754813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.234 [2024-10-14 14:41:54.754820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.234 [2024-10-14 14:41:54.763446] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.234 [2024-10-14 14:41:54.763464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.234 [2024-10-14 14:41:54.763470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.234 [2024-10-14 14:41:54.773806] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.234 [2024-10-14 14:41:54.773828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.234 [2024-10-14 14:41:54.773834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.234 [2024-10-14 14:41:54.786527] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.234 [2024-10-14 14:41:54.786545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.234 [2024-10-14 14:41:54.786552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.234 [2024-10-14 14:41:54.798059] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.234 [2024-10-14 14:41:54.798082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.234 [2024-10-14 14:41:54.798089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.234 [2024-10-14 14:41:54.808037] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.234 [2024-10-14 14:41:54.808056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.234 [2024-10-14 14:41:54.808067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.234 [2024-10-14 14:41:54.813271] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.234 [2024-10-14 14:41:54.813289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.234 [2024-10-14 14:41:54.813295] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.234 [2024-10-14 14:41:54.821254] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.234 [2024-10-14 14:41:54.821272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.234 [2024-10-14 14:41:54.821279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.234 [2024-10-14 14:41:54.828207] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.234 [2024-10-14 14:41:54.828224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.234 [2024-10-14 14:41:54.828231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.234 [2024-10-14 14:41:54.836314] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.234 [2024-10-14 14:41:54.836333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.234 [2024-10-14 14:41:54.836339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.234 [2024-10-14 14:41:54.845367] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.234 [2024-10-14 14:41:54.845385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:14.234 [2024-10-14 14:41:54.845392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.234 [2024-10-14 14:41:54.853175] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.234 [2024-10-14 14:41:54.853192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.234 [2024-10-14 14:41:54.853199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.234 [2024-10-14 14:41:54.858431] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.234 [2024-10-14 14:41:54.858450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.234 [2024-10-14 14:41:54.858456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.234 [2024-10-14 14:41:54.867194] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.234 [2024-10-14 14:41:54.867212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.234 [2024-10-14 14:41:54.867218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.234 [2024-10-14 14:41:54.875823] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.234 [2024-10-14 14:41:54.875840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 
lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.234 [2024-10-14 14:41:54.875846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.234 [2024-10-14 14:41:54.884206] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.234 [2024-10-14 14:41:54.884224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.234 [2024-10-14 14:41:54.884231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.234 [2024-10-14 14:41:54.895308] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.234 [2024-10-14 14:41:54.895326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.234 [2024-10-14 14:41:54.895332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.234 [2024-10-14 14:41:54.902715] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.234 [2024-10-14 14:41:54.902733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.234 [2024-10-14 14:41:54.902739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.234 [2024-10-14 14:41:54.911223] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.234 [2024-10-14 14:41:54.911241] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.234 [2024-10-14 14:41:54.911248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.234 [2024-10-14 14:41:54.921332] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.234 [2024-10-14 14:41:54.921350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.234 [2024-10-14 14:41:54.921359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.234 [2024-10-14 14:41:54.930278] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.234 [2024-10-14 14:41:54.930296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.234 [2024-10-14 14:41:54.930303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.234 [2024-10-14 14:41:54.941290] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.234 [2024-10-14 14:41:54.941308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.234 [2024-10-14 14:41:54.941314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.234 [2024-10-14 14:41:54.952188] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 
00:28:14.234 [2024-10-14 14:41:54.952205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.234 [2024-10-14 14:41:54.952212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.496 [2024-10-14 14:41:54.963939] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.496 [2024-10-14 14:41:54.963957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.496 [2024-10-14 14:41:54.963963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.496 [2024-10-14 14:41:54.972961] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.496 [2024-10-14 14:41:54.972979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.496 [2024-10-14 14:41:54.972985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.496 [2024-10-14 14:41:54.983086] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.496 [2024-10-14 14:41:54.983103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.496 [2024-10-14 14:41:54.983109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.496 [2024-10-14 14:41:54.991877] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.496 [2024-10-14 14:41:54.991895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.496 [2024-10-14 14:41:54.991901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.496 [2024-10-14 14:41:54.998634] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.496 [2024-10-14 14:41:54.998652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.496 [2024-10-14 14:41:54.998658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.496 [2024-10-14 14:41:55.007007] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.496 [2024-10-14 14:41:55.007025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.496 [2024-10-14 14:41:55.007031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.496 [2024-10-14 14:41:55.019123] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.496 [2024-10-14 14:41:55.019142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.496 [2024-10-14 14:41:55.019148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:28:14.496 [2024-10-14 14:41:55.030208] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.496 [2024-10-14 14:41:55.030226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.496 [2024-10-14 14:41:55.030232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.496 [2024-10-14 14:41:55.041733] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.496 [2024-10-14 14:41:55.041751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.496 [2024-10-14 14:41:55.041758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.496 [2024-10-14 14:41:55.054704] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.496 [2024-10-14 14:41:55.054722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.496 [2024-10-14 14:41:55.054728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.496 [2024-10-14 14:41:55.061038] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.496 [2024-10-14 14:41:55.061056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.496 [2024-10-14 14:41:55.061067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.496 [2024-10-14 14:41:55.068947] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.496 [2024-10-14 14:41:55.068965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.496 [2024-10-14 14:41:55.068971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.496 [2024-10-14 14:41:55.074090] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.496 [2024-10-14 14:41:55.074108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.496 [2024-10-14 14:41:55.074114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.496 [2024-10-14 14:41:55.081181] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.496 [2024-10-14 14:41:55.081200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.496 [2024-10-14 14:41:55.081209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.496 [2024-10-14 14:41:55.092856] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.496 [2024-10-14 14:41:55.092874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.496 [2024-10-14 14:41:55.092880] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.496 [2024-10-14 14:41:55.102673] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.496 [2024-10-14 14:41:55.102692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.496 [2024-10-14 14:41:55.102699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.496 [2024-10-14 14:41:55.111276] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.496 [2024-10-14 14:41:55.111294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.496 [2024-10-14 14:41:55.111300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.496 [2024-10-14 14:41:55.116431] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.496 [2024-10-14 14:41:55.116449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.496 [2024-10-14 14:41:55.116455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.496 [2024-10-14 14:41:55.124802] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.496 [2024-10-14 14:41:55.124819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:14.496 [2024-10-14 14:41:55.124825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.496 [2024-10-14 14:41:55.133781] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.496 [2024-10-14 14:41:55.133799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.496 [2024-10-14 14:41:55.133805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.496 [2024-10-14 14:41:55.143696] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.496 [2024-10-14 14:41:55.143714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.496 [2024-10-14 14:41:55.143720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.496 [2024-10-14 14:41:55.154481] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.496 [2024-10-14 14:41:55.154499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.496 [2024-10-14 14:41:55.154505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.496 [2024-10-14 14:41:55.163798] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.496 [2024-10-14 14:41:55.163822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 
lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.496 [2024-10-14 14:41:55.163828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.496 [2024-10-14 14:41:55.168602] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.496 [2024-10-14 14:41:55.168620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.496 [2024-10-14 14:41:55.168626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.496 [2024-10-14 14:41:55.178078] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.496 [2024-10-14 14:41:55.178096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.496 [2024-10-14 14:41:55.178102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.496 [2024-10-14 14:41:55.187809] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.496 [2024-10-14 14:41:55.187827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.496 [2024-10-14 14:41:55.187833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.496 [2024-10-14 14:41:55.196365] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.497 [2024-10-14 14:41:55.196383] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.497 [2024-10-14 14:41:55.196389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.497 [2024-10-14 14:41:55.203867] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.497 [2024-10-14 14:41:55.203885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.497 [2024-10-14 14:41:55.203892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.497 [2024-10-14 14:41:55.211689] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.497 [2024-10-14 14:41:55.211707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.497 [2024-10-14 14:41:55.211713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.497 [2024-10-14 14:41:55.216927] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.497 [2024-10-14 14:41:55.216946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.497 [2024-10-14 14:41:55.216953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.760 [2024-10-14 14:41:55.225726] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 
00:28:14.760 [2024-10-14 14:41:55.225744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.760 [2024-10-14 14:41:55.225751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.760 [2024-10-14 14:41:55.233736] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.760 [2024-10-14 14:41:55.233755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.760 [2024-10-14 14:41:55.233761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.760 [2024-10-14 14:41:55.245188] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.760 [2024-10-14 14:41:55.245206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.760 [2024-10-14 14:41:55.245212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.760 [2024-10-14 14:41:55.252970] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.760 [2024-10-14 14:41:55.252988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.760 [2024-10-14 14:41:55.252994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.760 [2024-10-14 14:41:55.258530] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.760 [2024-10-14 14:41:55.258549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.760 [2024-10-14 14:41:55.258555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.760 [2024-10-14 14:41:55.263876] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.760 [2024-10-14 14:41:55.263894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.760 [2024-10-14 14:41:55.263900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.760 [2024-10-14 14:41:55.269206] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.760 [2024-10-14 14:41:55.269223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.760 [2024-10-14 14:41:55.269230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.760 [2024-10-14 14:41:55.279472] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.760 [2024-10-14 14:41:55.279490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.760 [2024-10-14 14:41:55.279496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:28:14.760 [2024-10-14 14:41:55.286887] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.760 [2024-10-14 14:41:55.286905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.760 [2024-10-14 14:41:55.286911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.760 [2024-10-14 14:41:55.295136] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.760 [2024-10-14 14:41:55.295154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.760 [2024-10-14 14:41:55.295164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.760 [2024-10-14 14:41:55.307343] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.760 [2024-10-14 14:41:55.307361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.760 [2024-10-14 14:41:55.307367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.760 [2024-10-14 14:41:55.313273] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.760 [2024-10-14 14:41:55.313292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.760 [2024-10-14 14:41:55.313299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.760 [2024-10-14 14:41:55.322038] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.760 [2024-10-14 14:41:55.322055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.760 [2024-10-14 14:41:55.322066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.760 [2024-10-14 14:41:55.325181] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.760 [2024-10-14 14:41:55.325199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.760 [2024-10-14 14:41:55.325205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.760 [2024-10-14 14:41:55.335772] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.760 [2024-10-14 14:41:55.335790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.760 [2024-10-14 14:41:55.335796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.760 [2024-10-14 14:41:55.348149] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.760 [2024-10-14 14:41:55.348167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.760 [2024-10-14 14:41:55.348173] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.760 [2024-10-14 14:41:55.356652] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.760 [2024-10-14 14:41:55.356671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.760 [2024-10-14 14:41:55.356677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.760 [2024-10-14 14:41:55.363258] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.760 [2024-10-14 14:41:55.363276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.760 [2024-10-14 14:41:55.363282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.760 [2024-10-14 14:41:55.374558] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.760 [2024-10-14 14:41:55.374581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.760 [2024-10-14 14:41:55.374587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.760 [2024-10-14 14:41:55.385043] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.760 [2024-10-14 14:41:55.385067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:14.760 [2024-10-14 14:41:55.385074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.760 [2024-10-14 14:41:55.397078] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.760 [2024-10-14 14:41:55.397097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.760 [2024-10-14 14:41:55.397103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.760 [2024-10-14 14:41:55.408252] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.760 [2024-10-14 14:41:55.408270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.760 [2024-10-14 14:41:55.408277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.760 [2024-10-14 14:41:55.419589] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.760 [2024-10-14 14:41:55.419608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.760 [2024-10-14 14:41:55.419614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.760 [2024-10-14 14:41:55.430499] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.761 [2024-10-14 14:41:55.430517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 
lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.761 [2024-10-14 14:41:55.430524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.761 [2024-10-14 14:41:55.441492] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.761 [2024-10-14 14:41:55.441510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.761 [2024-10-14 14:41:55.441517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.761 [2024-10-14 14:41:55.449022] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.761 [2024-10-14 14:41:55.449041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.761 [2024-10-14 14:41:55.449047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.761 [2024-10-14 14:41:55.459294] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.761 [2024-10-14 14:41:55.459312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.761 [2024-10-14 14:41:55.459318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:14.761 [2024-10-14 14:41:55.468082] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.761 [2024-10-14 14:41:55.468101] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.761 [2024-10-14 14:41:55.468107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:14.761 [2024-10-14 14:41:55.477645] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.761 [2024-10-14 14:41:55.477663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.761 [2024-10-14 14:41:55.477670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:14.761 [2024-10-14 14:41:55.484862] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:14.761 [2024-10-14 14:41:55.484880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.761 [2024-10-14 14:41:55.484887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.024 [2024-10-14 14:41:55.495771] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:15.024 [2024-10-14 14:41:55.495790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.024 [2024-10-14 14:41:55.495796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:15.024 [2024-10-14 14:41:55.504955] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 
00:28:15.024 [2024-10-14 14:41:55.504973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.024 [2024-10-14 14:41:55.504980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:15.024 [2024-10-14 14:41:55.509970] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:15.024 [2024-10-14 14:41:55.509989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.024 [2024-10-14 14:41:55.509995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:15.024 [2024-10-14 14:41:55.518146] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:15.024 [2024-10-14 14:41:55.518165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.024 [2024-10-14 14:41:55.518171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.024 [2024-10-14 14:41:55.527694] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:15.024 [2024-10-14 14:41:55.527713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.024 [2024-10-14 14:41:55.527720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:15.024 [2024-10-14 14:41:55.533223] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:15.024 [2024-10-14 14:41:55.533241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.024 [2024-10-14 14:41:55.533251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:15.024 [2024-10-14 14:41:55.541507] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:15.024 [2024-10-14 14:41:55.541525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.024 [2024-10-14 14:41:55.541531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:15.024 [2024-10-14 14:41:55.546430] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:15.024 [2024-10-14 14:41:55.546448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.024 [2024-10-14 14:41:55.546455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.024 [2024-10-14 14:41:55.558173] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:15.024 [2024-10-14 14:41:55.558191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.024 [2024-10-14 14:41:55.558198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:28:15.024 [2024-10-14 14:41:55.564374] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:15.024 [2024-10-14 14:41:55.564392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.024 [2024-10-14 14:41:55.564399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:15.024 [2024-10-14 14:41:55.573840] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:15.024 [2024-10-14 14:41:55.573858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.024 [2024-10-14 14:41:55.573864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:15.024 [2024-10-14 14:41:55.583708] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:15.024 [2024-10-14 14:41:55.583726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.024 [2024-10-14 14:41:55.583732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.024 [2024-10-14 14:41:55.591812] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:15.024 [2024-10-14 14:41:55.591830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.024 [2024-10-14 14:41:55.591837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:15.024 [2024-10-14 14:41:55.603465] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:15.024 [2024-10-14 14:41:55.603483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.024 [2024-10-14 14:41:55.603489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:15.024 [2024-10-14 14:41:55.613603] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:15.024 [2024-10-14 14:41:55.613625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.024 [2024-10-14 14:41:55.613631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:15.024 [2024-10-14 14:41:55.623409] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:15.024 [2024-10-14 14:41:55.623428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.024 [2024-10-14 14:41:55.623434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.024 [2024-10-14 14:41:55.631802] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:15.024 [2024-10-14 14:41:55.631820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.024 [2024-10-14 14:41:55.631826] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:15.024 [2024-10-14 14:41:55.636915] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:15.024 [2024-10-14 14:41:55.636932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.024 [2024-10-14 14:41:55.636938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:15.025 [2024-10-14 14:41:55.645268] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:15.025 [2024-10-14 14:41:55.645286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.025 [2024-10-14 14:41:55.645292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:15.025 [2024-10-14 14:41:55.656436] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:15.025 [2024-10-14 14:41:55.656453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.025 [2024-10-14 14:41:55.656459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:15.025 [2024-10-14 14:41:55.667588] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0) 00:28:15.025 [2024-10-14 14:41:55.667606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
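The `data digest error` records above come from NVMe/TCP data-digest verification: the host recomputes a CRC32C over each received data PDU and compares it against the DDGST field, and this test run deliberately corrupts that computation (via the `accel_error_inject_error -o crc32c -t corrupt` RPC visible later in the log), so each affected READ completes with a transient transport error (00/22). As an illustrative aside only (not SPDK's implementation, which uses accelerated CRC32C), a minimal pure-Python CRC-32C (Castagnoli) sketch:

```python
def crc32c(data: bytes, crc: int = 0) -> int:
    """Bitwise CRC-32C (Castagnoli): reflected, polynomial 0x1EDC6F41
    (reversed form 0x82F63B78), init and final XOR 0xFFFFFFFF.
    Slow but dependency-free; for illustration only."""
    crc ^= 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

# Standard CRC-32C check value:
assert crc32c(b"123456789") == 0xE3069283
# Any single corrupted bit changes the digest, so the receiver flags a
# data digest error and the command completes with a transient
# transport error, as seen in the log records above.
assert crc32c(b"123456788") != 0xE3069283
```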
00:28:15.025 [2024-10-14 14:41:55.667612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:15.025 [2024-10-14 14:41:55.676604] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0)
00:28:15.025 [2024-10-14 14:41:55.676622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:15.025 [2024-10-14 14:41:55.676628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:15.025 3576.00 IOPS, 447.00 MiB/s [2024-10-14T12:41:55.752Z] [2024-10-14 14:41:55.688352] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdf08f0)
00:28:15.025 [2024-10-14 14:41:55.688370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:15.025 [2024-10-14 14:41:55.688377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:15.025
00:28:15.025 Latency(us)
00:28:15.025 [2024-10-14T12:41:55.752Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:15.025 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:28:15.025 nvme0n1 : 2.01 3572.55 446.57 0.00 0.00 4472.93 614.40 14636.37
00:28:15.025 [2024-10-14T12:41:55.752Z] ===================================================================================================================
00:28:15.025 [2024-10-14T12:41:55.752Z] Total : 3572.55 446.57 0.00 0.00 4472.93 614.40 14636.37
00:28:15.025 {
00:28:15.025 "results": [
00:28:15.025 {
00:28:15.025 "job": "nvme0n1",
00:28:15.025 "core_mask": "0x2",
00:28:15.025 "workload": "randread",
00:28:15.025 "status": "finished",
00:28:15.025 "queue_depth": 16,
00:28:15.025 "io_size": 131072,
00:28:15.025 "runtime": 2.006411,
00:28:15.025 "iops": 3572.548196755301,
00:28:15.025 "mibps": 446.5685245944126,
00:28:15.025 "io_failed": 0,
00:28:15.025 "io_timeout": 0,
00:28:15.025 "avg_latency_us": 4472.933571428571,
00:28:15.025 "min_latency_us": 614.4,
00:28:15.025 "max_latency_us": 14636.373333333333
00:28:15.025 }
00:28:15.025 ],
00:28:15.025 "core_count": 1
00:28:15.025 }
00:28:15.025 14:41:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:15.025 14:41:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:15.025 14:41:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:15.025 | .driver_specific
00:28:15.025 | .nvme_error
00:28:15.025 | .status_code
00:28:15.025 | .command_transient_transport_error'
00:28:15.025 14:41:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:15.286 14:41:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 231 > 0 ))
00:28:15.286 14:41:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3562588
00:28:15.286 14:41:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 3562588 ']'
00:28:15.286 14:41:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 3562588
00:28:15.286 14:41:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:28:15.286 14:41:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:28:15.286 14:41:55
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3562588
00:28:15.286 14:41:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:28:15.286 14:41:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:28:15.286 14:41:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3562588'
killing process with pid 3562588
00:28:15.286 14:41:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 3562588
Received shutdown signal, test time was about 2.000000 seconds
00:28:15.286
00:28:15.286 Latency(us)
00:28:15.286 [2024-10-14T12:41:56.013Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:15.286 [2024-10-14T12:41:56.013Z] ===================================================================================================================
00:28:15.286 [2024-10-14T12:41:56.013Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:15.286 14:41:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 3562588
00:28:15.547 14:41:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:28:15.547 14:41:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:15.547 14:41:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:28:15.547 14:41:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:28:15.547 14:41:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:28:15.547 14:41:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3563439
00:28:15.547 14:41:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3563439 /var/tmp/bperf.sock
00:28:15.547 14:41:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 3563439 ']'
00:28:15.547 14:41:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:28:15.547 14:41:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:15.547 14:41:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:28:15.547 14:41:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:15.547 14:41:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:28:15.547 14:41:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:15.547 [2024-10-14 14:41:56.108645] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization...
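bdevperf reports both IOPS and MiB/s, and the two are tied together by the job's I/O size. Using the figures from the randread results JSON earlier in this log (io_size 131072 bytes, i.e. 128 KiB per I/O), a small sketch confirming the conversion and the implied total I/O count:

```python
# Values copied verbatim from the randread results JSON in this log.
results = {
    "job": "nvme0n1",
    "io_size": 131072,           # bytes per I/O (128 KiB)
    "runtime": 2.006411,         # seconds
    "iops": 3572.548196755301,
    "mibps": 446.5685245944126,
}

# MiB/s = IOPS * io_size / 2^20
derived_mibps = results["iops"] * results["io_size"] / (1 << 20)
assert abs(derived_mibps - results["mibps"]) < 1e-9

# Total I/Os completed over the ~2 s run: iops * runtime.
total_ios = results["iops"] * results["runtime"]
print(round(derived_mibps, 2), round(total_ios))  # → 446.57 7168
```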
00:28:15.547 [2024-10-14 14:41:56.108700] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3563439 ]
00:28:15.547 [2024-10-14 14:41:56.185768] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:15.547 [2024-10-14 14:41:56.214856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:28:16.487 14:41:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:28:16.487 14:41:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:28:16.487 14:41:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:16.487 14:41:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:16.487 14:41:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:28:16.487 14:41:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:16.487 14:41:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:16.487 14:41:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:16.487 14:41:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:16.488 14:41:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:16.747 nvme0n1
00:28:16.747 14:41:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:28:16.747 14:41:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:16.747 14:41:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:16.747 14:41:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:16.747 14:41:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:28:16.747 14:41:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:28:17.008 Running I/O for 2 seconds...
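Earlier in this log, `get_transient_errcount` pipes `bdev_get_iostat` output through `jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'` and asserts the count is positive (`(( 231 > 0 ))`). The same extraction in Python, against a hypothetical iostat snippet that reproduces only the nesting exercised by that jq path (the real RPC output carries many more fields; the 231 is the count observed in this run):

```python
import json

# Hypothetical bdev_get_iostat excerpt: only the fields named by the
# jq filter in the log are shown here.
iostat_json = """
{
  "bdevs": [
    {
      "name": "nvme0n1",
      "driver_specific": {
        "nvme_error": {
          "status_code": {
            "command_transient_transport_error": 231
          }
        }
      }
    }
  ]
}
"""

stat = json.loads(iostat_json)

# Equivalent of: .bdevs[0] | .driver_specific | .nvme_error
#                | .status_code | .command_transient_transport_error
errcount = (stat["bdevs"][0]["driver_specific"]
            ["nvme_error"]["status_code"]
            ["command_transient_transport_error"])

# The test passes when at least one injected digest error was counted.
assert errcount > 0
print(errcount)  # → 231
```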
00:28:17.008 [2024-10-14 14:41:57.511944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:17.008 [2024-10-14 14:41:57.512332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.008 [2024-10-14 14:41:57.512358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:17.008 [2024-10-14 14:41:57.524318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:17.008 [2024-10-14 14:41:57.524649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.008 [2024-10-14 14:41:57.524667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:17.008 [2024-10-14 14:41:57.536648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:17.008 [2024-10-14 14:41:57.536971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.008 [2024-10-14 14:41:57.536988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:17.008 [2024-10-14 14:41:57.548997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:17.008 [2024-10-14 14:41:57.549184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.008 [2024-10-14 14:41:57.549200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:17.008 [2024-10-14 14:41:57.561291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:17.008 [2024-10-14 14:41:57.561580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.008 [2024-10-14 14:41:57.561596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:17.008 [2024-10-14 14:41:57.573585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:17.008 [2024-10-14 14:41:57.573920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.008 [2024-10-14 14:41:57.573936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:17.008 [2024-10-14 14:41:57.585853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:17.008 [2024-10-14 14:41:57.586148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.008 [2024-10-14 14:41:57.586165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:17.008 [2024-10-14 14:41:57.598172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:17.008 [2024-10-14 14:41:57.598488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.008 [2024-10-14 14:41:57.598504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:17.008 [2024-10-14 14:41:57.610447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:17.008 [2024-10-14 14:41:57.610743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.008 [2024-10-14 14:41:57.610760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:17.008 [2024-10-14 14:41:57.622869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:17.008 [2024-10-14 14:41:57.623195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.008 [2024-10-14 14:41:57.623211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:17.008 [2024-10-14 14:41:57.635133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:17.008 [2024-10-14 14:41:57.635422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.008 [2024-10-14 14:41:57.635439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:17.008 [2024-10-14 14:41:57.647435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:17.008 [2024-10-14 14:41:57.647726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.008 [2024-10-14 14:41:57.647742] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:17.008 [2024-10-14 14:41:57.659704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:17.008 [2024-10-14 14:41:57.659993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.008 [2024-10-14 14:41:57.660009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:17.008 [2024-10-14 14:41:57.671986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:17.008 [2024-10-14 14:41:57.672308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.008 [2024-10-14 14:41:57.672324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:17.008 [2024-10-14 14:41:57.684258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:17.008 [2024-10-14 14:41:57.684557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.008 [2024-10-14 14:41:57.684573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:17.008 [2024-10-14 14:41:57.696507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:17.008 [2024-10-14 14:41:57.696819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.008 
[2024-10-14 14:41:57.696835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:17.008 [2024-10-14 14:41:57.708787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:17.008 [2024-10-14 14:41:57.709092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.008 [2024-10-14 14:41:57.709113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:17.008 [2024-10-14 14:41:57.721072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:17.008 [2024-10-14 14:41:57.721383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.008 [2024-10-14 14:41:57.721399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:17.008 [2024-10-14 14:41:57.733351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:17.008 [2024-10-14 14:41:57.733637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.008 [2024-10-14 14:41:57.733653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:17.269 [2024-10-14 14:41:57.745754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:17.269 [2024-10-14 14:41:57.746055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5846 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:28:17.269 [2024-10-14 14:41:57.746075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:17.270 [2024-10-14 14:41:57.758016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:17.270 [2024-10-14 14:41:57.758344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.270 [2024-10-14 14:41:57.758360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:17.270 [2024-10-14 14:41:57.770284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:17.270 [2024-10-14 14:41:57.770573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.270 [2024-10-14 14:41:57.770588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:17.270 [2024-10-14 14:41:57.782522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:17.270 [2024-10-14 14:41:57.782816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.270 [2024-10-14 14:41:57.782831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:17.270 [2024-10-14 14:41:57.794798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:17.270 [2024-10-14 14:41:57.795107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 
lba:7203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.270 [2024-10-14 14:41:57.795123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:17.270 [2024-10-14 14:41:57.807096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:17.270 [2024-10-14 14:41:57.807409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.270 [2024-10-14 14:41:57.807425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:17.270 [2024-10-14 14:41:57.819355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:17.270 [2024-10-14 14:41:57.819641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.270 [2024-10-14 14:41:57.819657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:17.270 [2024-10-14 14:41:57.831607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:17.270 [2024-10-14 14:41:57.831885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.270 [2024-10-14 14:41:57.831901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:17.270 [2024-10-14 14:41:57.843867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:17.270 [2024-10-14 14:41:57.844176] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.270 [2024-10-14 14:41:57.844192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:17.270 [2024-10-14 14:41:57.856122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:17.270 [2024-10-14 14:41:57.856455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.270 [2024-10-14 14:41:57.856470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:17.270 [2024-10-14 14:41:57.868388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:17.270 [2024-10-14 14:41:57.868688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.270 [2024-10-14 14:41:57.868704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:17.270 [2024-10-14 14:41:57.880636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:17.270 [2024-10-14 14:41:57.880921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.270 [2024-10-14 14:41:57.880937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:17.270 [2024-10-14 14:41:57.892877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:17.270 [2024-10-14 14:41:57.893194] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.270 [2024-10-14 14:41:57.893210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:17.270 [2024-10-14 14:41:57.905128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:17.270 [2024-10-14 14:41:57.905409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.270 [2024-10-14 14:41:57.905425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:17.270 [2024-10-14 14:41:57.917372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:17.270 [2024-10-14 14:41:57.917749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.270 [2024-10-14 14:41:57.917765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:17.270 [2024-10-14 14:41:57.929634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:17.270 [2024-10-14 14:41:57.929922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.270 [2024-10-14 14:41:57.929938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:17.270 [2024-10-14 14:41:57.941901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:17.270 [2024-10-14 
14:41:57.942171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.270 [2024-10-14 14:41:57.942187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:17.270 [2024-10-14 14:41:57.954146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:17.270 [2024-10-14 14:41:57.954435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.270 [2024-10-14 14:41:57.954450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:17.270 [2024-10-14 14:41:57.966384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:17.270 [2024-10-14 14:41:57.966682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.270 [2024-10-14 14:41:57.966698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:17.270 [2024-10-14 14:41:57.978629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:17.270 [2024-10-14 14:41:57.978951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.270 [2024-10-14 14:41:57.978968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:17.270 [2024-10-14 14:41:57.990884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with 
pdu=0x2000166fda78 00:28:17.270 [2024-10-14 14:41:57.991066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.270 [2024-10-14 14:41:57.991082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:17.531 [2024-10-14 14:41:58.003136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:17.531 [2024-10-14 14:41:58.003414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.531 [2024-10-14 14:41:58.003429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:17.531 [2024-10-14 14:41:58.015387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:17.531 [2024-10-14 14:41:58.015570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.531 [2024-10-14 14:41:58.015585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:17.531 [2024-10-14 14:41:58.027625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:17.531 [2024-10-14 14:41:58.027962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.531 [2024-10-14 14:41:58.027978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:17.531 [2024-10-14 14:41:58.039893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:17.531 [2024-10-14 14:41:58.040173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.531 [2024-10-14 14:41:58.040189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:17.531 [2024-10-14 14:41:58.052143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:17.531 [2024-10-14 14:41:58.052443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.531 [2024-10-14 14:41:58.052458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:17.531 [2024-10-14 14:41:58.064363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:17.531 [2024-10-14 14:41:58.064662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.531 [2024-10-14 14:41:58.064678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:17.531 [2024-10-14 14:41:58.076631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:17.531 [2024-10-14 14:41:58.076995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.531 [2024-10-14 14:41:58.077011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:17.531 [2024-10-14 14:41:58.088852] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:17.531 [2024-10-14 14:41:58.089154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.531 [2024-10-14 14:41:58.089170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:17.532 [2024-10-14 14:41:58.101327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:17.532 [2024-10-14 14:41:58.101643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.532 [2024-10-14 14:41:58.101659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:17.532 [2024-10-14 14:41:58.113548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:17.532 [2024-10-14 14:41:58.113844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.532 [2024-10-14 14:41:58.113860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:17.532 [2024-10-14 14:41:58.125833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:17.532 [2024-10-14 14:41:58.126155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.532 [2024-10-14 14:41:58.126171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:17.532 
[2024-10-14 14:41:58.138067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:17.532 [2024-10-14 14:41:58.138359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.532 [2024-10-14 14:41:58.138378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:17.532 [2024-10-14 14:41:58.150347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:17.532 [2024-10-14 14:41:58.150667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.532 [2024-10-14 14:41:58.150682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:17.532 [2024-10-14 14:41:58.162584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:17.532 [2024-10-14 14:41:58.162884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.532 [2024-10-14 14:41:58.162900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:17.532 [2024-10-14 14:41:58.174853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:17.532 [2024-10-14 14:41:58.175032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.532 [2024-10-14 14:41:58.175047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:007f p:0 m:0 dnr:0 00:28:17.532 [2024-10-14 14:41:58.187083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:17.532 [2024-10-14 14:41:58.187394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.532 [2024-10-14 14:41:58.187410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:17.532 [2024-10-14 14:41:58.199355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:17.532 [2024-10-14 14:41:58.199644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.532 [2024-10-14 14:41:58.199660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:17.532 [2024-10-14 14:41:58.211603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:17.532 [2024-10-14 14:41:58.211934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.532 [2024-10-14 14:41:58.211950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:17.532 [2024-10-14 14:41:58.223856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:17.532 [2024-10-14 14:41:58.224036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.532 [2024-10-14 14:41:58.224052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:17.532 [2024-10-14 14:41:58.236132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:17.532 [2024-10-14 14:41:58.236424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.532 [2024-10-14 14:41:58.236440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:17.532 [2024-10-14 14:41:58.248363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:17.532 [2024-10-14 14:41:58.248659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.532 [2024-10-14 14:41:58.248675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:17.532 [2024-10-14 14:41:58.260619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:17.532 [2024-10-14 14:41:58.260910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.532 [2024-10-14 14:41:58.260925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:17.793 [2024-10-14 14:41:58.272870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:17.793 [2024-10-14 14:41:58.273183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.793 [2024-10-14 14:41:58.273199] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:17.793 [2024-10-14 14:41:58.285122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:17.793 [2024-10-14 14:41:58.285436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.793 [2024-10-14 14:41:58.285451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:17.793 [2024-10-14 14:41:58.297380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:17.793 [2024-10-14 14:41:58.297689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.793 [2024-10-14 14:41:58.297705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:17.793 [2024-10-14 14:41:58.309629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:17.793 [2024-10-14 14:41:58.309948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.793 [2024-10-14 14:41:58.309963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:17.793 [2024-10-14 14:41:58.321874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:17.794 [2024-10-14 14:41:58.322199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.794 [2024-10-14 14:41:58.322214] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:17.794 [2024-10-14 14:41:58.334131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:17.794 [2024-10-14 14:41:58.334425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.794 [2024-10-14 14:41:58.334440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:17.794 [2024-10-14 14:41:58.346356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:17.794 [2024-10-14 14:41:58.346637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.794 [2024-10-14 14:41:58.346653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:17.794 [2024-10-14 14:41:58.358589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:17.794 [2024-10-14 14:41:58.358960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.794 [2024-10-14 14:41:58.358975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:17.794 [2024-10-14 14:41:58.370845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:17.794 [2024-10-14 14:41:58.371196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.794 [2024-10-14 
14:41:58.371212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:17.794 [2024-10-14 14:41:58.383081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:17.794 [2024-10-14 14:41:58.383393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.794 [2024-10-14 14:41:58.383409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:17.794 [2024-10-14 14:41:58.395328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:17.794 [2024-10-14 14:41:58.395698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.794 [2024-10-14 14:41:58.395714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:17.794 [2024-10-14 14:41:58.407584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:17.794 [2024-10-14 14:41:58.407881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.794 [2024-10-14 14:41:58.407897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:17.794 [2024-10-14 14:41:58.419831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:17.794 [2024-10-14 14:41:58.420119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17611 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:28:17.794 [2024-10-14 14:41:58.420135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:17.794 [2024-10-14 14:41:58.432108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:17.794 [2024-10-14 14:41:58.432399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.794 [2024-10-14 14:41:58.432415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:17.794 [2024-10-14 14:41:58.444351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:17.794 [2024-10-14 14:41:58.444656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.794 [2024-10-14 14:41:58.444672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:17.794 [2024-10-14 14:41:58.456592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:17.794 [2024-10-14 14:41:58.456933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.794 [2024-10-14 14:41:58.456949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:17.794 [2024-10-14 14:41:58.468824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:17.794 [2024-10-14 14:41:58.469136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20910 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.794 [2024-10-14 14:41:58.469152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:17.794 [2024-10-14 14:41:58.481057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:17.794 [2024-10-14 14:41:58.481370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.794 [2024-10-14 14:41:58.481386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:17.794 [2024-10-14 14:41:58.493301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:17.794 [2024-10-14 14:41:58.493589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.794 [2024-10-14 14:41:58.493605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:17.794 20708.00 IOPS, 80.89 MiB/s [2024-10-14T12:41:58.521Z] [2024-10-14 14:41:58.505547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:17.794 [2024-10-14 14:41:58.505843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.794 [2024-10-14 14:41:58.505859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:17.794 [2024-10-14 14:41:58.517782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:17.794 [2024-10-14 14:41:58.518074] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:17.794 [2024-10-14 14:41:58.518090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:18.056 [2024-10-14 14:41:58.530045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:18.056 [2024-10-14 14:41:58.530353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.056 [2024-10-14 14:41:58.530369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:18.056 [2024-10-14 14:41:58.542298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:18.056 [2024-10-14 14:41:58.542601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.056 [2024-10-14 14:41:58.542617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:18.056 [2024-10-14 14:41:58.554552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:18.056 [2024-10-14 14:41:58.554851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.056 [2024-10-14 14:41:58.554867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:18.056 [2024-10-14 14:41:58.566804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:18.056 
[2024-10-14 14:41:58.567089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.056 [2024-10-14 14:41:58.567107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:18.056 [2024-10-14 14:41:58.579058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:18.056 [2024-10-14 14:41:58.579384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.056 [2024-10-14 14:41:58.579399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:18.056 [2024-10-14 14:41:58.591335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:18.056 [2024-10-14 14:41:58.591613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.056 [2024-10-14 14:41:58.591629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:18.056 [2024-10-14 14:41:58.603583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:18.056 [2024-10-14 14:41:58.603761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.056 [2024-10-14 14:41:58.603777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:18.056 [2024-10-14 14:41:58.615785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with 
pdu=0x2000166fda78 00:28:18.056 [2024-10-14 14:41:58.616079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.056 [2024-10-14 14:41:58.616095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:18.056 [2024-10-14 14:41:58.628152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:18.056 [2024-10-14 14:41:58.628436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.056 [2024-10-14 14:41:58.628452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:18.056 [2024-10-14 14:41:58.640441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:18.056 [2024-10-14 14:41:58.640759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.056 [2024-10-14 14:41:58.640775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:18.056 [2024-10-14 14:41:58.652676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:18.056 [2024-10-14 14:41:58.652999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.056 [2024-10-14 14:41:58.653015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:18.056 [2024-10-14 14:41:58.664883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:18.056 [2024-10-14 14:41:58.665188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.056 [2024-10-14 14:41:58.665204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:18.056 [2024-10-14 14:41:58.677144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:18.056 [2024-10-14 14:41:58.677463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.056 [2024-10-14 14:41:58.677479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:18.056 [2024-10-14 14:41:58.689358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:18.056 [2024-10-14 14:41:58.689654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.056 [2024-10-14 14:41:58.689670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:18.056 [2024-10-14 14:41:58.701627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:18.056 [2024-10-14 14:41:58.701918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.056 [2024-10-14 14:41:58.701934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:18.056 [2024-10-14 14:41:58.713860] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:18.056 [2024-10-14 14:41:58.714162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.056 [2024-10-14 14:41:58.714178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:18.056 [2024-10-14 14:41:58.726112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:18.056 [2024-10-14 14:41:58.726432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.056 [2024-10-14 14:41:58.726448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:18.056 [2024-10-14 14:41:58.738360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:18.056 [2024-10-14 14:41:58.738672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.056 [2024-10-14 14:41:58.738687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:18.056 [2024-10-14 14:41:58.750616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:18.056 [2024-10-14 14:41:58.750928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.056 [2024-10-14 14:41:58.750944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 
00:28:18.056 [2024-10-14 14:41:58.762959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:18.056 [2024-10-14 14:41:58.763279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.056 [2024-10-14 14:41:58.763295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:18.056 [2024-10-14 14:41:58.775225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:18.056 [2024-10-14 14:41:58.775533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.056 [2024-10-14 14:41:58.775550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:18.318 [2024-10-14 14:41:58.787479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:18.318 [2024-10-14 14:41:58.787769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.318 [2024-10-14 14:41:58.787785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:18.318 [2024-10-14 14:41:58.799735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:18.318 [2024-10-14 14:41:58.800015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.318 [2024-10-14 14:41:58.800030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:18.318 [2024-10-14 14:41:58.811978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:18.318 [2024-10-14 14:41:58.812276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.318 [2024-10-14 14:41:58.812292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:18.318 [2024-10-14 14:41:58.824258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:18.318 [2024-10-14 14:41:58.824564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.318 [2024-10-14 14:41:58.824580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:18.318 [2024-10-14 14:41:58.836514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:18.318 [2024-10-14 14:41:58.836822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.318 [2024-10-14 14:41:58.836837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:18.318 [2024-10-14 14:41:58.848769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:18.319 [2024-10-14 14:41:58.849073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.319 [2024-10-14 14:41:58.849089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:18.319 [2024-10-14 14:41:58.861012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:18.319 [2024-10-14 14:41:58.861318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.319 [2024-10-14 14:41:58.861334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:18.319 [2024-10-14 14:41:58.873267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:18.319 [2024-10-14 14:41:58.873548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.319 [2024-10-14 14:41:58.873565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:18.319 [2024-10-14 14:41:58.885523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:18.319 [2024-10-14 14:41:58.885836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.319 [2024-10-14 14:41:58.885855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:18.319 [2024-10-14 14:41:58.897782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:18.319 [2024-10-14 14:41:58.898075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.319 [2024-10-14 14:41:58.898091] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:18.319 [2024-10-14 14:41:58.910023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:18.319 [2024-10-14 14:41:58.910322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.319 [2024-10-14 14:41:58.910338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:18.319 [2024-10-14 14:41:58.922394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:18.319 [2024-10-14 14:41:58.922709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.319 [2024-10-14 14:41:58.922725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:18.319 [2024-10-14 14:41:58.934653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:18.319 [2024-10-14 14:41:58.934934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.319 [2024-10-14 14:41:58.934950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:18.319 [2024-10-14 14:41:58.946914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:18.319 [2024-10-14 14:41:58.947283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.319 [2024-10-14 
14:41:58.947299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:18.319 [2024-10-14 14:41:58.959180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:18.319 [2024-10-14 14:41:58.959495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.319 [2024-10-14 14:41:58.959511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:18.319 [2024-10-14 14:41:58.971435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:18.319 [2024-10-14 14:41:58.971727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.319 [2024-10-14 14:41:58.971743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:18.319 [2024-10-14 14:41:58.983703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:18.319 [2024-10-14 14:41:58.984027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.319 [2024-10-14 14:41:58.984043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:18.319 [2024-10-14 14:41:58.995969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:18.319 [2024-10-14 14:41:58.996314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15880 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:28:18.319 [2024-10-14 14:41:58.996333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:18.319 [2024-10-14 14:41:59.008234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:18.319 [2024-10-14 14:41:59.008530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.319 [2024-10-14 14:41:59.008546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:18.319 [2024-10-14 14:41:59.020511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:18.319 [2024-10-14 14:41:59.020831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.319 [2024-10-14 14:41:59.020847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:18.319 [2024-10-14 14:41:59.032766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:18.319 [2024-10-14 14:41:59.033094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.319 [2024-10-14 14:41:59.033111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:18.319 [2024-10-14 14:41:59.045024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:18.319 [2024-10-14 14:41:59.045395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:736 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.319 [2024-10-14 14:41:59.045411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:18.582 [2024-10-14 14:41:59.057278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:18.582 [2024-10-14 14:41:59.057554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.582 [2024-10-14 14:41:59.057569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:18.582 [2024-10-14 14:41:59.069531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:18.582 [2024-10-14 14:41:59.069709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.582 [2024-10-14 14:41:59.069724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:18.582 [2024-10-14 14:41:59.081780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:18.582 [2024-10-14 14:41:59.082061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.582 [2024-10-14 14:41:59.082082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:18.582 [2024-10-14 14:41:59.094033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:18.582 [2024-10-14 14:41:59.094343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:2454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.582 [2024-10-14 14:41:59.094359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:18.582 [2024-10-14 14:41:59.106470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:18.582 [2024-10-14 14:41:59.106757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.582 [2024-10-14 14:41:59.106772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:18.582 [2024-10-14 14:41:59.118750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:18.582 [2024-10-14 14:41:59.119043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.582 [2024-10-14 14:41:59.119059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:18.582 [2024-10-14 14:41:59.131001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:18.582 [2024-10-14 14:41:59.131337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.582 [2024-10-14 14:41:59.131353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:18.582 [2024-10-14 14:41:59.143238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:18.582 [2024-10-14 14:41:59.143543] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.582 [2024-10-14 14:41:59.143559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:18.582 [2024-10-14 14:41:59.155527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:18.582 [2024-10-14 14:41:59.155849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.582 [2024-10-14 14:41:59.155864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:18.582 [2024-10-14 14:41:59.167753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:18.582 [2024-10-14 14:41:59.168048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.582 [2024-10-14 14:41:59.168068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:18.582 [2024-10-14 14:41:59.180033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:18.582 [2024-10-14 14:41:59.180357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.582 [2024-10-14 14:41:59.180373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:18.582 [2024-10-14 14:41:59.192270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:18.582 [2024-10-14 
14:41:59.192583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.582 [2024-10-14 14:41:59.192599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:18.582 [2024-10-14 14:41:59.204540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:18.582 [2024-10-14 14:41:59.204856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.582 [2024-10-14 14:41:59.204872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:18.582 [2024-10-14 14:41:59.216799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:18.582 [2024-10-14 14:41:59.217091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.582 [2024-10-14 14:41:59.217107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:18.582 [2024-10-14 14:41:59.229069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:18.582 [2024-10-14 14:41:59.229352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.582 [2024-10-14 14:41:59.229368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:18.582 [2024-10-14 14:41:59.241333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with 
pdu=0x2000166fda78 00:28:18.582 [2024-10-14 14:41:59.241609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.582 [2024-10-14 14:41:59.241625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:18.582 [2024-10-14 14:41:59.253587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:18.582 [2024-10-14 14:41:59.253905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.582 [2024-10-14 14:41:59.253921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:18.582 [2024-10-14 14:41:59.265813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:18.582 [2024-10-14 14:41:59.266097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.582 [2024-10-14 14:41:59.266113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:18.582 [2024-10-14 14:41:59.278138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:18.582 [2024-10-14 14:41:59.278320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.582 [2024-10-14 14:41:59.278335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:18.582 [2024-10-14 14:41:59.290397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:18.582 [2024-10-14 14:41:59.290685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.582 [2024-10-14 14:41:59.290701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:18.582 [2024-10-14 14:41:59.302649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:18.582 [2024-10-14 14:41:59.302966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.582 [2024-10-14 14:41:59.302982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:18.844 [2024-10-14 14:41:59.314912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:18.844 [2024-10-14 14:41:59.315207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.844 [2024-10-14 14:41:59.315226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:18.844 [2024-10-14 14:41:59.327179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:18.844 [2024-10-14 14:41:59.327479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.844 [2024-10-14 14:41:59.327494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:18.844 [2024-10-14 14:41:59.339442] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:18.844 [2024-10-14 14:41:59.339751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.844 [2024-10-14 14:41:59.339767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:18.844 [2024-10-14 14:41:59.351708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:18.844 [2024-10-14 14:41:59.352021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.844 [2024-10-14 14:41:59.352037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:18.844 [2024-10-14 14:41:59.363963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:18.844 [2024-10-14 14:41:59.364259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.844 [2024-10-14 14:41:59.364275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:18.844 [2024-10-14 14:41:59.376228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:18.844 [2024-10-14 14:41:59.376539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.844 [2024-10-14 14:41:59.376554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 
00:28:18.844 [2024-10-14 14:41:59.388485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:18.844 [2024-10-14 14:41:59.388763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.844 [2024-10-14 14:41:59.388778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:18.844 [2024-10-14 14:41:59.400740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:18.844 [2024-10-14 14:41:59.401069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.844 [2024-10-14 14:41:59.401085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:18.844 [2024-10-14 14:41:59.413001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:18.845 [2024-10-14 14:41:59.413313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.845 [2024-10-14 14:41:59.413329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:18.845 [2024-10-14 14:41:59.425269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:18.845 [2024-10-14 14:41:59.425587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.845 [2024-10-14 14:41:59.425606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:18.845 [2024-10-14 14:41:59.437543] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:18.845 [2024-10-14 14:41:59.437917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.845 [2024-10-14 14:41:59.437933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:18.845 [2024-10-14 14:41:59.449799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:18.845 [2024-10-14 14:41:59.450089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.845 [2024-10-14 14:41:59.450105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:18.845 [2024-10-14 14:41:59.462043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:18.845 [2024-10-14 14:41:59.462353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.845 [2024-10-14 14:41:59.462369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:18.845 [2024-10-14 14:41:59.474297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:18.845 [2024-10-14 14:41:59.474622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.845 [2024-10-14 14:41:59.474638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:18.845 [2024-10-14 14:41:59.486551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:18.845 [2024-10-14 14:41:59.486826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.845 [2024-10-14 14:41:59.486842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:18.845 [2024-10-14 14:41:59.498799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61aa80) with pdu=0x2000166fda78 00:28:18.845 [2024-10-14 14:41:59.499806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.845 [2024-10-14 14:41:59.499823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:18.845 20777.50 IOPS, 81.16 MiB/s 00:28:18.845 Latency(us) 00:28:18.845 [2024-10-14T12:41:59.572Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:18.845 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:18.845 nvme0n1 : 2.01 20779.96 81.17 0.00 0.00 6148.06 4969.81 12615.68 00:28:18.845 [2024-10-14T12:41:59.572Z] =================================================================================================================== 00:28:18.845 [2024-10-14T12:41:59.572Z] Total : 20779.96 81.17 0.00 0.00 6148.06 4969.81 12615.68 00:28:18.845 { 00:28:18.845 "results": [ 00:28:18.845 { 00:28:18.845 "job": "nvme0n1", 00:28:18.845 "core_mask": "0x2", 00:28:18.845 "workload": "randwrite", 00:28:18.845 "status": "finished", 00:28:18.845 "queue_depth": 128, 00:28:18.845 "io_size": 4096, 00:28:18.845 "runtime": 2.00703, 00:28:18.845 "iops": 
20779.95844606209, 00:28:18.845 "mibps": 81.17171267993004, 00:28:18.845 "io_failed": 0, 00:28:18.845 "io_timeout": 0, 00:28:18.845 "avg_latency_us": 6148.063328378011, 00:28:18.845 "min_latency_us": 4969.8133333333335, 00:28:18.845 "max_latency_us": 12615.68 00:28:18.845 } 00:28:18.845 ], 00:28:18.845 "core_count": 1 00:28:18.845 } 00:28:18.845 14:41:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:18.845 14:41:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:18.845 14:41:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:18.845 | .driver_specific 00:28:18.845 | .nvme_error 00:28:18.845 | .status_code 00:28:18.845 | .command_transient_transport_error' 00:28:18.845 14:41:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:19.106 14:41:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 163 > 0 )) 00:28:19.106 14:41:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3563439 00:28:19.106 14:41:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 3563439 ']' 00:28:19.106 14:41:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 3563439 00:28:19.106 14:41:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:28:19.106 14:41:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:19.106 14:41:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3563439 00:28:19.106 14:41:59 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:19.106 14:41:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:19.106 14:41:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3563439' 00:28:19.106 killing process with pid 3563439 00:28:19.106 14:41:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 3563439 00:28:19.106 Received shutdown signal, test time was about 2.000000 seconds 00:28:19.106 00:28:19.106 Latency(us) 00:28:19.106 [2024-10-14T12:41:59.833Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:19.106 [2024-10-14T12:41:59.833Z] =================================================================================================================== 00:28:19.106 [2024-10-14T12:41:59.833Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:19.106 14:41:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 3563439 00:28:19.367 14:41:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:28:19.367 14:41:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:19.367 14:41:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:28:19.367 14:41:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:28:19.367 14:41:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:28:19.367 14:41:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3564129 00:28:19.367 14:41:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3564129 /var/tmp/bperf.sock 00:28:19.367 14:41:59 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 3564129 ']' 00:28:19.367 14:41:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:28:19.367 14:41:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:19.367 14:41:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:19.367 14:41:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:19.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:19.367 14:41:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:19.367 14:41:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:19.368 [2024-10-14 14:41:59.921824] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:28:19.368 [2024-10-14 14:41:59.921879] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3564129 ] 00:28:19.368 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:19.368 Zero copy mechanism will not be used. 
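For readers following the trace: `get_transient_errcount` (host/digest.sh@27-28 above) pipes `bdev_get_iostat -b nvme0n1` through the jq filter `.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error`, and the run above observed 163 such errors. A minimal Python sketch of the same extraction, against a hypothetical iostat payload (the field path matches the jq filter seen in this log; the surrounding JSON and sample value are invented for illustration):

```python
import json

# Hypothetical bdev_get_iostat response: only the fields the digest test
# actually reads are included; 163 mirrors the count seen earlier in this log.
iostat_json = '''
{
  "bdevs": [
    {
      "name": "nvme0n1",
      "driver_specific": {
        "nvme_error": {
          "status_code": {
            "command_transient_transport_error": 163
          }
        }
      }
    }
  ]
}
'''

def get_transient_errcount(payload: str) -> int:
    """Python equivalent of the jq filter used by host/digest.sh."""
    stats = json.loads(payload)
    return stats["bdevs"][0]["driver_specific"]["nvme_error"][
        "status_code"]["command_transient_transport_error"]

count = get_transient_errcount(iostat_json)
print(count)  # the digest error test asserts this count is > 0
```

The test script treats a nonzero counter as proof that the injected CRC32C corruption was detected and surfaced as TRANSIENT TRANSPORT ERROR (00/22) completions, which is exactly what the repeated `data_crc32_calc_done` records in this log show.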
00:28:19.368 [2024-10-14 14:41:59.998509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:19.368 [2024-10-14 14:42:00.029953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:20.310 14:42:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:20.310 14:42:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:28:20.310 14:42:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:20.310 14:42:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:20.310 14:42:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:20.310 14:42:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.310 14:42:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:20.310 14:42:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.310 14:42:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:20.310 14:42:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:20.571 nvme0n1 00:28:20.571 14:42:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:28:20.571 14:42:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.571 14:42:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:20.571 14:42:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.571 14:42:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:20.571 14:42:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:20.571 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:20.571 Zero copy mechanism will not be used. 00:28:20.571 Running I/O for 2 seconds... 00:28:20.571 [2024-10-14 14:42:01.256594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:20.571 [2024-10-14 14:42:01.256822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.571 [2024-10-14 14:42:01.256850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:20.571 [2024-10-14 14:42:01.261579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:20.571 [2024-10-14 14:42:01.261800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.571 [2024-10-14 14:42:01.261821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:20.571 
[2024-10-14 14:42:01.267447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:20.571 [2024-10-14 14:42:01.267662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.571 [2024-10-14 14:42:01.267681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:20.571 [2024-10-14 14:42:01.274302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:20.571 [2024-10-14 14:42:01.274511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.571 [2024-10-14 14:42:01.274529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.571 [2024-10-14 14:42:01.280225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:20.571 [2024-10-14 14:42:01.280427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.572 [2024-10-14 14:42:01.280445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:20.572 [2024-10-14 14:42:01.287273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:20.572 [2024-10-14 14:42:01.287474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.572 [2024-10-14 14:42:01.287492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:20.572 [2024-10-14 14:42:01.291903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:20.572 [2024-10-14 14:42:01.292102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.572 [2024-10-14 14:42:01.292119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:20.572 [2024-10-14 14:42:01.297076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:20.572 [2024-10-14 14:42:01.297282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.572 [2024-10-14 14:42:01.297300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.572 [2024-10-14 14:42:01.301356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:20.833 [2024-10-14 14:42:01.301559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.833 [2024-10-14 14:42:01.301577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:20.833 [2024-10-14 14:42:01.305713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:20.833 [2024-10-14 14:42:01.305913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.834 [2024-10-14 14:42:01.305935] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:20.834 [2024-10-14 14:42:01.310333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:20.834 [2024-10-14 14:42:01.310536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.834 [2024-10-14 14:42:01.310553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:20.834 [2024-10-14 14:42:01.316321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:20.834 [2024-10-14 14:42:01.316646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.834 [2024-10-14 14:42:01.316664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.834 [2024-10-14 14:42:01.321320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:20.834 [2024-10-14 14:42:01.321522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.834 [2024-10-14 14:42:01.321540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:20.834 [2024-10-14 14:42:01.325479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:20.834 [2024-10-14 14:42:01.325681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.834 [2024-10-14 
14:42:01.325698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:20.834 [2024-10-14 14:42:01.329396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:20.834 [2024-10-14 14:42:01.329598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.834 [2024-10-14 14:42:01.329615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:20.834 [2024-10-14 14:42:01.333598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:20.834 [2024-10-14 14:42:01.333797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.834 [2024-10-14 14:42:01.333815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.834 [2024-10-14 14:42:01.339475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:20.834 [2024-10-14 14:42:01.339835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.834 [2024-10-14 14:42:01.339852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:20.834 [2024-10-14 14:42:01.345107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:20.834 [2024-10-14 14:42:01.345310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:28:20.834 [2024-10-14 14:42:01.345327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:20.834 [2024-10-14 14:42:01.349545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:20.834 [2024-10-14 14:42:01.349752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.834 [2024-10-14 14:42:01.349769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:20.834 [2024-10-14 14:42:01.356341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:20.834 [2024-10-14 14:42:01.356630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.834 [2024-10-14 14:42:01.356648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.834 [2024-10-14 14:42:01.361113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:20.834 [2024-10-14 14:42:01.361305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.834 [2024-10-14 14:42:01.361322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:20.834 [2024-10-14 14:42:01.365518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:20.834 [2024-10-14 14:42:01.365708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.834 [2024-10-14 14:42:01.365725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:20.834 [2024-10-14 14:42:01.371925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:20.834 [2024-10-14 14:42:01.372119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.834 [2024-10-14 14:42:01.372136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:20.834 [2024-10-14 14:42:01.376076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:20.834 [2024-10-14 14:42:01.376265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.834 [2024-10-14 14:42:01.376282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.834 [2024-10-14 14:42:01.381907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:20.834 [2024-10-14 14:42:01.382227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.834 [2024-10-14 14:42:01.382245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:20.834 [2024-10-14 14:42:01.390641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:20.834 [2024-10-14 14:42:01.390833] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.834 [2024-10-14 14:42:01.390850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:20.834 [2024-10-14 14:42:01.395511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:20.834 [2024-10-14 14:42:01.395700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.834 [2024-10-14 14:42:01.395717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:20.834 [2024-10-14 14:42:01.402984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:20.834 [2024-10-14 14:42:01.403179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.834 [2024-10-14 14:42:01.403197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.834 [2024-10-14 14:42:01.407292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:20.834 [2024-10-14 14:42:01.407483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.834 [2024-10-14 14:42:01.407500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:20.834 [2024-10-14 14:42:01.411633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 
00:28:20.834 [2024-10-14 14:42:01.411822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.834 [2024-10-14 14:42:01.411839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:20.834 [2024-10-14 14:42:01.416511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:20.834 [2024-10-14 14:42:01.416709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.834 [2024-10-14 14:42:01.416726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:20.834 [2024-10-14 14:42:01.421910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:20.834 [2024-10-14 14:42:01.422104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.834 [2024-10-14 14:42:01.422120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.834 [2024-10-14 14:42:01.426247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:20.834 [2024-10-14 14:42:01.426438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.834 [2024-10-14 14:42:01.426455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:20.834 [2024-10-14 14:42:01.433646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:20.834 [2024-10-14 14:42:01.434001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.834 [2024-10-14 14:42:01.434019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:20.834 [2024-10-14 14:42:01.440684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:20.834 [2024-10-14 14:42:01.440873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.834 [2024-10-14 14:42:01.440890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:20.834 [2024-10-14 14:42:01.445183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:20.834 [2024-10-14 14:42:01.445376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.834 [2024-10-14 14:42:01.445396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.834 [2024-10-14 14:42:01.451337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:20.834 [2024-10-14 14:42:01.451593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.834 [2024-10-14 14:42:01.451611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:20.834 [2024-10-14 14:42:01.460223] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:20.834 [2024-10-14 14:42:01.460418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.834 [2024-10-14 14:42:01.460436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:20.835 [2024-10-14 14:42:01.466442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:20.835 [2024-10-14 14:42:01.466632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.835 [2024-10-14 14:42:01.466649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:20.835 [2024-10-14 14:42:01.470268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:20.835 [2024-10-14 14:42:01.470456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.835 [2024-10-14 14:42:01.470474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.835 [2024-10-14 14:42:01.474154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:20.835 [2024-10-14 14:42:01.474336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.835 [2024-10-14 14:42:01.474354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 
m:0 dnr:0 00:28:20.835 [2024-10-14 14:42:01.481012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:20.835 [2024-10-14 14:42:01.481197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.835 [2024-10-14 14:42:01.481215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:20.835 [2024-10-14 14:42:01.487447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:20.835 [2024-10-14 14:42:01.487714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.835 [2024-10-14 14:42:01.487733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:20.835 [2024-10-14 14:42:01.495407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:20.835 [2024-10-14 14:42:01.495683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.835 [2024-10-14 14:42:01.495701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.835 [2024-10-14 14:42:01.504687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:20.835 [2024-10-14 14:42:01.504881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.835 [2024-10-14 14:42:01.504900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:20.835 [2024-10-14 14:42:01.514814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:20.835 [2024-10-14 14:42:01.515036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.835 [2024-10-14 14:42:01.515054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:20.835 [2024-10-14 14:42:01.526367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:20.835 [2024-10-14 14:42:01.526726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.835 [2024-10-14 14:42:01.526743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:20.835 [2024-10-14 14:42:01.535656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:20.835 [2024-10-14 14:42:01.535952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.835 [2024-10-14 14:42:01.535969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:20.835 [2024-10-14 14:42:01.542619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:20.835 [2024-10-14 14:42:01.542802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.835 [2024-10-14 14:42:01.542819] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:20.835 [2024-10-14 14:42:01.551891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:20.835 [2024-10-14 14:42:01.552060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.835 [2024-10-14 14:42:01.552083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:20.835 [2024-10-14 14:42:01.559410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:20.835 [2024-10-14 14:42:01.559583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:20.835 [2024-10-14 14:42:01.559602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.096 [2024-10-14 14:42:01.565159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.096 [2024-10-14 14:42:01.565383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.096 [2024-10-14 14:42:01.565399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.096 [2024-10-14 14:42:01.572660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.096 [2024-10-14 14:42:01.572836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:21.096 [2024-10-14 14:42:01.572853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.096 [2024-10-14 14:42:01.582995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.096 [2024-10-14 14:42:01.583236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.096 [2024-10-14 14:42:01.583253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.096 [2024-10-14 14:42:01.592926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.096 [2024-10-14 14:42:01.593158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.096 [2024-10-14 14:42:01.593175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.096 [2024-10-14 14:42:01.602432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.096 [2024-10-14 14:42:01.602635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.096 [2024-10-14 14:42:01.602652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.096 [2024-10-14 14:42:01.607615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.096 [2024-10-14 14:42:01.607788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.096 [2024-10-14 14:42:01.607806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.097 [2024-10-14 14:42:01.611967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.097 [2024-10-14 14:42:01.612148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.097 [2024-10-14 14:42:01.612165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.097 [2024-10-14 14:42:01.615784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.097 [2024-10-14 14:42:01.615957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.097 [2024-10-14 14:42:01.615974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.097 [2024-10-14 14:42:01.619892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.097 [2024-10-14 14:42:01.620073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.097 [2024-10-14 14:42:01.620090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.097 [2024-10-14 14:42:01.624801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.097 [2024-10-14 14:42:01.624971] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.097 [2024-10-14 14:42:01.624988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.097 [2024-10-14 14:42:01.632814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.097 [2024-10-14 14:42:01.633122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.097 [2024-10-14 14:42:01.633143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.097 [2024-10-14 14:42:01.642068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.097 [2024-10-14 14:42:01.642334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.097 [2024-10-14 14:42:01.642351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.097 [2024-10-14 14:42:01.651310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.097 [2024-10-14 14:42:01.651480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.097 [2024-10-14 14:42:01.651497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.097 [2024-10-14 14:42:01.661628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 
00:28:21.097 [2024-10-14 14:42:01.661810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.097 [2024-10-14 14:42:01.661828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.097 [2024-10-14 14:42:01.672108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.097 [2024-10-14 14:42:01.672358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.097 [2024-10-14 14:42:01.672375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.097 [2024-10-14 14:42:01.682122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.097 [2024-10-14 14:42:01.682297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.097 [2024-10-14 14:42:01.682315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.097 [2024-10-14 14:42:01.691000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.097 [2024-10-14 14:42:01.691225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.097 [2024-10-14 14:42:01.691243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.097 [2024-10-14 14:42:01.700437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.097 [2024-10-14 14:42:01.700606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.097 [2024-10-14 14:42:01.700623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.097 [2024-10-14 14:42:01.709991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.097 [2024-10-14 14:42:01.710251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.097 [2024-10-14 14:42:01.710268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.097 [2024-10-14 14:42:01.719887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.097 [2024-10-14 14:42:01.720337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.097 [2024-10-14 14:42:01.720354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.097 [2024-10-14 14:42:01.729503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.097 [2024-10-14 14:42:01.729792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.097 [2024-10-14 14:42:01.729809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.097 [2024-10-14 14:42:01.737171] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.097 [2024-10-14 14:42:01.737624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.097 [2024-10-14 14:42:01.737642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.097 [2024-10-14 14:42:01.748701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.097 [2024-10-14 14:42:01.748971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.097 [2024-10-14 14:42:01.748988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.097 [2024-10-14 14:42:01.756126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.097 [2024-10-14 14:42:01.756290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.097 [2024-10-14 14:42:01.756308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.097 [2024-10-14 14:42:01.759801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.097 [2024-10-14 14:42:01.759965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.097 [2024-10-14 14:42:01.759982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:28:21.097 [2024-10-14 14:42:01.763276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.097 [2024-10-14 14:42:01.763449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.097 [2024-10-14 14:42:01.763466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.097 [2024-10-14 14:42:01.767189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.097 [2024-10-14 14:42:01.767351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.097 [2024-10-14 14:42:01.767367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.097 [2024-10-14 14:42:01.770966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.097 [2024-10-14 14:42:01.771136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.097 [2024-10-14 14:42:01.771153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.097 [2024-10-14 14:42:01.774541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.097 [2024-10-14 14:42:01.774703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.097 [2024-10-14 14:42:01.774721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.097 [2024-10-14 14:42:01.777966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.097 [2024-10-14 14:42:01.778133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.097 [2024-10-14 14:42:01.778150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.097 [2024-10-14 14:42:01.781384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.097 [2024-10-14 14:42:01.781546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.097 [2024-10-14 14:42:01.781563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.097 [2024-10-14 14:42:01.784786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.097 [2024-10-14 14:42:01.784949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.097 [2024-10-14 14:42:01.784966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.097 [2024-10-14 14:42:01.788192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.097 [2024-10-14 14:42:01.788355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.097 [2024-10-14 14:42:01.788371] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.097 [2024-10-14 14:42:01.791593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.097 [2024-10-14 14:42:01.791752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.097 [2024-10-14 14:42:01.791769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.097 [2024-10-14 14:42:01.794986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.098 [2024-10-14 14:42:01.795151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.098 [2024-10-14 14:42:01.795168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.098 [2024-10-14 14:42:01.798378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.098 [2024-10-14 14:42:01.798537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.098 [2024-10-14 14:42:01.798554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.098 [2024-10-14 14:42:01.802237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.098 [2024-10-14 14:42:01.802406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:21.098 [2024-10-14 14:42:01.802426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.098 [2024-10-14 14:42:01.807022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.098 [2024-10-14 14:42:01.807352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.098 [2024-10-14 14:42:01.807370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.098 [2024-10-14 14:42:01.815870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.098 [2024-10-14 14:42:01.816111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.098 [2024-10-14 14:42:01.816128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.360 [2024-10-14 14:42:01.826116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.360 [2024-10-14 14:42:01.826359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.360 [2024-10-14 14:42:01.826376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.360 [2024-10-14 14:42:01.836694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.360 [2024-10-14 14:42:01.836953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.360 [2024-10-14 14:42:01.836971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.360 [2024-10-14 14:42:01.847013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.360 [2024-10-14 14:42:01.847283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.360 [2024-10-14 14:42:01.847300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.360 [2024-10-14 14:42:01.856890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.360 [2024-10-14 14:42:01.857096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.360 [2024-10-14 14:42:01.857113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.360 [2024-10-14 14:42:01.867147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.360 [2024-10-14 14:42:01.867413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.360 [2024-10-14 14:42:01.867431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.360 [2024-10-14 14:42:01.877449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.360 [2024-10-14 14:42:01.877722] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.360 [2024-10-14 14:42:01.877739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.360 [2024-10-14 14:42:01.887534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.360 [2024-10-14 14:42:01.887727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.360 [2024-10-14 14:42:01.887745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.360 [2024-10-14 14:42:01.898082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.360 [2024-10-14 14:42:01.898318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.360 [2024-10-14 14:42:01.898335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.360 [2024-10-14 14:42:01.907531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.360 [2024-10-14 14:42:01.907928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.360 [2024-10-14 14:42:01.907945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.360 [2024-10-14 14:42:01.917854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 
00:28:21.360 [2024-10-14 14:42:01.918185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.360 [2024-10-14 14:42:01.918202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.360 [2024-10-14 14:42:01.927700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.360 [2024-10-14 14:42:01.927942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.360 [2024-10-14 14:42:01.927959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.360 [2024-10-14 14:42:01.937928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.360 [2024-10-14 14:42:01.938185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.360 [2024-10-14 14:42:01.938202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.360 [2024-10-14 14:42:01.948153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.360 [2024-10-14 14:42:01.948443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.360 [2024-10-14 14:42:01.948461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.360 [2024-10-14 14:42:01.958681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.360 [2024-10-14 14:42:01.958897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.360 [2024-10-14 14:42:01.958913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.360 [2024-10-14 14:42:01.969319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.360 [2024-10-14 14:42:01.969639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.360 [2024-10-14 14:42:01.969656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.360 [2024-10-14 14:42:01.979422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.360 [2024-10-14 14:42:01.979704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.360 [2024-10-14 14:42:01.979721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.360 [2024-10-14 14:42:01.990394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.360 [2024-10-14 14:42:01.990456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.361 [2024-10-14 14:42:01.990472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.361 [2024-10-14 14:42:02.001154] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.361 [2024-10-14 14:42:02.001425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.361 [2024-10-14 14:42:02.001442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.361 [2024-10-14 14:42:02.012877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.361 [2024-10-14 14:42:02.013030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.361 [2024-10-14 14:42:02.013047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.361 [2024-10-14 14:42:02.019586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.361 [2024-10-14 14:42:02.019637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.361 [2024-10-14 14:42:02.019653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.361 [2024-10-14 14:42:02.025907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.361 [2024-10-14 14:42:02.026009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.361 [2024-10-14 14:42:02.026025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:28:21.361 [2024-10-14 14:42:02.029989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.361 [2024-10-14 14:42:02.030040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.361 [2024-10-14 14:42:02.030056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.361 [2024-10-14 14:42:02.036125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.361 [2024-10-14 14:42:02.036197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.361 [2024-10-14 14:42:02.036212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.361 [2024-10-14 14:42:02.039958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.361 [2024-10-14 14:42:02.040011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.361 [2024-10-14 14:42:02.040029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.361 [2024-10-14 14:42:02.043851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.361 [2024-10-14 14:42:02.043907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.361 [2024-10-14 14:42:02.043923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.361 [2024-10-14 14:42:02.048139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.361 [2024-10-14 14:42:02.048400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.361 [2024-10-14 14:42:02.048416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.361 [2024-10-14 14:42:02.053860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.361 [2024-10-14 14:42:02.053914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.361 [2024-10-14 14:42:02.053930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.361 [2024-10-14 14:42:02.058622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.361 [2024-10-14 14:42:02.058692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.361 [2024-10-14 14:42:02.058707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.361 [2024-10-14 14:42:02.062221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.361 [2024-10-14 14:42:02.062272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.361 [2024-10-14 14:42:02.062288] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.361 [2024-10-14 14:42:02.065827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.361 [2024-10-14 14:42:02.065896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.361 [2024-10-14 14:42:02.065911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.361 [2024-10-14 14:42:02.073358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.361 [2024-10-14 14:42:02.073600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.361 [2024-10-14 14:42:02.073615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.361 [2024-10-14 14:42:02.078069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.361 [2024-10-14 14:42:02.078148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.361 [2024-10-14 14:42:02.078164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.361 [2024-10-14 14:42:02.081833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.361 [2024-10-14 14:42:02.081898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:21.361 [2024-10-14 14:42:02.081913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.361 [2024-10-14 14:42:02.085463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.361 [2024-10-14 14:42:02.085513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.361 [2024-10-14 14:42:02.085528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.623 [2024-10-14 14:42:02.091431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.624 [2024-10-14 14:42:02.091488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.624 [2024-10-14 14:42:02.091503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.624 [2024-10-14 14:42:02.098274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.624 [2024-10-14 14:42:02.098347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.624 [2024-10-14 14:42:02.098362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.624 [2024-10-14 14:42:02.102572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.624 [2024-10-14 14:42:02.102643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.624 [2024-10-14 14:42:02.102659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.624 [2024-10-14 14:42:02.106864] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.624 [2024-10-14 14:42:02.106930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.624 [2024-10-14 14:42:02.106945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.624 [2024-10-14 14:42:02.110981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.624 [2024-10-14 14:42:02.111044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.624 [2024-10-14 14:42:02.111060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.624 [2024-10-14 14:42:02.114786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.624 [2024-10-14 14:42:02.114837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.624 [2024-10-14 14:42:02.114853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.624 [2024-10-14 14:42:02.118584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.624 [2024-10-14 14:42:02.118643] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.624 [2024-10-14 14:42:02.118658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.624 [2024-10-14 14:42:02.122471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.624 [2024-10-14 14:42:02.122527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.624 [2024-10-14 14:42:02.122543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.624 [2024-10-14 14:42:02.126212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.624 [2024-10-14 14:42:02.126275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.624 [2024-10-14 14:42:02.126290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.624 [2024-10-14 14:42:02.130213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.624 [2024-10-14 14:42:02.130276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.624 [2024-10-14 14:42:02.130291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.624 [2024-10-14 14:42:02.135499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 
00:28:21.624 [2024-10-14 14:42:02.135694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.624 [2024-10-14 14:42:02.135710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.624 [2024-10-14 14:42:02.142008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.624 [2024-10-14 14:42:02.142075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.624 [2024-10-14 14:42:02.142090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.624 [2024-10-14 14:42:02.146257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.624 [2024-10-14 14:42:02.146310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.624 [2024-10-14 14:42:02.146325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.624 [2024-10-14 14:42:02.153475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.624 [2024-10-14 14:42:02.153527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.624 [2024-10-14 14:42:02.153543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.624 [2024-10-14 14:42:02.158594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.624 [2024-10-14 14:42:02.158648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.624 [2024-10-14 14:42:02.158664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.624 [2024-10-14 14:42:02.165473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.624 [2024-10-14 14:42:02.165528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.624 [2024-10-14 14:42:02.165546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.624 [2024-10-14 14:42:02.171074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.624 [2024-10-14 14:42:02.171134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.624 [2024-10-14 14:42:02.171150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.624 [2024-10-14 14:42:02.176401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.624 [2024-10-14 14:42:02.176473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.624 [2024-10-14 14:42:02.176489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.624 [2024-10-14 14:42:02.181966] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.624 [2024-10-14 14:42:02.182028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.624 [2024-10-14 14:42:02.182043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.624 [2024-10-14 14:42:02.186907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.624 [2024-10-14 14:42:02.186959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.624 [2024-10-14 14:42:02.186974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.624 [2024-10-14 14:42:02.190969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.624 [2024-10-14 14:42:02.191023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.624 [2024-10-14 14:42:02.191038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.624 [2024-10-14 14:42:02.195600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.624 [2024-10-14 14:42:02.195652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.624 [2024-10-14 14:42:02.195667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:28:21.624 [2024-10-14 14:42:02.202877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.624 [2024-10-14 14:42:02.202936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.624 [2024-10-14 14:42:02.202952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.624 [2024-10-14 14:42:02.206806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.624 [2024-10-14 14:42:02.206863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.624 [2024-10-14 14:42:02.206879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.624 [2024-10-14 14:42:02.210915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.624 [2024-10-14 14:42:02.210972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.624 [2024-10-14 14:42:02.210988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.624 [2024-10-14 14:42:02.214966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.624 [2024-10-14 14:42:02.215026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.625 [2024-10-14 14:42:02.215041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.625 [2024-10-14 14:42:02.219019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.625 [2024-10-14 14:42:02.219074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.625 [2024-10-14 14:42:02.219089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.625 [2024-10-14 14:42:02.227132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.625 [2024-10-14 14:42:02.227203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.625 [2024-10-14 14:42:02.227218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.625 [2024-10-14 14:42:02.230695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.625 [2024-10-14 14:42:02.230749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.625 [2024-10-14 14:42:02.230765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.625 [2024-10-14 14:42:02.236136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.625 [2024-10-14 14:42:02.236428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.625 [2024-10-14 14:42:02.236444] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.625 [2024-10-14 14:42:02.241832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.625 [2024-10-14 14:42:02.242978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.625 [2024-10-14 14:42:02.242995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.625 4804.00 IOPS, 600.50 MiB/s [2024-10-14T12:42:02.352Z] [2024-10-14 14:42:02.246658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.625 [2024-10-14 14:42:02.246719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.625 [2024-10-14 14:42:02.246735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.625 [2024-10-14 14:42:02.250799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.625 [2024-10-14 14:42:02.250853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.625 [2024-10-14 14:42:02.250871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.625 [2024-10-14 14:42:02.257275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.625 [2024-10-14 14:42:02.257381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10080 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.625 [2024-10-14 14:42:02.257397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.625 [2024-10-14 14:42:02.265365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.625 [2024-10-14 14:42:02.265554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.625 [2024-10-14 14:42:02.265571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.625 [2024-10-14 14:42:02.273903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.625 [2024-10-14 14:42:02.274015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.625 [2024-10-14 14:42:02.274031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.625 [2024-10-14 14:42:02.279463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.625 [2024-10-14 14:42:02.279535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.625 [2024-10-14 14:42:02.279551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.625 [2024-10-14 14:42:02.285151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.625 [2024-10-14 14:42:02.285226] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.625 [2024-10-14 14:42:02.285242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.625 [2024-10-14 14:42:02.292987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.625 [2024-10-14 14:42:02.293242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.625 [2024-10-14 14:42:02.293258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.625 [2024-10-14 14:42:02.301559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.625 [2024-10-14 14:42:02.301624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.625 [2024-10-14 14:42:02.301639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.625 [2024-10-14 14:42:02.309226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.625 [2024-10-14 14:42:02.309316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.625 [2024-10-14 14:42:02.309331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.625 [2024-10-14 14:42:02.318033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.625 [2024-10-14 14:42:02.318356] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.625 [2024-10-14 14:42:02.318373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.625 [2024-10-14 14:42:02.326732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.625 [2024-10-14 14:42:02.326803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.625 [2024-10-14 14:42:02.326819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.625 [2024-10-14 14:42:02.335783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.625 [2024-10-14 14:42:02.336019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.625 [2024-10-14 14:42:02.336035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.625 [2024-10-14 14:42:02.344786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.625 [2024-10-14 14:42:02.345013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.625 [2024-10-14 14:42:02.345030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.887 [2024-10-14 14:42:02.354090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 
00:28:21.887 [2024-10-14 14:42:02.354369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.887 [2024-10-14 14:42:02.354386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.887 [2024-10-14 14:42:02.363257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.887 [2024-10-14 14:42:02.363310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.887 [2024-10-14 14:42:02.363326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.887 [2024-10-14 14:42:02.371815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.887 [2024-10-14 14:42:02.371866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.887 [2024-10-14 14:42:02.371881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.887 [2024-10-14 14:42:02.380479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.887 [2024-10-14 14:42:02.380723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.887 [2024-10-14 14:42:02.380739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.887 [2024-10-14 14:42:02.388602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.887 [2024-10-14 14:42:02.388662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.887 [2024-10-14 14:42:02.388677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.887 [2024-10-14 14:42:02.397753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.887 [2024-10-14 14:42:02.397828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.887 [2024-10-14 14:42:02.397843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.887 [2024-10-14 14:42:02.404550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.887 [2024-10-14 14:42:02.404787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.887 [2024-10-14 14:42:02.404803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.887 [2024-10-14 14:42:02.413893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.887 [2024-10-14 14:42:02.413951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.887 [2024-10-14 14:42:02.413966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.887 [2024-10-14 14:42:02.425343] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.887 [2024-10-14 14:42:02.425402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.887 [2024-10-14 14:42:02.425418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.887 [2024-10-14 14:42:02.433430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.887 [2024-10-14 14:42:02.433644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.887 [2024-10-14 14:42:02.433661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.887 [2024-10-14 14:42:02.442233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.887 [2024-10-14 14:42:02.442497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.887 [2024-10-14 14:42:02.442514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.887 [2024-10-14 14:42:02.450837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.887 [2024-10-14 14:42:02.450997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.887 [2024-10-14 14:42:02.451012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:28:21.887 [2024-10-14 14:42:02.459942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.887 [2024-10-14 14:42:02.460189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.887 [2024-10-14 14:42:02.460206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.887 [2024-10-14 14:42:02.468816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.887 [2024-10-14 14:42:02.468877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.887 [2024-10-14 14:42:02.468895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.887 [2024-10-14 14:42:02.476202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.887 [2024-10-14 14:42:02.476257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.887 [2024-10-14 14:42:02.476272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.887 [2024-10-14 14:42:02.485966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.887 [2024-10-14 14:42:02.486072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.887 [2024-10-14 14:42:02.486088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.887 [2024-10-14 14:42:02.494238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.887 [2024-10-14 14:42:02.494432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.887 [2024-10-14 14:42:02.494447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.887 [2024-10-14 14:42:02.500585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.887 [2024-10-14 14:42:02.500808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.887 [2024-10-14 14:42:02.500824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.887 [2024-10-14 14:42:02.510200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.887 [2024-10-14 14:42:02.510444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.888 [2024-10-14 14:42:02.510460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.888 [2024-10-14 14:42:02.519974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.888 [2024-10-14 14:42:02.520037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.888 [2024-10-14 14:42:02.520053] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.888 [2024-10-14 14:42:02.528462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.888 [2024-10-14 14:42:02.528519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.888 [2024-10-14 14:42:02.528535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.888 [2024-10-14 14:42:02.534801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.888 [2024-10-14 14:42:02.534870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.888 [2024-10-14 14:42:02.534887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.888 [2024-10-14 14:42:02.540527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.888 [2024-10-14 14:42:02.540607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.888 [2024-10-14 14:42:02.540626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.888 [2024-10-14 14:42:02.545287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.888 [2024-10-14 14:42:02.545348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.888 [2024-10-14 14:42:02.545363] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.888 [2024-10-14 14:42:02.549705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.888 [2024-10-14 14:42:02.549761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.888 [2024-10-14 14:42:02.549777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.888 [2024-10-14 14:42:02.554202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.888 [2024-10-14 14:42:02.554270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.888 [2024-10-14 14:42:02.554285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.888 [2024-10-14 14:42:02.558487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.888 [2024-10-14 14:42:02.558539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.888 [2024-10-14 14:42:02.558555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.888 [2024-10-14 14:42:02.562773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.888 [2024-10-14 14:42:02.562852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:21.888 [2024-10-14 14:42:02.562867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.888 [2024-10-14 14:42:02.567938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.888 [2024-10-14 14:42:02.568010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.888 [2024-10-14 14:42:02.568025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.888 [2024-10-14 14:42:02.575128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.888 [2024-10-14 14:42:02.575405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.888 [2024-10-14 14:42:02.575421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.888 [2024-10-14 14:42:02.582295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.888 [2024-10-14 14:42:02.582541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.888 [2024-10-14 14:42:02.582557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.888 [2024-10-14 14:42:02.587792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.888 [2024-10-14 14:42:02.587857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.888 [2024-10-14 14:42:02.587873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:21.888 [2024-10-14 14:42:02.593534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.888 [2024-10-14 14:42:02.593588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.888 [2024-10-14 14:42:02.593603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:21.888 [2024-10-14 14:42:02.597752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.888 [2024-10-14 14:42:02.597823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.888 [2024-10-14 14:42:02.597838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:21.888 [2024-10-14 14:42:02.604922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.888 [2024-10-14 14:42:02.605104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.888 [2024-10-14 14:42:02.605120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:21.888 [2024-10-14 14:42:02.614707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:21.888 [2024-10-14 14:42:02.615021] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:21.888 [2024-10-14 14:42:02.615037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.150 [2024-10-14 14:42:02.620562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:22.150 [2024-10-14 14:42:02.620620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.150 [2024-10-14 14:42:02.620636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.151 [2024-10-14 14:42:02.628638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:22.151 [2024-10-14 14:42:02.628690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.151 [2024-10-14 14:42:02.628706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.151 [2024-10-14 14:42:02.633558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:22.151 [2024-10-14 14:42:02.633621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.151 [2024-10-14 14:42:02.633636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.151 [2024-10-14 14:42:02.640293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:22.151 [2024-10-14 14:42:02.640351] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.151 [2024-10-14 14:42:02.640366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.151 [2024-10-14 14:42:02.646803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:22.151 [2024-10-14 14:42:02.647046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.151 [2024-10-14 14:42:02.647068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.151 [2024-10-14 14:42:02.653296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:22.151 [2024-10-14 14:42:02.653360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.151 [2024-10-14 14:42:02.653375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.151 [2024-10-14 14:42:02.659115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:22.151 [2024-10-14 14:42:02.659342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.151 [2024-10-14 14:42:02.659358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.151 [2024-10-14 14:42:02.665755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 
00:28:22.151 [2024-10-14 14:42:02.665808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.151 [2024-10-14 14:42:02.665824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.151 [2024-10-14 14:42:02.670738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:22.151 [2024-10-14 14:42:02.670816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.151 [2024-10-14 14:42:02.670832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.151 [2024-10-14 14:42:02.678191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:22.151 [2024-10-14 14:42:02.678439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.151 [2024-10-14 14:42:02.678455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.151 [2024-10-14 14:42:02.687616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:22.151 [2024-10-14 14:42:02.687699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.151 [2024-10-14 14:42:02.687714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.151 [2024-10-14 14:42:02.694785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:22.151 [2024-10-14 14:42:02.694841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.151 [2024-10-14 14:42:02.694856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.151 [2024-10-14 14:42:02.703288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:22.151 [2024-10-14 14:42:02.703466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.151 [2024-10-14 14:42:02.703488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.151 [2024-10-14 14:42:02.714158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:22.151 [2024-10-14 14:42:02.714221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.151 [2024-10-14 14:42:02.714237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.151 [2024-10-14 14:42:02.725784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:22.151 [2024-10-14 14:42:02.726054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.151 [2024-10-14 14:42:02.726076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.151 [2024-10-14 14:42:02.736971] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:22.151 [2024-10-14 14:42:02.737255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.151 [2024-10-14 14:42:02.737271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.151 [2024-10-14 14:42:02.748915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:22.151 [2024-10-14 14:42:02.748983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.151 [2024-10-14 14:42:02.748998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.151 [2024-10-14 14:42:02.760877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:22.151 [2024-10-14 14:42:02.760956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.151 [2024-10-14 14:42:02.760972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.151 [2024-10-14 14:42:02.772683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:22.151 [2024-10-14 14:42:02.773011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.151 [2024-10-14 14:42:02.773028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:28:22.151 [2024-10-14 14:42:02.784114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:22.151 [2024-10-14 14:42:02.784388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.151 [2024-10-14 14:42:02.784404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.151 [2024-10-14 14:42:02.795566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:22.151 [2024-10-14 14:42:02.795852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.151 [2024-10-14 14:42:02.795868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.151 [2024-10-14 14:42:02.807736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:22.151 [2024-10-14 14:42:02.808028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.151 [2024-10-14 14:42:02.808044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.151 [2024-10-14 14:42:02.818922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:22.151 [2024-10-14 14:42:02.819283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.151 [2024-10-14 14:42:02.819300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.151 [2024-10-14 14:42:02.829880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:22.151 [2024-10-14 14:42:02.830272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.151 [2024-10-14 14:42:02.830288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.151 [2024-10-14 14:42:02.841129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:22.151 [2024-10-14 14:42:02.841452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.151 [2024-10-14 14:42:02.841469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.151 [2024-10-14 14:42:02.851916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:22.151 [2024-10-14 14:42:02.852215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.151 [2024-10-14 14:42:02.852231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.151 [2024-10-14 14:42:02.863779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:22.151 [2024-10-14 14:42:02.863967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.151 [2024-10-14 14:42:02.863983] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.151 [2024-10-14 14:42:02.874772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:22.151 [2024-10-14 14:42:02.875177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.152 [2024-10-14 14:42:02.875192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.414 [2024-10-14 14:42:02.886307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:22.414 [2024-10-14 14:42:02.886575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.414 [2024-10-14 14:42:02.886590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.414 [2024-10-14 14:42:02.897392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:22.414 [2024-10-14 14:42:02.897680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.414 [2024-10-14 14:42:02.897697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.414 [2024-10-14 14:42:02.908995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:22.414 [2024-10-14 14:42:02.909201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.414 [2024-10-14 14:42:02.909217] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.414 [2024-10-14 14:42:02.919936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:22.414 [2024-10-14 14:42:02.920011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.414 [2024-10-14 14:42:02.920026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.414 [2024-10-14 14:42:02.931314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:22.414 [2024-10-14 14:42:02.931553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.414 [2024-10-14 14:42:02.931569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.415 [2024-10-14 14:42:02.942951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:22.415 [2024-10-14 14:42:02.943193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.415 [2024-10-14 14:42:02.943210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.415 [2024-10-14 14:42:02.954248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:22.415 [2024-10-14 14:42:02.954520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:22.415 [2024-10-14 14:42:02.954536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.415 [2024-10-14 14:42:02.966256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:22.415 [2024-10-14 14:42:02.966311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.415 [2024-10-14 14:42:02.966326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.415 [2024-10-14 14:42:02.978011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:22.415 [2024-10-14 14:42:02.978423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.415 [2024-10-14 14:42:02.978439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.415 [2024-10-14 14:42:02.989727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:22.415 [2024-10-14 14:42:02.989999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.415 [2024-10-14 14:42:02.990015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.415 [2024-10-14 14:42:03.001177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:22.415 [2024-10-14 14:42:03.001483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17152 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.415 [2024-10-14 14:42:03.001502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.415 [2024-10-14 14:42:03.012422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:22.415 [2024-10-14 14:42:03.012704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.415 [2024-10-14 14:42:03.012720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.415 [2024-10-14 14:42:03.023599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:22.415 [2024-10-14 14:42:03.023875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.415 [2024-10-14 14:42:03.023891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.415 [2024-10-14 14:42:03.033977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:22.415 [2024-10-14 14:42:03.034262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.415 [2024-10-14 14:42:03.034278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.415 [2024-10-14 14:42:03.045708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:22.415 [2024-10-14 14:42:03.045923] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.415 [2024-10-14 14:42:03.045939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.415 [2024-10-14 14:42:03.054103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:22.415 [2024-10-14 14:42:03.054175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.415 [2024-10-14 14:42:03.054191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.415 [2024-10-14 14:42:03.059977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:22.415 [2024-10-14 14:42:03.060053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.415 [2024-10-14 14:42:03.060076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.415 [2024-10-14 14:42:03.069627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:22.415 [2024-10-14 14:42:03.069937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.415 [2024-10-14 14:42:03.069953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.415 [2024-10-14 14:42:03.079561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:22.415 [2024-10-14 14:42:03.079621] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.415 [2024-10-14 14:42:03.079636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.415 [2024-10-14 14:42:03.086403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:22.415 [2024-10-14 14:42:03.086691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.415 [2024-10-14 14:42:03.086708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.415 [2024-10-14 14:42:03.095059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:22.415 [2024-10-14 14:42:03.095133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.415 [2024-10-14 14:42:03.095148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.415 [2024-10-14 14:42:03.102380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:22.415 [2024-10-14 14:42:03.102452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.415 [2024-10-14 14:42:03.102468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.415 [2024-10-14 14:42:03.109894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with 
pdu=0x2000166fef90 00:28:22.415 [2024-10-14 14:42:03.109962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.415 [2024-10-14 14:42:03.109978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.415 [2024-10-14 14:42:03.114950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:22.415 [2024-10-14 14:42:03.115056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.415 [2024-10-14 14:42:03.115076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.415 [2024-10-14 14:42:03.123406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:22.415 [2024-10-14 14:42:03.123507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.415 [2024-10-14 14:42:03.123522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.415 [2024-10-14 14:42:03.133783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:22.415 [2024-10-14 14:42:03.133860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.415 [2024-10-14 14:42:03.133876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.415 [2024-10-14 14:42:03.138633] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:22.415 [2024-10-14 14:42:03.138712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.415 [2024-10-14 14:42:03.138727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.677 [2024-10-14 14:42:03.145848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:22.677 [2024-10-14 14:42:03.145923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.677 [2024-10-14 14:42:03.145939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.677 [2024-10-14 14:42:03.151754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:22.677 [2024-10-14 14:42:03.151833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.677 [2024-10-14 14:42:03.151849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.677 [2024-10-14 14:42:03.158706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:22.677 [2024-10-14 14:42:03.158778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.677 [2024-10-14 14:42:03.158793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.677 [2024-10-14 14:42:03.167989] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:22.677 [2024-10-14 14:42:03.168301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.677 [2024-10-14 14:42:03.168317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.677 [2024-10-14 14:42:03.174229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:22.677 [2024-10-14 14:42:03.174292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.677 [2024-10-14 14:42:03.174308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.677 [2024-10-14 14:42:03.179744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:22.677 [2024-10-14 14:42:03.179793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.677 [2024-10-14 14:42:03.179808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.677 [2024-10-14 14:42:03.188178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:22.677 [2024-10-14 14:42:03.188468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.677 [2024-10-14 14:42:03.188484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:28:22.677 [2024-10-14 14:42:03.192577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:22.677 [2024-10-14 14:42:03.192631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.677 [2024-10-14 14:42:03.192646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.677 [2024-10-14 14:42:03.199580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:22.677 [2024-10-14 14:42:03.199664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.677 [2024-10-14 14:42:03.199679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.677 [2024-10-14 14:42:03.206588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:22.677 [2024-10-14 14:42:03.206645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.677 [2024-10-14 14:42:03.206663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.677 [2024-10-14 14:42:03.211882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:22.677 [2024-10-14 14:42:03.211946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.677 [2024-10-14 14:42:03.211961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.677 [2024-10-14 14:42:03.218633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:22.677 [2024-10-14 14:42:03.218688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.677 [2024-10-14 14:42:03.218703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.677 [2024-10-14 14:42:03.223822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:22.677 [2024-10-14 14:42:03.224076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.677 [2024-10-14 14:42:03.224092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:22.677 [2024-10-14 14:42:03.231276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:22.677 [2024-10-14 14:42:03.231333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.677 [2024-10-14 14:42:03.231348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:22.677 [2024-10-14 14:42:03.235210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:22.677 [2024-10-14 14:42:03.235271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.677 [2024-10-14 14:42:03.235286] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:22.677 [2024-10-14 14:42:03.241010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x61adc0) with pdu=0x2000166fef90 00:28:22.677 [2024-10-14 14:42:03.241087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:22.677 [2024-10-14 14:42:03.241103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:22.677 4282.00 IOPS, 535.25 MiB/s 00:28:22.677 Latency(us) 00:28:22.677 [2024-10-14T12:42:03.404Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:22.677 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:22.677 nvme0n1 : 2.00 4282.44 535.30 0.00 0.00 3731.13 1536.00 14854.83 00:28:22.677 [2024-10-14T12:42:03.404Z] =================================================================================================================== 00:28:22.677 [2024-10-14T12:42:03.404Z] Total : 4282.44 535.30 0.00 0.00 3731.13 1536.00 14854.83 00:28:22.677 { 00:28:22.677 "results": [ 00:28:22.677 { 00:28:22.677 "job": "nvme0n1", 00:28:22.677 "core_mask": "0x2", 00:28:22.677 "workload": "randwrite", 00:28:22.677 "status": "finished", 00:28:22.677 "queue_depth": 16, 00:28:22.677 "io_size": 131072, 00:28:22.677 "runtime": 2.004466, 00:28:22.677 "iops": 4282.4373174700895, 00:28:22.677 "mibps": 535.3046646837612, 00:28:22.677 "io_failed": 0, 00:28:22.677 "io_timeout": 0, 00:28:22.677 "avg_latency_us": 3731.1296178937555, 00:28:22.677 "min_latency_us": 1536.0, 00:28:22.677 "max_latency_us": 14854.826666666666 00:28:22.677 } 00:28:22.677 ], 00:28:22.677 "core_count": 1 00:28:22.677 } 00:28:22.677 14:42:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:22.677 
14:42:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:22.677 14:42:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:22.677 | .driver_specific 00:28:22.677 | .nvme_error 00:28:22.677 | .status_code 00:28:22.677 | .command_transient_transport_error' 00:28:22.677 14:42:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:22.938 14:42:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 276 > 0 )) 00:28:22.938 14:42:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3564129 00:28:22.938 14:42:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 3564129 ']' 00:28:22.938 14:42:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 3564129 00:28:22.938 14:42:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:28:22.938 14:42:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:22.938 14:42:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3564129 00:28:22.938 14:42:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:22.938 14:42:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:22.938 14:42:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3564129' 00:28:22.938 killing process with pid 3564129 00:28:22.938 14:42:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@969 -- # kill 3564129 00:28:22.938 Received shutdown signal, test time was about 2.000000 seconds 00:28:22.938 00:28:22.938 Latency(us) 00:28:22.938 [2024-10-14T12:42:03.665Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:22.938 [2024-10-14T12:42:03.665Z] =================================================================================================================== 00:28:22.938 [2024-10-14T12:42:03.666Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:22.939 14:42:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 3564129 00:28:22.939 14:42:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3561857 00:28:22.939 14:42:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 3561857 ']' 00:28:22.939 14:42:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 3561857 00:28:22.939 14:42:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:28:22.939 14:42:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:22.939 14:42:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3561857 00:28:23.199 14:42:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:23.199 14:42:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:23.199 14:42:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3561857' 00:28:23.199 killing process with pid 3561857 00:28:23.199 14:42:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 3561857 00:28:23.199 14:42:03 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 3561857 00:28:23.199 00:28:23.199 real 0m15.236s 00:28:23.199 user 0m30.637s 00:28:23.199 sys 0m3.402s 00:28:23.199 14:42:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:23.199 14:42:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:23.199 ************************************ 00:28:23.199 END TEST nvmf_digest_error 00:28:23.199 ************************************ 00:28:23.199 14:42:03 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:28:23.199 14:42:03 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:28:23.199 14:42:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@514 -- # nvmfcleanup 00:28:23.199 14:42:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:28:23.199 14:42:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:23.200 14:42:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:28:23.200 14:42:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:23.200 14:42:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:23.200 rmmod nvme_tcp 00:28:23.200 rmmod nvme_fabrics 00:28:23.200 rmmod nvme_keyring 00:28:23.200 14:42:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:23.200 14:42:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:28:23.200 14:42:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:28:23.200 14:42:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@515 -- # '[' -n 3561857 ']' 00:28:23.200 14:42:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # killprocess 3561857 00:28:23.200 14:42:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 
-- # '[' -z 3561857 ']' 00:28:23.200 14:42:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 3561857 00:28:23.200 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3561857) - No such process 00:28:23.200 14:42:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 3561857 is not found' 00:28:23.200 Process with pid 3561857 is not found 00:28:23.200 14:42:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:28:23.200 14:42:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:28:23.200 14:42:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:28:23.200 14:42:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:28:23.200 14:42:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # iptables-save 00:28:23.200 14:42:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:28:23.200 14:42:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # iptables-restore 00:28:23.200 14:42:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:23.200 14:42:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:23.200 14:42:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:23.200 14:42:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:23.200 14:42:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:25.745 14:42:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:25.745 00:28:25.745 real 0m41.851s 00:28:25.745 user 1m5.637s 00:28:25.745 sys 0m12.685s 00:28:25.745 14:42:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:28:25.745 14:42:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:25.745 ************************************ 00:28:25.745 END TEST nvmf_digest 00:28:25.745 ************************************ 00:28:25.745 14:42:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:28:25.745 14:42:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:28:25.745 14:42:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:28:25.745 14:42:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:25.745 14:42:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:25.745 14:42:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:25.745 14:42:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.745 ************************************ 00:28:25.745 START TEST nvmf_bdevperf 00:28:25.745 ************************************ 00:28:25.745 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:25.745 * Looking for test storage... 
00:28:25.745 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:25.745 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:25.745 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lcov --version 00:28:25.745 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:25.745 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:25.745 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:25.745 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:25.746 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:25.746 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:28:25.746 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:28:25.746 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:28:25.746 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:28:25.746 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:28:25.746 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:28:25.746 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:28:25.746 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:25.746 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:28:25.746 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:28:25.746 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:25.746 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:25.746 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:28:25.746 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:28:25.746 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:25.746 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:28:25.746 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:28:25.746 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:28:25.746 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:28:25.746 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:25.746 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:28:25.746 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:28:25.746 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:25.746 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:25.746 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:28:25.746 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:25.746 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:25.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:25.746 --rc genhtml_branch_coverage=1 00:28:25.746 --rc genhtml_function_coverage=1 00:28:25.746 --rc genhtml_legend=1 00:28:25.746 --rc geninfo_all_blocks=1 00:28:25.746 --rc geninfo_unexecuted_blocks=1 00:28:25.746 00:28:25.746 ' 00:28:25.746 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- 
# LCOV_OPTS=' 00:28:25.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:25.746 --rc genhtml_branch_coverage=1 00:28:25.746 --rc genhtml_function_coverage=1 00:28:25.746 --rc genhtml_legend=1 00:28:25.746 --rc geninfo_all_blocks=1 00:28:25.746 --rc geninfo_unexecuted_blocks=1 00:28:25.746 00:28:25.746 ' 00:28:25.746 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:25.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:25.746 --rc genhtml_branch_coverage=1 00:28:25.746 --rc genhtml_function_coverage=1 00:28:25.746 --rc genhtml_legend=1 00:28:25.746 --rc geninfo_all_blocks=1 00:28:25.746 --rc geninfo_unexecuted_blocks=1 00:28:25.746 00:28:25.746 ' 00:28:25.746 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:25.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:25.746 --rc genhtml_branch_coverage=1 00:28:25.746 --rc genhtml_function_coverage=1 00:28:25.746 --rc genhtml_legend=1 00:28:25.746 --rc geninfo_all_blocks=1 00:28:25.746 --rc geninfo_unexecuted_blocks=1 00:28:25.746 00:28:25.746 ' 00:28:25.746 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:25.746 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:28:25.746 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:25.746 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:25.746 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:25.746 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:25.746 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:25.746 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:28:25.746 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:25.746 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:25.746 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:25.746 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:25.746 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:25.746 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:25.746 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:25.746 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:25.746 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:25.746 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:25.746 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:25.746 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:28:25.746 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:25.746 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:25.746 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:25.746 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.746 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.746 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.746 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 
-- # export PATH 00:28:25.746 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.746 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:28:25.746 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:25.746 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:25.746 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:25.746 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:25.746 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:25.746 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:25.746 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:25.746 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:25.746 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:25.746 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:25.746 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:25.746 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:25.746 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:28:25.746 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:25.746 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:25.746 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # prepare_net_devs 00:28:25.746 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:28:25.746 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:25.746 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:25.746 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:25.746 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:25.746 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:28:25.746 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:28:25.746 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:28:25.746 14:42:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:33.883 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:33.883 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:28:33.883 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:33.883 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:33.883 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:33.883 14:42:13 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:33.883 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:33.883 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:28:33.883 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:33.883 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:28:33.883 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:28:33.883 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:28:33.883 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:28:33.883 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:28:33.883 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:28:33.883 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:33.883 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:33.883 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:33.883 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:33.883 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:33.883 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:33.883 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:33.883 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:33.883 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:33.883 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:33.883 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:33.883 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:33.883 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:33.883 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:33.883 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:33.883 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:33.883 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:33.883 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:33.883 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:33.883 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:33.883 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:33.883 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:33.883 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:33.883 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:33.884 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:33.884 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:33.884 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:33.884 
14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:33.884 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:33.884 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:33.884 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:33.884 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:33.884 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:33.884 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:33.884 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:33.884 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:33.884 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:33.884 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:33.884 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:33.884 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:33.884 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:33.884 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:33.884 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:33.884 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:33.884 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:33.884 Found net devices under 0000:31:00.0: cvl_0_0 00:28:33.884 14:42:13 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:33.884 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:33.884 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:33.884 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:33.884 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:33.884 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:33.884 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:33.884 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:33.884 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:33.884 Found net devices under 0000:31:00.1: cvl_0_1 00:28:33.884 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:33.884 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:28:33.884 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # is_hw=yes 00:28:33.884 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:28:33.884 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:28:33.884 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:28:33.884 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:33.884 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:33.884 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:28:33.884 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:33.884 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:33.884 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:33.884 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:33.884 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:33.884 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:33.884 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:33.884 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:33.884 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:33.884 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:33.884 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:33.884 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:33.884 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:33.884 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:33.884 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:33.884 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:33.884 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:28:33.884 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:33.884 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:33.884 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:33.884 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:33.884 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.614 ms 00:28:33.884 00:28:33.884 --- 10.0.0.2 ping statistics --- 00:28:33.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:33.884 rtt min/avg/max/mdev = 0.614/0.614/0.614/0.000 ms 00:28:33.884 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:33.884 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:33.884 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:28:33.884 00:28:33.884 --- 10.0.0.1 ping statistics --- 00:28:33.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:33.884 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:28:33.884 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:33.884 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # return 0 00:28:33.884 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:28:33.884 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:33.884 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:28:33.884 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:28:33.884 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@491 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:33.884 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:28:33.884 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:28:33.884 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:28:33.884 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:33.884 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:33.884 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:33.884 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:33.884 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # nvmfpid=3569706 00:28:33.884 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # waitforlisten 3569706 00:28:33.884 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:33.884 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 3569706 ']' 00:28:33.884 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:33.884 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:33.884 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:33.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:33.884 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:33.884 14:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:33.884 [2024-10-14 14:42:13.963403] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:28:33.884 [2024-10-14 14:42:13.963468] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:33.884 [2024-10-14 14:42:14.053441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:33.884 [2024-10-14 14:42:14.106208] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:33.884 [2024-10-14 14:42:14.106258] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:33.884 [2024-10-14 14:42:14.106267] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:33.884 [2024-10-14 14:42:14.106274] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:33.884 [2024-10-14 14:42:14.106281] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:33.884 [2024-10-14 14:42:14.108436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:33.884 [2024-10-14 14:42:14.108604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:33.884 [2024-10-14 14:42:14.108605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:34.145 14:42:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:34.145 14:42:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:28:34.146 14:42:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:34.146 14:42:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:34.146 14:42:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:34.146 14:42:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:34.146 14:42:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:34.146 14:42:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.146 14:42:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:34.146 [2024-10-14 14:42:14.811248] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:34.146 14:42:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.146 14:42:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:34.146 14:42:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.146 14:42:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:34.146 Malloc0 00:28:34.146 14:42:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:28:34.146 14:42:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:34.146 14:42:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.146 14:42:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:34.146 14:42:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.146 14:42:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:34.146 14:42:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.146 14:42:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:34.146 14:42:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.146 14:42:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:34.146 14:42:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.146 14:42:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:34.146 [2024-10-14 14:42:14.875088] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:34.406 14:42:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.406 14:42:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:28:34.406 14:42:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:28:34.406 14:42:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # config=() 00:28:34.406 
14:42:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # local subsystem config 00:28:34.406 14:42:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:34.407 14:42:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:34.407 { 00:28:34.407 "params": { 00:28:34.407 "name": "Nvme$subsystem", 00:28:34.407 "trtype": "$TEST_TRANSPORT", 00:28:34.407 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:34.407 "adrfam": "ipv4", 00:28:34.407 "trsvcid": "$NVMF_PORT", 00:28:34.407 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:34.407 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:34.407 "hdgst": ${hdgst:-false}, 00:28:34.407 "ddgst": ${ddgst:-false} 00:28:34.407 }, 00:28:34.407 "method": "bdev_nvme_attach_controller" 00:28:34.407 } 00:28:34.407 EOF 00:28:34.407 )") 00:28:34.407 14:42:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # cat 00:28:34.407 14:42:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # jq . 00:28:34.407 14:42:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@583 -- # IFS=, 00:28:34.407 14:42:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:28:34.407 "params": { 00:28:34.407 "name": "Nvme1", 00:28:34.407 "trtype": "tcp", 00:28:34.407 "traddr": "10.0.0.2", 00:28:34.407 "adrfam": "ipv4", 00:28:34.407 "trsvcid": "4420", 00:28:34.407 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:34.407 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:34.407 "hdgst": false, 00:28:34.407 "ddgst": false 00:28:34.407 }, 00:28:34.407 "method": "bdev_nvme_attach_controller" 00:28:34.407 }' 00:28:34.407 [2024-10-14 14:42:14.939824] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 
00:28:34.407 [2024-10-14 14:42:14.939878] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3569792 ] 00:28:34.407 [2024-10-14 14:42:15.001330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:34.407 [2024-10-14 14:42:15.037961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:34.667 Running I/O for 1 seconds... 00:28:36.049 9298.00 IOPS, 36.32 MiB/s 00:28:36.049 Latency(us) 00:28:36.049 [2024-10-14T12:42:16.776Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:36.049 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:36.049 Verification LBA range: start 0x0 length 0x4000 00:28:36.049 Nvme1n1 : 1.01 9355.51 36.54 0.00 0.00 13622.14 2375.68 12506.45 00:28:36.049 [2024-10-14T12:42:16.776Z] =================================================================================================================== 00:28:36.049 [2024-10-14T12:42:16.776Z] Total : 9355.51 36.54 0.00 0.00 13622.14 2375.68 12506.45 00:28:36.049 14:42:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3570147 00:28:36.049 14:42:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:28:36.049 14:42:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:28:36.049 14:42:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:28:36.049 14:42:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # config=() 00:28:36.049 14:42:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # local subsystem config 00:28:36.049 14:42:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # for 
subsystem in "${@:-1}" 00:28:36.049 14:42:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:36.049 { 00:28:36.049 "params": { 00:28:36.049 "name": "Nvme$subsystem", 00:28:36.049 "trtype": "$TEST_TRANSPORT", 00:28:36.049 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:36.049 "adrfam": "ipv4", 00:28:36.049 "trsvcid": "$NVMF_PORT", 00:28:36.049 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:36.049 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:36.049 "hdgst": ${hdgst:-false}, 00:28:36.049 "ddgst": ${ddgst:-false} 00:28:36.049 }, 00:28:36.049 "method": "bdev_nvme_attach_controller" 00:28:36.049 } 00:28:36.049 EOF 00:28:36.049 )") 00:28:36.049 14:42:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # cat 00:28:36.049 14:42:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # jq . 00:28:36.049 14:42:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@583 -- # IFS=, 00:28:36.049 14:42:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:28:36.049 "params": { 00:28:36.049 "name": "Nvme1", 00:28:36.049 "trtype": "tcp", 00:28:36.049 "traddr": "10.0.0.2", 00:28:36.049 "adrfam": "ipv4", 00:28:36.049 "trsvcid": "4420", 00:28:36.049 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:36.049 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:36.049 "hdgst": false, 00:28:36.049 "ddgst": false 00:28:36.049 }, 00:28:36.049 "method": "bdev_nvme_attach_controller" 00:28:36.049 }' 00:28:36.049 [2024-10-14 14:42:16.572908] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 
00:28:36.049 [2024-10-14 14:42:16.572964] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3570147 ] 00:28:36.049 [2024-10-14 14:42:16.634795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:36.049 [2024-10-14 14:42:16.669621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:36.309 Running I/O for 15 seconds... 00:28:38.189 11138.00 IOPS, 43.51 MiB/s [2024-10-14T12:42:19.860Z] 11301.00 IOPS, 44.14 MiB/s [2024-10-14T12:42:19.860Z] 14:42:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3569706 00:28:39.133 14:42:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:28:39.133 [2024-10-14 14:42:19.535990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:114208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.133 [2024-10-14 14:42:19.536030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.133 [2024-10-14 14:42:19.536050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:114216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.133 [2024-10-14 14:42:19.536061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.133 [2024-10-14 14:42:19.536076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:114224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.133 [2024-10-14 14:42:19.536087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.133 [2024-10-14 14:42:19.536100] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:37 nsid:1 lba:114232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.133 [2024-10-14 14:42:19.536110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.133 [2024-10-14 14:42:19.536120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:114240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.133 [2024-10-14 14:42:19.536127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.133 [2024-10-14 14:42:19.536138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:114248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.133 [2024-10-14 14:42:19.536146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.133 [2024-10-14 14:42:19.536164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:114256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.133 [2024-10-14 14:42:19.536172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.133 [2024-10-14 14:42:19.536182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:114264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.133 [2024-10-14 14:42:19.536190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.133 [2024-10-14 14:42:19.536200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:114272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.133 [2024-10-14 14:42:19.536208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:39.133 [2024-10-14 14:42:19.536217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:114280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.133 [2024-10-14 14:42:19.536224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.133 [2024-10-14 14:42:19.536237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:114288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.133 [2024-10-14 14:42:19.536246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.133 [2024-10-14 14:42:19.536258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:114296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.133 [2024-10-14 14:42:19.536266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.133 [2024-10-14 14:42:19.536277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:114304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.133 [2024-10-14 14:42:19.536287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.133 [2024-10-14 14:42:19.536297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:114312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.133 [2024-10-14 14:42:19.536306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.133 [2024-10-14 14:42:19.536317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:114320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.133 [2024-10-14 
14:42:19.536326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.133 [2024-10-14 14:42:19.536338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:114328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.133 [2024-10-14 14:42:19.536349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.133 [2024-10-14 14:42:19.536360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:114336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.133 [2024-10-14 14:42:19.536369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.133 [2024-10-14 14:42:19.536381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:114344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.133 [2024-10-14 14:42:19.536389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.133 [2024-10-14 14:42:19.536400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:114352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.133 [2024-10-14 14:42:19.536413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.133 [2024-10-14 14:42:19.536423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:114360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.133 [2024-10-14 14:42:19.536431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.133 [2024-10-14 14:42:19.536441] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:24 nsid:1 lba:114368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.133 [2024-10-14 14:42:19.536448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.133 [2024-10-14 14:42:19.536457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:114376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.133 [2024-10-14 14:42:19.536465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.133 [2024-10-14 14:42:19.536474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:114384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.133 [2024-10-14 14:42:19.536481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.133 [2024-10-14 14:42:19.536491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:114392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.133 [2024-10-14 14:42:19.536498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.133 [2024-10-14 14:42:19.536508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:114400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.133 [2024-10-14 14:42:19.536515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.133 [2024-10-14 14:42:19.536525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:114408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.133 [2024-10-14 14:42:19.536532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:39.133 [2024-10-14 14:42:19.536542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:114416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.133 [2024-10-14 14:42:19.536549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.134 [2024-10-14 14:42:19.536558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:114424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.134 [2024-10-14 14:42:19.536565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.134 [2024-10-14 14:42:19.536575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:114432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.134 [2024-10-14 14:42:19.536582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.134 [2024-10-14 14:42:19.536591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:114440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.134 [2024-10-14 14:42:19.536598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.134 [2024-10-14 14:42:19.536608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:114448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.134 [2024-10-14 14:42:19.536615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.134 [2024-10-14 14:42:19.536626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:114456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.134 [2024-10-14 
14:42:19.536633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.134 [2024-10-14 14:42:19.536642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:114464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.134 [2024-10-14 14:42:19.536650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.134 [2024-10-14 14:42:19.536661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:114472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.134 [2024-10-14 14:42:19.536668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.134 [2024-10-14 14:42:19.536678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:114480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.134 [2024-10-14 14:42:19.536685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.134 [2024-10-14 14:42:19.536695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:114488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.134 [2024-10-14 14:42:19.536702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.134 [2024-10-14 14:42:19.536711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:114496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.134 [2024-10-14 14:42:19.536719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.134 [2024-10-14 14:42:19.536728] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:79 nsid:1 lba:114504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.134 [2024-10-14 14:42:19.536735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.134 [2024-10-14 14:42:19.536744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:114512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.134 [2024-10-14 14:42:19.536752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.134 [2024-10-14 14:42:19.536761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:114520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.134 [2024-10-14 14:42:19.536769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.134 [2024-10-14 14:42:19.536778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:114528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.134 [2024-10-14 14:42:19.536785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.134 [2024-10-14 14:42:19.536794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:114536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.134 [2024-10-14 14:42:19.536802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.134 [2024-10-14 14:42:19.536811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:114544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.134 [2024-10-14 14:42:19.536818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:39.134 [2024-10-14 14:42:19.536828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:114552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.134 [2024-10-14 14:42:19.536836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.134 [2024-10-14 14:42:19.536846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.134 [2024-10-14 14:42:19.536853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.134 [2024-10-14 14:42:19.536862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:114568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.134 [2024-10-14 14:42:19.536869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.134 [2024-10-14 14:42:19.536879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:114576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.134 [2024-10-14 14:42:19.536887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.134 [2024-10-14 14:42:19.536897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:114584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.134 [2024-10-14 14:42:19.536904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.134 [2024-10-14 14:42:19.536913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:114592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.134 [2024-10-14 
14:42:19.536920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.134 [2024-10-14 14:42:19.536930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:114600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.134 [2024-10-14 14:42:19.536937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.134 [2024-10-14 14:42:19.536947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:114608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.134 [2024-10-14 14:42:19.536954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.134 [2024-10-14 14:42:19.536964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:114616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.134 [2024-10-14 14:42:19.536971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.134 [2024-10-14 14:42:19.536981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:114624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.134 [2024-10-14 14:42:19.536988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.134 [2024-10-14 14:42:19.536997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:114632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.134 [2024-10-14 14:42:19.537004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.134 [2024-10-14 14:42:19.537014] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:58 nsid:1 lba:114640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.134 [2024-10-14 14:42:19.537021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.134 [2024-10-14 14:42:19.537030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:114648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.134 [2024-10-14 14:42:19.537037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.134 [2024-10-14 14:42:19.537048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:114656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.134 [2024-10-14 14:42:19.537055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.134 [2024-10-14 14:42:19.537162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:114664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.134 [2024-10-14 14:42:19.537170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.134 [2024-10-14 14:42:19.537179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:114672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.134 [2024-10-14 14:42:19.537186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.134 [2024-10-14 14:42:19.537196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:114680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.134 [2024-10-14 14:42:19.537203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:39.134 [2024-10-14 14:42:19.537212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:114688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.134 [2024-10-14 14:42:19.537220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.134 [2024-10-14 14:42:19.537229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:114696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.134 [2024-10-14 14:42:19.537236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.134 [2024-10-14 14:42:19.537246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:114704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.134 [2024-10-14 14:42:19.537253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.134 [2024-10-14 14:42:19.537262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:114712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.134 [2024-10-14 14:42:19.537269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.134 [2024-10-14 14:42:19.537279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:114720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.134 [2024-10-14 14:42:19.537286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.134 [2024-10-14 14:42:19.537296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:114728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.134 [2024-10-14 
14:42:19.537303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.134 [2024-10-14 14:42:19.537312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:114736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.134 [2024-10-14 14:42:19.537320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.134 [2024-10-14 14:42:19.537330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:114744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.134 [2024-10-14 14:42:19.537338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.134 [2024-10-14 14:42:19.537348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:114752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.135 [2024-10-14 14:42:19.537357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.135 [2024-10-14 14:42:19.537367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:114760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.135 [2024-10-14 14:42:19.537374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.135 [2024-10-14 14:42:19.537383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:114768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.135 [2024-10-14 14:42:19.537391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.135 [2024-10-14 14:42:19.537401] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:55 nsid:1 lba:114776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.135 [2024-10-14 14:42:19.537408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.135 [2024-10-14 14:42:19.537417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:114784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.135 [2024-10-14 14:42:19.537424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.135 [2024-10-14 14:42:19.537434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:114792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.135 [2024-10-14 14:42:19.537441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.135 [2024-10-14 14:42:19.537451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:114800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.135 [2024-10-14 14:42:19.537458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.135 [2024-10-14 14:42:19.537467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:114808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.135 [2024-10-14 14:42:19.537474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.135 [2024-10-14 14:42:19.537484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:114816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.135 [2024-10-14 14:42:19.537491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:39.135 [2024-10-14 14:42:19.537501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:114824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.135 [2024-10-14 14:42:19.537508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.135 [2024-10-14 14:42:19.537518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:114832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.135 [2024-10-14 14:42:19.537525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.135 [2024-10-14 14:42:19.537534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:114840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.135 [2024-10-14 14:42:19.537542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.135 [2024-10-14 14:42:19.537552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:115168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:39.135 [2024-10-14 14:42:19.537561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.135 [2024-10-14 14:42:19.537573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:114848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.135 [2024-10-14 14:42:19.537580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.135 [2024-10-14 14:42:19.537589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:114856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.135 [2024-10-14 
14:42:19.537596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.135 [2024-10-14 14:42:19.537606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:114864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.135 [2024-10-14 14:42:19.537613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.135 [2024-10-14 14:42:19.537623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:114872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.135 [2024-10-14 14:42:19.537629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.135 [2024-10-14 14:42:19.537639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:114880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.135 [2024-10-14 14:42:19.537646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.135 [2024-10-14 14:42:19.537656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:114888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.135 [2024-10-14 14:42:19.537663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.135 [2024-10-14 14:42:19.537673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:114896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.135 [2024-10-14 14:42:19.537680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:39.135 [2024-10-14 14:42:19.537689] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:59 nsid:1 lba:114904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.135 [2024-10-14 14:42:19.537696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:39.135 [2024-10-14 14:42:19.537706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:114912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.135 [2024-10-14 14:42:19.537713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:39.135 [2024-10-14 14:42:19.537722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:114920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.135 [2024-10-14 14:42:19.537730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:39.135 [2024-10-14 14:42:19.537739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:114928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.135 [2024-10-14 14:42:19.537746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:39.135 [2024-10-14 14:42:19.537755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:114936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.135 [2024-10-14 14:42:19.537762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:39.135 [2024-10-14 14:42:19.537772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:114944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.135 [2024-10-14 14:42:19.537781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:39.135 [2024-10-14 14:42:19.537790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:114952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.135 [2024-10-14 14:42:19.537798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:39.135 [2024-10-14 14:42:19.537807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:114960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.135 [2024-10-14 14:42:19.537814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:39.135 [2024-10-14 14:42:19.537824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:114968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.135 [2024-10-14 14:42:19.537831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:39.135 [2024-10-14 14:42:19.537841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:114976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.135 [2024-10-14 14:42:19.537848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:39.135 [2024-10-14 14:42:19.537858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:114984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.135 [2024-10-14 14:42:19.537865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:39.135 [2024-10-14 14:42:19.537875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:114992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.135 [2024-10-14 14:42:19.537882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:39.135 [2024-10-14 14:42:19.537891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:115000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.135 [2024-10-14 14:42:19.537898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:39.135 [2024-10-14 14:42:19.537908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:115008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.135 [2024-10-14 14:42:19.537915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:39.135 [2024-10-14 14:42:19.537925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:115016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.135 [2024-10-14 14:42:19.537932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:39.135 [2024-10-14 14:42:19.537942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:115024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.135 [2024-10-14 14:42:19.537949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:39.135 [2024-10-14 14:42:19.537959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:115032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.135 [2024-10-14 14:42:19.537966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:39.135 [2024-10-14 14:42:19.537976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:115040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.135 [2024-10-14 14:42:19.537983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:39.135 [2024-10-14 14:42:19.537992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:115048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.135 [2024-10-14 14:42:19.538002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:39.135 [2024-10-14 14:42:19.538012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:115056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.135 [2024-10-14 14:42:19.538018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:39.135 [2024-10-14 14:42:19.538028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:115064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.135 [2024-10-14 14:42:19.538035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:39.135 [2024-10-14 14:42:19.538045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:115072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.135 [2024-10-14 14:42:19.538052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:39.135 [2024-10-14 14:42:19.538061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:115080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.136 [2024-10-14 14:42:19.538073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:39.136 [2024-10-14 14:42:19.538083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:115088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.136 [2024-10-14 14:42:19.538090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:39.136 [2024-10-14 14:42:19.538100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:115096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.136 [2024-10-14 14:42:19.538107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:39.136 [2024-10-14 14:42:19.538116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:115104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.136 [2024-10-14 14:42:19.538123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:39.136 [2024-10-14 14:42:19.538132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:115112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.136 [2024-10-14 14:42:19.538140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:39.136 [2024-10-14 14:42:19.538149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:115120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.136 [2024-10-14 14:42:19.538156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:39.136 [2024-10-14 14:42:19.538166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:115128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.136 [2024-10-14 14:42:19.538173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:39.136 [2024-10-14 14:42:19.538182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:115136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.136 [2024-10-14 14:42:19.538189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:39.136 [2024-10-14 14:42:19.538198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:115144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.136 [2024-10-14 14:42:19.538206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:39.136 [2024-10-14 14:42:19.538217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:115152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.136 [2024-10-14 14:42:19.538225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:39.136 [2024-10-14 14:42:19.538234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:115160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:39.136 [2024-10-14 14:42:19.538241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:39.136 [2024-10-14 14:42:19.538251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:115176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.136 [2024-10-14 14:42:19.538258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:39.136 [2024-10-14 14:42:19.538267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:115184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.136 [2024-10-14 14:42:19.538274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:39.136 [2024-10-14 14:42:19.538284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:115192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.136 [2024-10-14 14:42:19.538291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:39.136 [2024-10-14 14:42:19.538300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:115200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.136 [2024-10-14 14:42:19.538308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:39.136 [2024-10-14 14:42:19.538318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:115208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.136 [2024-10-14 14:42:19.538325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:39.136 [2024-10-14 14:42:19.538334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:115216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:39.136 [2024-10-14 14:42:19.538341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:39.136 [2024-10-14 14:42:19.538350] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad8d70 is same with the state(6) to be set
00:28:39.136 [2024-10-14 14:42:19.538359] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:28:39.136 [2024-10-14 14:42:19.538365] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:28:39.136 [2024-10-14 14:42:19.538372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115224 len:8 PRP1 0x0 PRP2 0x0
00:28:39.136 [2024-10-14 14:42:19.538380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:39.136 [2024-10-14 14:42:19.538419] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ad8d70 was disconnected and freed. reset controller.
00:28:39.136 [2024-10-14 14:42:19.541965] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:39.136 [2024-10-14 14:42:19.542016] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor
00:28:39.136 [2024-10-14 14:42:19.542692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.136 [2024-10-14 14:42:19.542709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420
00:28:39.136 [2024-10-14 14:42:19.542722] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set
00:28:39.136 [2024-10-14 14:42:19.542942] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor
00:28:39.136 [2024-10-14 14:42:19.543169] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:39.136 [2024-10-14 14:42:19.543179] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:39.136 [2024-10-14 14:42:19.543189] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:39.136 [2024-10-14 14:42:19.546744] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:39.136 [2024-10-14 14:42:19.556149] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.136 [2024-10-14 14:42:19.556811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.136 [2024-10-14 14:42:19.556849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:39.136 [2024-10-14 14:42:19.556861] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:39.136 [2024-10-14 14:42:19.557111] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:39.136 [2024-10-14 14:42:19.557333] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.136 [2024-10-14 14:42:19.557342] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.136 [2024-10-14 14:42:19.557351] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.136 [2024-10-14 14:42:19.560897] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.136 [2024-10-14 14:42:19.570102] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.136 [2024-10-14 14:42:19.570751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.136 [2024-10-14 14:42:19.570788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:39.136 [2024-10-14 14:42:19.570799] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:39.136 [2024-10-14 14:42:19.571039] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:39.136 [2024-10-14 14:42:19.571271] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.136 [2024-10-14 14:42:19.571281] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.136 [2024-10-14 14:42:19.571289] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.136 [2024-10-14 14:42:19.574835] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.136 [2024-10-14 14:42:19.584050] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.136 [2024-10-14 14:42:19.584699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.136 [2024-10-14 14:42:19.584737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:39.136 [2024-10-14 14:42:19.584748] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:39.136 [2024-10-14 14:42:19.584986] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:39.136 [2024-10-14 14:42:19.585215] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.136 [2024-10-14 14:42:19.585225] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.136 [2024-10-14 14:42:19.585237] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.136 [2024-10-14 14:42:19.588781] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.136 [2024-10-14 14:42:19.597964] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.136 [2024-10-14 14:42:19.598612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.136 [2024-10-14 14:42:19.598650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:39.136 [2024-10-14 14:42:19.598660] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:39.136 [2024-10-14 14:42:19.598899] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:39.136 [2024-10-14 14:42:19.599131] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.136 [2024-10-14 14:42:19.599150] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.136 [2024-10-14 14:42:19.599158] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.136 [2024-10-14 14:42:19.602703] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.136 [2024-10-14 14:42:19.611903] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.136 [2024-10-14 14:42:19.612539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.136 [2024-10-14 14:42:19.612577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:39.136 [2024-10-14 14:42:19.612588] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:39.136 [2024-10-14 14:42:19.612827] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:39.136 [2024-10-14 14:42:19.613049] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.136 [2024-10-14 14:42:19.613057] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.137 [2024-10-14 14:42:19.613075] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.137 [2024-10-14 14:42:19.616619] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.137 [2024-10-14 14:42:19.625814] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.137 [2024-10-14 14:42:19.626469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.137 [2024-10-14 14:42:19.626506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:39.137 [2024-10-14 14:42:19.626517] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:39.137 [2024-10-14 14:42:19.626755] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:39.137 [2024-10-14 14:42:19.626977] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.137 [2024-10-14 14:42:19.626985] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.137 [2024-10-14 14:42:19.626993] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.137 [2024-10-14 14:42:19.630545] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.137 [2024-10-14 14:42:19.639744] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.137 [2024-10-14 14:42:19.640392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.137 [2024-10-14 14:42:19.640429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:39.137 [2024-10-14 14:42:19.640440] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:39.137 [2024-10-14 14:42:19.640679] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:39.137 [2024-10-14 14:42:19.640901] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.137 [2024-10-14 14:42:19.640909] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.137 [2024-10-14 14:42:19.640917] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.137 [2024-10-14 14:42:19.644469] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.137 [2024-10-14 14:42:19.653661] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.137 [2024-10-14 14:42:19.654238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.137 [2024-10-14 14:42:19.654258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:39.137 [2024-10-14 14:42:19.654266] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:39.137 [2024-10-14 14:42:19.654485] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:39.137 [2024-10-14 14:42:19.654704] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.137 [2024-10-14 14:42:19.654712] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.137 [2024-10-14 14:42:19.654719] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.137 [2024-10-14 14:42:19.658264] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.137 [2024-10-14 14:42:19.667650] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.137 [2024-10-14 14:42:19.668214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.137 [2024-10-14 14:42:19.668231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:39.137 [2024-10-14 14:42:19.668239] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:39.137 [2024-10-14 14:42:19.668457] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:39.137 [2024-10-14 14:42:19.668675] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.137 [2024-10-14 14:42:19.668683] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.137 [2024-10-14 14:42:19.668690] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.137 [2024-10-14 14:42:19.672229] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.137 [2024-10-14 14:42:19.681626] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.137 [2024-10-14 14:42:19.682162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.137 [2024-10-14 14:42:19.682178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:39.137 [2024-10-14 14:42:19.682185] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:39.137 [2024-10-14 14:42:19.682409] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:39.137 [2024-10-14 14:42:19.682627] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.137 [2024-10-14 14:42:19.682635] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.137 [2024-10-14 14:42:19.682643] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.137 [2024-10-14 14:42:19.686184] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.137 [2024-10-14 14:42:19.695568] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.137 [2024-10-14 14:42:19.696224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.137 [2024-10-14 14:42:19.696262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:39.137 [2024-10-14 14:42:19.696273] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:39.137 [2024-10-14 14:42:19.696511] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:39.137 [2024-10-14 14:42:19.696733] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.137 [2024-10-14 14:42:19.696742] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.137 [2024-10-14 14:42:19.696749] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.137 [2024-10-14 14:42:19.700298] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.137 [2024-10-14 14:42:19.709491] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.137 [2024-10-14 14:42:19.710174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.137 [2024-10-14 14:42:19.710211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:39.137 [2024-10-14 14:42:19.710223] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:39.137 [2024-10-14 14:42:19.710465] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:39.137 [2024-10-14 14:42:19.710687] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.137 [2024-10-14 14:42:19.710696] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.137 [2024-10-14 14:42:19.710703] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.137 [2024-10-14 14:42:19.714258] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.137 [2024-10-14 14:42:19.723437] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.137 [2024-10-14 14:42:19.724035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.137 [2024-10-14 14:42:19.724080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:39.137 [2024-10-14 14:42:19.724091] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:39.137 [2024-10-14 14:42:19.724329] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:39.137 [2024-10-14 14:42:19.724552] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.137 [2024-10-14 14:42:19.724560] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.137 [2024-10-14 14:42:19.724572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.137 [2024-10-14 14:42:19.728121] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.137 [2024-10-14 14:42:19.737311] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.137 [2024-10-14 14:42:19.737862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.137 [2024-10-14 14:42:19.737900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:39.137 [2024-10-14 14:42:19.737910] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:39.138 [2024-10-14 14:42:19.738158] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:39.138 [2024-10-14 14:42:19.738382] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.138 [2024-10-14 14:42:19.738392] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.138 [2024-10-14 14:42:19.738399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.138 [2024-10-14 14:42:19.741944] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.138 [2024-10-14 14:42:19.751303] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.138 [2024-10-14 14:42:19.751982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.138 [2024-10-14 14:42:19.752019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:39.138 [2024-10-14 14:42:19.752031] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:39.138 [2024-10-14 14:42:19.752282] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:39.138 [2024-10-14 14:42:19.752505] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.138 [2024-10-14 14:42:19.752514] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.138 [2024-10-14 14:42:19.752522] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.138 [2024-10-14 14:42:19.756070] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.138 [2024-10-14 14:42:19.765264] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:39.138 [2024-10-14 14:42:19.765798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.138 [2024-10-14 14:42:19.765836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420
00:28:39.138 [2024-10-14 14:42:19.765846] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set
00:28:39.138 [2024-10-14 14:42:19.766095] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor
00:28:39.138 [2024-10-14 14:42:19.766318] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:39.138 [2024-10-14 14:42:19.766327] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:39.138 [2024-10-14 14:42:19.766334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:39.138 [2024-10-14 14:42:19.769879] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:39.138 [2024-10-14 14:42:19.779076] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:39.138 [2024-10-14 14:42:19.779749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.138 [2024-10-14 14:42:19.779794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420
00:28:39.138 [2024-10-14 14:42:19.779805] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set
00:28:39.138 [2024-10-14 14:42:19.780043] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor
00:28:39.138 [2024-10-14 14:42:19.780275] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:39.138 [2024-10-14 14:42:19.780285] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:39.138 [2024-10-14 14:42:19.780293] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:39.138 [2024-10-14 14:42:19.783846] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:39.138 [2024-10-14 14:42:19.793043] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:39.138 [2024-10-14 14:42:19.793688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.138 [2024-10-14 14:42:19.793726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420
00:28:39.138 [2024-10-14 14:42:19.793736] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set
00:28:39.138 [2024-10-14 14:42:19.793974] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor
00:28:39.138 [2024-10-14 14:42:19.794208] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:39.138 [2024-10-14 14:42:19.794219] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:39.138 [2024-10-14 14:42:19.794227] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:39.138 [2024-10-14 14:42:19.797780] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:39.138 [2024-10-14 14:42:19.806979] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:39.138 [2024-10-14 14:42:19.807619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.138 [2024-10-14 14:42:19.807656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420
00:28:39.138 [2024-10-14 14:42:19.807667] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set
00:28:39.138 [2024-10-14 14:42:19.807905] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor
00:28:39.138 [2024-10-14 14:42:19.808138] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:39.138 [2024-10-14 14:42:19.808148] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:39.138 [2024-10-14 14:42:19.808156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:39.138 [2024-10-14 14:42:19.811703] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:39.138 [2024-10-14 14:42:19.820902] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:39.138 [2024-10-14 14:42:19.821498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.138 [2024-10-14 14:42:19.821518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420
00:28:39.138 [2024-10-14 14:42:19.821526] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set
00:28:39.138 [2024-10-14 14:42:19.821751] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor
00:28:39.138 [2024-10-14 14:42:19.821976] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:39.138 [2024-10-14 14:42:19.821984] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:39.138 [2024-10-14 14:42:19.821992] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:39.138 10220.00 IOPS, 39.92 MiB/s [2024-10-14T12:42:19.865Z] [2024-10-14 14:42:19.827197] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:39.138 [2024-10-14 14:42:19.834722] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:39.138 [2024-10-14 14:42:19.835393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.138 [2024-10-14 14:42:19.835431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420
00:28:39.138 [2024-10-14 14:42:19.835441] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set
00:28:39.138 [2024-10-14 14:42:19.835679] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor
00:28:39.138 [2024-10-14 14:42:19.835902] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:39.138 [2024-10-14 14:42:19.835910] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:39.138 [2024-10-14 14:42:19.835918] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:39.138 [2024-10-14 14:42:19.839473] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:39.138 [2024-10-14 14:42:19.848668] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:39.138 [2024-10-14 14:42:19.849338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.138 [2024-10-14 14:42:19.849375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420
00:28:39.138 [2024-10-14 14:42:19.849386] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set
00:28:39.138 [2024-10-14 14:42:19.849624] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor
00:28:39.138 [2024-10-14 14:42:19.849847] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:39.138 [2024-10-14 14:42:19.849855] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:39.138 [2024-10-14 14:42:19.849863] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:39.138 [2024-10-14 14:42:19.853418] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:39.400 [2024-10-14 14:42:19.862619] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:39.400 [2024-10-14 14:42:19.863299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.400 [2024-10-14 14:42:19.863336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420
00:28:39.400 [2024-10-14 14:42:19.863347] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set
00:28:39.400 [2024-10-14 14:42:19.863586] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor
00:28:39.400 [2024-10-14 14:42:19.863808] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:39.400 [2024-10-14 14:42:19.863817] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:39.400 [2024-10-14 14:42:19.863825] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:39.400 [2024-10-14 14:42:19.867386] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:39.400 [2024-10-14 14:42:19.876573] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:39.400 [2024-10-14 14:42:19.877227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.400 [2024-10-14 14:42:19.877264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420
00:28:39.400 [2024-10-14 14:42:19.877276] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set
00:28:39.400 [2024-10-14 14:42:19.877516] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor
00:28:39.400 [2024-10-14 14:42:19.877738] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:39.400 [2024-10-14 14:42:19.877746] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:39.400 [2024-10-14 14:42:19.877754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:39.400 [2024-10-14 14:42:19.881316] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:39.400 [2024-10-14 14:42:19.890492] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:39.400 [2024-10-14 14:42:19.891160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.400 [2024-10-14 14:42:19.891197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420
00:28:39.400 [2024-10-14 14:42:19.891208] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set
00:28:39.401 [2024-10-14 14:42:19.891446] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor
00:28:39.401 [2024-10-14 14:42:19.891669] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:39.401 [2024-10-14 14:42:19.891677] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:39.401 [2024-10-14 14:42:19.891684] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:39.401 [2024-10-14 14:42:19.895238] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:39.401 [2024-10-14 14:42:19.904426] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:39.401 [2024-10-14 14:42:19.905101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.401 [2024-10-14 14:42:19.905139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420
00:28:39.401 [2024-10-14 14:42:19.905150] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set
00:28:39.401 [2024-10-14 14:42:19.905388] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor
00:28:39.401 [2024-10-14 14:42:19.905610] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:39.401 [2024-10-14 14:42:19.905619] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:39.401 [2024-10-14 14:42:19.905627] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:39.401 [2024-10-14 14:42:19.909179] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:39.401 [2024-10-14 14:42:19.918366] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:39.401 [2024-10-14 14:42:19.918930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.401 [2024-10-14 14:42:19.918967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420
00:28:39.401 [2024-10-14 14:42:19.918983] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set
00:28:39.401 [2024-10-14 14:42:19.919233] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor
00:28:39.401 [2024-10-14 14:42:19.919457] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:39.401 [2024-10-14 14:42:19.919466] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:39.401 [2024-10-14 14:42:19.919473] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:39.401 [2024-10-14 14:42:19.923017] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:39.401 [2024-10-14 14:42:19.932209] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:39.401 [2024-10-14 14:42:19.932881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.401 [2024-10-14 14:42:19.932918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420
00:28:39.401 [2024-10-14 14:42:19.932929] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set
00:28:39.401 [2024-10-14 14:42:19.933177] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor
00:28:39.401 [2024-10-14 14:42:19.933405] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:39.401 [2024-10-14 14:42:19.933415] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:39.401 [2024-10-14 14:42:19.933423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:39.401 [2024-10-14 14:42:19.936975] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:39.401 [2024-10-14 14:42:19.946166] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:39.401 [2024-10-14 14:42:19.946846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.401 [2024-10-14 14:42:19.946884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420
00:28:39.401 [2024-10-14 14:42:19.946896] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set
00:28:39.401 [2024-10-14 14:42:19.947146] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor
00:28:39.401 [2024-10-14 14:42:19.947369] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:39.401 [2024-10-14 14:42:19.947378] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:39.401 [2024-10-14 14:42:19.947386] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:39.401 [2024-10-14 14:42:19.950930] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:39.401 [2024-10-14 14:42:19.960118] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:39.401 [2024-10-14 14:42:19.960791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.401 [2024-10-14 14:42:19.960828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420
00:28:39.401 [2024-10-14 14:42:19.960839] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set
00:28:39.401 [2024-10-14 14:42:19.961086] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor
00:28:39.401 [2024-10-14 14:42:19.961309] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:39.401 [2024-10-14 14:42:19.961323] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:39.401 [2024-10-14 14:42:19.961331] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:39.401 [2024-10-14 14:42:19.964875] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:39.401 [2024-10-14 14:42:19.974074] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:39.401 [2024-10-14 14:42:19.974743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.401 [2024-10-14 14:42:19.974781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420
00:28:39.401 [2024-10-14 14:42:19.974792] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set
00:28:39.401 [2024-10-14 14:42:19.975031] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor
00:28:39.401 [2024-10-14 14:42:19.975264] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:39.401 [2024-10-14 14:42:19.975274] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:39.401 [2024-10-14 14:42:19.975281] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:39.401 [2024-10-14 14:42:19.978833] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:39.401 [2024-10-14 14:42:19.988016] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:39.401 [2024-10-14 14:42:19.988468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.401 [2024-10-14 14:42:19.988488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420
00:28:39.401 [2024-10-14 14:42:19.988496] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set
00:28:39.401 [2024-10-14 14:42:19.988716] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor
00:28:39.401 [2024-10-14 14:42:19.988935] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:39.401 [2024-10-14 14:42:19.988943] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:39.401 [2024-10-14 14:42:19.988950] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:39.401 [2024-10-14 14:42:19.992504] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:39.401 [2024-10-14 14:42:20.001901] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:39.401 [2024-10-14 14:42:20.002902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.401 [2024-10-14 14:42:20.002924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420
00:28:39.401 [2024-10-14 14:42:20.002932] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set
00:28:39.401 [2024-10-14 14:42:20.003160] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor
00:28:39.401 [2024-10-14 14:42:20.003379] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:39.401 [2024-10-14 14:42:20.003388] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:39.401 [2024-10-14 14:42:20.003395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:39.401 [2024-10-14 14:42:20.006937] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:39.401 [2024-10-14 14:42:20.015728] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:39.401 [2024-10-14 14:42:20.016184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.401 [2024-10-14 14:42:20.016201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420
00:28:39.401 [2024-10-14 14:42:20.016208] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set
00:28:39.401 [2024-10-14 14:42:20.016429] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor
00:28:39.401 [2024-10-14 14:42:20.016691] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:39.401 [2024-10-14 14:42:20.016706] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:39.401 [2024-10-14 14:42:20.016719] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:39.401 [2024-10-14 14:42:20.020399] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:39.401 [2024-10-14 14:42:20.029593] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:39.401 [2024-10-14 14:42:20.030193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.401 [2024-10-14 14:42:20.030232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420
00:28:39.401 [2024-10-14 14:42:20.030244] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set
00:28:39.401 [2024-10-14 14:42:20.030483] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor
00:28:39.401 [2024-10-14 14:42:20.030705] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:39.402 [2024-10-14 14:42:20.030714] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:39.402 [2024-10-14 14:42:20.030722] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:39.402 [2024-10-14 14:42:20.034282] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:39.402 [2024-10-14 14:42:20.043474] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:39.402 [2024-10-14 14:42:20.044100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.402 [2024-10-14 14:42:20.044145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420
00:28:39.402 [2024-10-14 14:42:20.044167] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set
00:28:39.402 [2024-10-14 14:42:20.044493] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor
00:28:39.402 [2024-10-14 14:42:20.044813] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:39.402 [2024-10-14 14:42:20.044826] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:39.402 [2024-10-14 14:42:20.044837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:39.402 [2024-10-14 14:42:20.048692] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:39.402 [2024-10-14 14:42:20.057289] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:39.402 [2024-10-14 14:42:20.057841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.402 [2024-10-14 14:42:20.057861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420
00:28:39.402 [2024-10-14 14:42:20.057874] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set
00:28:39.402 [2024-10-14 14:42:20.058102] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor
00:28:39.402 [2024-10-14 14:42:20.058322] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:39.402 [2024-10-14 14:42:20.058331] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:39.402 [2024-10-14 14:42:20.058339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:39.402 [2024-10-14 14:42:20.061886] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:39.402 [2024-10-14 14:42:20.071087] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:39.402 [2024-10-14 14:42:20.071712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.402 [2024-10-14 14:42:20.071749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420
00:28:39.402 [2024-10-14 14:42:20.071760] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set
00:28:39.402 [2024-10-14 14:42:20.071998] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor
00:28:39.402 [2024-10-14 14:42:20.072229] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:39.402 [2024-10-14 14:42:20.072238] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:39.402 [2024-10-14 14:42:20.072247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:39.402 [2024-10-14 14:42:20.075793] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:39.402 [2024-10-14 14:42:20.084994] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:39.402 [2024-10-14 14:42:20.085659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.402 [2024-10-14 14:42:20.085697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420
00:28:39.402 [2024-10-14 14:42:20.085708] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set
00:28:39.402 [2024-10-14 14:42:20.085946] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor
00:28:39.402 [2024-10-14 14:42:20.086175] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:39.402 [2024-10-14 14:42:20.086185] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:39.402 [2024-10-14 14:42:20.086192] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:39.402 [2024-10-14 14:42:20.089737] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:39.402 [2024-10-14 14:42:20.099145] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.402 [2024-10-14 14:42:20.099736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.402 [2024-10-14 14:42:20.099755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:39.402 [2024-10-14 14:42:20.099764] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:39.402 [2024-10-14 14:42:20.099984] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:39.402 [2024-10-14 14:42:20.100210] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.402 [2024-10-14 14:42:20.100224] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.402 [2024-10-14 14:42:20.100232] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.402 [2024-10-14 14:42:20.103772] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.402 [2024-10-14 14:42:20.112963] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.402 [2024-10-14 14:42:20.113627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.402 [2024-10-14 14:42:20.113665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:39.402 [2024-10-14 14:42:20.113676] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:39.402 [2024-10-14 14:42:20.113914] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:39.402 [2024-10-14 14:42:20.114146] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.402 [2024-10-14 14:42:20.114155] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.402 [2024-10-14 14:42:20.114163] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.402 [2024-10-14 14:42:20.117704] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.402 [2024-10-14 14:42:20.126893] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.402 [2024-10-14 14:42:20.127540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.402 [2024-10-14 14:42:20.127578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:39.402 [2024-10-14 14:42:20.127588] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:39.402 [2024-10-14 14:42:20.127826] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:39.402 [2024-10-14 14:42:20.128048] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.402 [2024-10-14 14:42:20.128057] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.402 [2024-10-14 14:42:20.128075] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.664 [2024-10-14 14:42:20.131622] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.664 [2024-10-14 14:42:20.140826] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.664 [2024-10-14 14:42:20.141277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.664 [2024-10-14 14:42:20.141296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:39.664 [2024-10-14 14:42:20.141304] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:39.664 [2024-10-14 14:42:20.141524] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:39.664 [2024-10-14 14:42:20.141742] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.664 [2024-10-14 14:42:20.141750] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.664 [2024-10-14 14:42:20.141758] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.664 [2024-10-14 14:42:20.145309] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.664 [2024-10-14 14:42:20.154716] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.664 [2024-10-14 14:42:20.155350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.664 [2024-10-14 14:42:20.155387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:39.664 [2024-10-14 14:42:20.155398] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:39.664 [2024-10-14 14:42:20.155636] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:39.664 [2024-10-14 14:42:20.155858] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.664 [2024-10-14 14:42:20.155867] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.664 [2024-10-14 14:42:20.155874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.664 [2024-10-14 14:42:20.159433] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.664 [2024-10-14 14:42:20.168624] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.664 [2024-10-14 14:42:20.169320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.664 [2024-10-14 14:42:20.169358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:39.664 [2024-10-14 14:42:20.169368] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:39.664 [2024-10-14 14:42:20.169607] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:39.664 [2024-10-14 14:42:20.169829] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.664 [2024-10-14 14:42:20.169837] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.664 [2024-10-14 14:42:20.169845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.664 [2024-10-14 14:42:20.173399] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.664 [2024-10-14 14:42:20.182603] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.664 [2024-10-14 14:42:20.183276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.664 [2024-10-14 14:42:20.183313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:39.664 [2024-10-14 14:42:20.183324] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:39.664 [2024-10-14 14:42:20.183562] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:39.664 [2024-10-14 14:42:20.183784] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.664 [2024-10-14 14:42:20.183793] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.664 [2024-10-14 14:42:20.183800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.664 [2024-10-14 14:42:20.187355] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.664 [2024-10-14 14:42:20.196541] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.664 [2024-10-14 14:42:20.197237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.664 [2024-10-14 14:42:20.197275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:39.664 [2024-10-14 14:42:20.197285] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:39.665 [2024-10-14 14:42:20.197528] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:39.665 [2024-10-14 14:42:20.197751] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.665 [2024-10-14 14:42:20.197760] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.665 [2024-10-14 14:42:20.197767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.665 [2024-10-14 14:42:20.201339] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.665 [2024-10-14 14:42:20.210536] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.665 [2024-10-14 14:42:20.211187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.665 [2024-10-14 14:42:20.211224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:39.665 [2024-10-14 14:42:20.211235] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:39.665 [2024-10-14 14:42:20.211473] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:39.665 [2024-10-14 14:42:20.211695] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.665 [2024-10-14 14:42:20.211704] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.665 [2024-10-14 14:42:20.211712] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.665 [2024-10-14 14:42:20.215272] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.665 [2024-10-14 14:42:20.224469] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.665 [2024-10-14 14:42:20.225113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.665 [2024-10-14 14:42:20.225150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:39.665 [2024-10-14 14:42:20.225163] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:39.665 [2024-10-14 14:42:20.225402] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:39.665 [2024-10-14 14:42:20.225624] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.665 [2024-10-14 14:42:20.225633] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.665 [2024-10-14 14:42:20.225641] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.665 [2024-10-14 14:42:20.229200] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.665 [2024-10-14 14:42:20.238400] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.665 [2024-10-14 14:42:20.239100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.665 [2024-10-14 14:42:20.239137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:39.665 [2024-10-14 14:42:20.239148] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:39.665 [2024-10-14 14:42:20.239386] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:39.665 [2024-10-14 14:42:20.239608] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.665 [2024-10-14 14:42:20.239617] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.665 [2024-10-14 14:42:20.239629] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.665 [2024-10-14 14:42:20.243186] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.665 [2024-10-14 14:42:20.252399] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.665 [2024-10-14 14:42:20.253077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.665 [2024-10-14 14:42:20.253114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:39.665 [2024-10-14 14:42:20.253127] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:39.665 [2024-10-14 14:42:20.253368] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:39.665 [2024-10-14 14:42:20.253590] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.665 [2024-10-14 14:42:20.253598] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.665 [2024-10-14 14:42:20.253606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.665 [2024-10-14 14:42:20.257159] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.665 [2024-10-14 14:42:20.266355] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.665 [2024-10-14 14:42:20.266997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.665 [2024-10-14 14:42:20.267034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:39.665 [2024-10-14 14:42:20.267046] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:39.665 [2024-10-14 14:42:20.267296] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:39.665 [2024-10-14 14:42:20.267519] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.665 [2024-10-14 14:42:20.267528] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.665 [2024-10-14 14:42:20.267536] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.665 [2024-10-14 14:42:20.271084] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.665 [2024-10-14 14:42:20.280282] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.665 [2024-10-14 14:42:20.280907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.665 [2024-10-14 14:42:20.280944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:39.665 [2024-10-14 14:42:20.280955] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:39.665 [2024-10-14 14:42:20.281203] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:39.665 [2024-10-14 14:42:20.281426] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.665 [2024-10-14 14:42:20.281434] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.665 [2024-10-14 14:42:20.281442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.665 [2024-10-14 14:42:20.284988] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.665 [2024-10-14 14:42:20.294176] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.665 [2024-10-14 14:42:20.294722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.665 [2024-10-14 14:42:20.294745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:39.665 [2024-10-14 14:42:20.294753] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:39.665 [2024-10-14 14:42:20.294973] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:39.665 [2024-10-14 14:42:20.295201] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.665 [2024-10-14 14:42:20.295210] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.665 [2024-10-14 14:42:20.295217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.665 [2024-10-14 14:42:20.298759] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.665 [2024-10-14 14:42:20.308164] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.665 [2024-10-14 14:42:20.308693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.665 [2024-10-14 14:42:20.308710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:39.665 [2024-10-14 14:42:20.308717] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:39.665 [2024-10-14 14:42:20.308936] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:39.665 [2024-10-14 14:42:20.309161] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.665 [2024-10-14 14:42:20.309170] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.665 [2024-10-14 14:42:20.309177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.665 [2024-10-14 14:42:20.312716] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.665 [2024-10-14 14:42:20.322112] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.665 [2024-10-14 14:42:20.322636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.665 [2024-10-14 14:42:20.322651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:39.665 [2024-10-14 14:42:20.322659] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:39.665 [2024-10-14 14:42:20.322877] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:39.665 [2024-10-14 14:42:20.323101] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.665 [2024-10-14 14:42:20.323109] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.665 [2024-10-14 14:42:20.323117] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.665 [2024-10-14 14:42:20.326654] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.665 [2024-10-14 14:42:20.336055] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.665 [2024-10-14 14:42:20.336584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.665 [2024-10-14 14:42:20.336621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:39.665 [2024-10-14 14:42:20.336631] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:39.665 [2024-10-14 14:42:20.336869] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:39.665 [2024-10-14 14:42:20.337108] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.665 [2024-10-14 14:42:20.337119] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.665 [2024-10-14 14:42:20.337126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.666 [2024-10-14 14:42:20.340677] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.666 [2024-10-14 14:42:20.349891] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.666 [2024-10-14 14:42:20.350554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.666 [2024-10-14 14:42:20.350591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:39.666 [2024-10-14 14:42:20.350601] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:39.666 [2024-10-14 14:42:20.350839] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:39.666 [2024-10-14 14:42:20.351061] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.666 [2024-10-14 14:42:20.351083] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.666 [2024-10-14 14:42:20.351092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.666 [2024-10-14 14:42:20.354643] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.666 [2024-10-14 14:42:20.363849] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.666 [2024-10-14 14:42:20.364388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.666 [2024-10-14 14:42:20.364408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:39.666 [2024-10-14 14:42:20.364416] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:39.666 [2024-10-14 14:42:20.364636] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:39.666 [2024-10-14 14:42:20.364854] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.666 [2024-10-14 14:42:20.364864] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.666 [2024-10-14 14:42:20.364871] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.666 [2024-10-14 14:42:20.368421] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.666 [2024-10-14 14:42:20.377820] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.666 [2024-10-14 14:42:20.378366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.666 [2024-10-14 14:42:20.378383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:39.666 [2024-10-14 14:42:20.378391] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:39.666 [2024-10-14 14:42:20.378610] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:39.666 [2024-10-14 14:42:20.378828] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.666 [2024-10-14 14:42:20.378837] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.666 [2024-10-14 14:42:20.378844] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.666 [2024-10-14 14:42:20.382401] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.666 [2024-10-14 14:42:20.391804] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.666 [2024-10-14 14:42:20.392306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.666 [2024-10-14 14:42:20.392323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:39.666 [2024-10-14 14:42:20.392331] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:39.666 [2024-10-14 14:42:20.392549] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:39.666 [2024-10-14 14:42:20.392767] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.666 [2024-10-14 14:42:20.392776] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.666 [2024-10-14 14:42:20.392783] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.928 [2024-10-14 14:42:20.396329] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.928 [2024-10-14 14:42:20.405735] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.928 [2024-10-14 14:42:20.406276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.928 [2024-10-14 14:42:20.406293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:39.928 [2024-10-14 14:42:20.406300] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:39.928 [2024-10-14 14:42:20.406519] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:39.928 [2024-10-14 14:42:20.406737] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.928 [2024-10-14 14:42:20.406745] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.928 [2024-10-14 14:42:20.406752] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.928 [2024-10-14 14:42:20.410310] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.928 [2024-10-14 14:42:20.419716] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.928 [2024-10-14 14:42:20.420285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.928 [2024-10-14 14:42:20.420302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:39.928 [2024-10-14 14:42:20.420310] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:39.928 [2024-10-14 14:42:20.420528] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:39.928 [2024-10-14 14:42:20.420747] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.928 [2024-10-14 14:42:20.420756] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.928 [2024-10-14 14:42:20.420763] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.928 [2024-10-14 14:42:20.424307] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.928 [2024-10-14 14:42:20.433504] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.928 [2024-10-14 14:42:20.434149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.928 [2024-10-14 14:42:20.434187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:39.928 [2024-10-14 14:42:20.434204] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:39.928 [2024-10-14 14:42:20.434445] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:39.928 [2024-10-14 14:42:20.434667] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.928 [2024-10-14 14:42:20.434677] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.928 [2024-10-14 14:42:20.434684] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.928 [2024-10-14 14:42:20.438236] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.928 [2024-10-14 14:42:20.447456] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.928 [2024-10-14 14:42:20.448113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.928 [2024-10-14 14:42:20.448151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:39.928 [2024-10-14 14:42:20.448163] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:39.928 [2024-10-14 14:42:20.448403] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:39.928 [2024-10-14 14:42:20.448636] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.928 [2024-10-14 14:42:20.448646] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.928 [2024-10-14 14:42:20.448654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.928 [2024-10-14 14:42:20.452205] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.928 [2024-10-14 14:42:20.461415] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.928 [2024-10-14 14:42:20.462042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.928 [2024-10-14 14:42:20.462088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:39.928 [2024-10-14 14:42:20.462100] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:39.928 [2024-10-14 14:42:20.462342] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:39.928 [2024-10-14 14:42:20.462564] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.928 [2024-10-14 14:42:20.462573] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.928 [2024-10-14 14:42:20.462580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.929 [2024-10-14 14:42:20.466137] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.929 [2024-10-14 14:42:20.475342] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.929 [2024-10-14 14:42:20.475881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.929 [2024-10-14 14:42:20.475900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:39.929 [2024-10-14 14:42:20.475908] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:39.929 [2024-10-14 14:42:20.476134] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:39.929 [2024-10-14 14:42:20.476354] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.929 [2024-10-14 14:42:20.476371] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.929 [2024-10-14 14:42:20.476378] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.929 [2024-10-14 14:42:20.479937] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.929 [2024-10-14 14:42:20.489147] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.929 [2024-10-14 14:42:20.489802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.929 [2024-10-14 14:42:20.489839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:39.929 [2024-10-14 14:42:20.489849] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:39.929 [2024-10-14 14:42:20.490097] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:39.929 [2024-10-14 14:42:20.490321] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.929 [2024-10-14 14:42:20.490329] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.929 [2024-10-14 14:42:20.490337] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.929 [2024-10-14 14:42:20.493889] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.929 [2024-10-14 14:42:20.503098] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.929 [2024-10-14 14:42:20.503764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.929 [2024-10-14 14:42:20.503802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:39.929 [2024-10-14 14:42:20.503813] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:39.929 [2024-10-14 14:42:20.504051] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:39.929 [2024-10-14 14:42:20.504282] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.929 [2024-10-14 14:42:20.504292] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.929 [2024-10-14 14:42:20.504299] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.929 [2024-10-14 14:42:20.507845] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.929 [2024-10-14 14:42:20.517048] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.929 [2024-10-14 14:42:20.517438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.929 [2024-10-14 14:42:20.517457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:39.929 [2024-10-14 14:42:20.517465] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:39.929 [2024-10-14 14:42:20.517685] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:39.929 [2024-10-14 14:42:20.517903] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.929 [2024-10-14 14:42:20.517920] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.929 [2024-10-14 14:42:20.517927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.929 [2024-10-14 14:42:20.521479] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.929 [2024-10-14 14:42:20.530893] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.929 [2024-10-14 14:42:20.531430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.929 [2024-10-14 14:42:20.531446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:39.929 [2024-10-14 14:42:20.531454] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:39.929 [2024-10-14 14:42:20.531672] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:39.929 [2024-10-14 14:42:20.531890] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.929 [2024-10-14 14:42:20.531900] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.929 [2024-10-14 14:42:20.531907] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.929 [2024-10-14 14:42:20.535459] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.929 [2024-10-14 14:42:20.544885] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.929 [2024-10-14 14:42:20.545453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.929 [2024-10-14 14:42:20.545470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:39.929 [2024-10-14 14:42:20.545477] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:39.929 [2024-10-14 14:42:20.545696] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:39.929 [2024-10-14 14:42:20.545914] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.929 [2024-10-14 14:42:20.545924] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.929 [2024-10-14 14:42:20.545932] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.929 [2024-10-14 14:42:20.549494] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.929 [2024-10-14 14:42:20.558702] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.929 [2024-10-14 14:42:20.559337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.929 [2024-10-14 14:42:20.559375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:39.929 [2024-10-14 14:42:20.559386] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:39.929 [2024-10-14 14:42:20.559623] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:39.929 [2024-10-14 14:42:20.559846] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.929 [2024-10-14 14:42:20.559855] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.929 [2024-10-14 14:42:20.559864] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.929 [2024-10-14 14:42:20.563417] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.929 [2024-10-14 14:42:20.572517] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.929 [2024-10-14 14:42:20.572990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.929 [2024-10-14 14:42:20.573010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:39.929 [2024-10-14 14:42:20.573018] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:39.929 [2024-10-14 14:42:20.573249] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:39.929 [2024-10-14 14:42:20.573468] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.929 [2024-10-14 14:42:20.573476] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.929 [2024-10-14 14:42:20.573483] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.929 [2024-10-14 14:42:20.577026] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.929 [2024-10-14 14:42:20.586443] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.929 [2024-10-14 14:42:20.587018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.929 [2024-10-14 14:42:20.587035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:39.929 [2024-10-14 14:42:20.587043] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:39.929 [2024-10-14 14:42:20.587267] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:39.929 [2024-10-14 14:42:20.587486] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.929 [2024-10-14 14:42:20.587494] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.929 [2024-10-14 14:42:20.587501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.929 [2024-10-14 14:42:20.591044] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.929 [2024-10-14 14:42:20.600246] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.929 [2024-10-14 14:42:20.600774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.929 [2024-10-14 14:42:20.600791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:39.929 [2024-10-14 14:42:20.600798] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:39.929 [2024-10-14 14:42:20.601016] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:39.930 [2024-10-14 14:42:20.601243] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.930 [2024-10-14 14:42:20.601252] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.930 [2024-10-14 14:42:20.601259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.930 [2024-10-14 14:42:20.604806] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.930 [2024-10-14 14:42:20.614216] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.930 [2024-10-14 14:42:20.614742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.930 [2024-10-14 14:42:20.614758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:39.930 [2024-10-14 14:42:20.614765] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:39.930 [2024-10-14 14:42:20.614983] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:39.930 [2024-10-14 14:42:20.615217] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.930 [2024-10-14 14:42:20.615229] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.930 [2024-10-14 14:42:20.615240] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.930 [2024-10-14 14:42:20.618790] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.930 [2024-10-14 14:42:20.628211] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.930 [2024-10-14 14:42:20.628655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.930 [2024-10-14 14:42:20.628670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:39.930 [2024-10-14 14:42:20.628678] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:39.930 [2024-10-14 14:42:20.628896] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:39.930 [2024-10-14 14:42:20.629122] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.930 [2024-10-14 14:42:20.629130] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.930 [2024-10-14 14:42:20.629138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.930 [2024-10-14 14:42:20.632680] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.930 [2024-10-14 14:42:20.642097] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.930 [2024-10-14 14:42:20.642715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.930 [2024-10-14 14:42:20.642753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:39.930 [2024-10-14 14:42:20.642764] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:39.930 [2024-10-14 14:42:20.643003] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:39.930 [2024-10-14 14:42:20.643234] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:39.930 [2024-10-14 14:42:20.643244] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:39.930 [2024-10-14 14:42:20.643251] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:39.930 [2024-10-14 14:42:20.646796] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:39.930 [2024-10-14 14:42:20.655998] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:39.930 [2024-10-14 14:42:20.656645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.930 [2024-10-14 14:42:20.656683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:39.930 [2024-10-14 14:42:20.656695] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:39.930 [2024-10-14 14:42:20.656935] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:40.192 [2024-10-14 14:42:20.657166] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.192 [2024-10-14 14:42:20.657176] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.192 [2024-10-14 14:42:20.657185] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.192 [2024-10-14 14:42:20.660727] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.192 [2024-10-14 14:42:20.669920] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.192 [2024-10-14 14:42:20.670574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.192 [2024-10-14 14:42:20.670612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:40.192 [2024-10-14 14:42:20.670624] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:40.192 [2024-10-14 14:42:20.670863] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:40.192 [2024-10-14 14:42:20.671091] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.192 [2024-10-14 14:42:20.671101] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.192 [2024-10-14 14:42:20.671109] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.192 [2024-10-14 14:42:20.674651] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.192 [2024-10-14 14:42:20.683864] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.192 [2024-10-14 14:42:20.684516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.192 [2024-10-14 14:42:20.684553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:40.192 [2024-10-14 14:42:20.684564] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:40.192 [2024-10-14 14:42:20.684802] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:40.192 [2024-10-14 14:42:20.685024] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.192 [2024-10-14 14:42:20.685033] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.192 [2024-10-14 14:42:20.685041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.192 [2024-10-14 14:42:20.688603] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.192 [2024-10-14 14:42:20.697802] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.192 [2024-10-14 14:42:20.698380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.192 [2024-10-14 14:42:20.698400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:40.192 [2024-10-14 14:42:20.698408] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:40.192 [2024-10-14 14:42:20.698627] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:40.192 [2024-10-14 14:42:20.698845] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.192 [2024-10-14 14:42:20.698854] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.192 [2024-10-14 14:42:20.698861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.192 [2024-10-14 14:42:20.702412] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.192 [2024-10-14 14:42:20.711656] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.192 [2024-10-14 14:42:20.712146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.192 [2024-10-14 14:42:20.712163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:40.192 [2024-10-14 14:42:20.712171] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:40.192 [2024-10-14 14:42:20.712395] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:40.192 [2024-10-14 14:42:20.712613] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.192 [2024-10-14 14:42:20.712621] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.192 [2024-10-14 14:42:20.712629] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.192 [2024-10-14 14:42:20.716181] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.192 [2024-10-14 14:42:20.725591] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.192 [2024-10-14 14:42:20.726137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.192 [2024-10-14 14:42:20.726176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:40.192 [2024-10-14 14:42:20.726188] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:40.192 [2024-10-14 14:42:20.726429] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:40.192 [2024-10-14 14:42:20.726651] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.192 [2024-10-14 14:42:20.726659] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.192 [2024-10-14 14:42:20.726667] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.192 [2024-10-14 14:42:20.730221] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.192 [2024-10-14 14:42:20.739421] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.192 [2024-10-14 14:42:20.740097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.192 [2024-10-14 14:42:20.740134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:40.192 [2024-10-14 14:42:20.740147] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:40.192 [2024-10-14 14:42:20.740388] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:40.192 [2024-10-14 14:42:20.740610] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.192 [2024-10-14 14:42:20.740619] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.192 [2024-10-14 14:42:20.740627] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.192 [2024-10-14 14:42:20.744183] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.192 [2024-10-14 14:42:20.753386] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.192 [2024-10-14 14:42:20.753954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.192 [2024-10-14 14:42:20.753973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:40.192 [2024-10-14 14:42:20.753981] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:40.192 [2024-10-14 14:42:20.754207] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:40.192 [2024-10-14 14:42:20.754426] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.192 [2024-10-14 14:42:20.754435] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.192 [2024-10-14 14:42:20.754447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.192 [2024-10-14 14:42:20.757989] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.192 [2024-10-14 14:42:20.767178] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.192 [2024-10-14 14:42:20.767794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.192 [2024-10-14 14:42:20.767831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:40.192 [2024-10-14 14:42:20.767842] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:40.192 [2024-10-14 14:42:20.768091] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:40.192 [2024-10-14 14:42:20.768314] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.192 [2024-10-14 14:42:20.768323] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.192 [2024-10-14 14:42:20.768331] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.192 [2024-10-14 14:42:20.771877] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.193 [2024-10-14 14:42:20.781078] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.193 [2024-10-14 14:42:20.781659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.193 [2024-10-14 14:42:20.781678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:40.193 [2024-10-14 14:42:20.781686] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:40.193 [2024-10-14 14:42:20.781905] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:40.193 [2024-10-14 14:42:20.782131] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.193 [2024-10-14 14:42:20.782141] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.193 [2024-10-14 14:42:20.782148] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.193 [2024-10-14 14:42:20.785688] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.193 [2024-10-14 14:42:20.794873] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.193 [2024-10-14 14:42:20.795550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.193 [2024-10-14 14:42:20.795587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:40.193 [2024-10-14 14:42:20.795598] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:40.193 [2024-10-14 14:42:20.795836] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:40.193 [2024-10-14 14:42:20.796058] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.193 [2024-10-14 14:42:20.796078] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.193 [2024-10-14 14:42:20.796086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.193 [2024-10-14 14:42:20.799628] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.193 [2024-10-14 14:42:20.808821] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.193 [2024-10-14 14:42:20.809398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.193 [2024-10-14 14:42:20.809440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:40.193 [2024-10-14 14:42:20.809452] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:40.193 [2024-10-14 14:42:20.809691] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:40.193 [2024-10-14 14:42:20.809913] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.193 [2024-10-14 14:42:20.809921] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.193 [2024-10-14 14:42:20.809930] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.193 [2024-10-14 14:42:20.813480] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.193 [2024-10-14 14:42:20.822675] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.193 [2024-10-14 14:42:20.823304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.193 [2024-10-14 14:42:20.823342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:40.193 [2024-10-14 14:42:20.823353] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:40.193 [2024-10-14 14:42:20.823591] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:40.193 [2024-10-14 14:42:20.823820] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.193 [2024-10-14 14:42:20.823829] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.193 [2024-10-14 14:42:20.823837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.193 7665.00 IOPS, 29.94 MiB/s [2024-10-14T12:42:20.920Z] [2024-10-14 14:42:20.829043] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.193 [2024-10-14 14:42:20.836575] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.193 [2024-10-14 14:42:20.837207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.193 [2024-10-14 14:42:20.837245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:40.193 [2024-10-14 14:42:20.837257] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:40.193 [2024-10-14 14:42:20.837497] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:40.193 [2024-10-14 14:42:20.837719] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.193 [2024-10-14 14:42:20.837728] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.193 [2024-10-14 14:42:20.837736] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.193 [2024-10-14 14:42:20.841285] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.193 [2024-10-14 14:42:20.850483] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.193 [2024-10-14 14:42:20.851046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.193 [2024-10-14 14:42:20.851070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:40.193 [2024-10-14 14:42:20.851079] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:40.193 [2024-10-14 14:42:20.851298] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:40.193 [2024-10-14 14:42:20.851522] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.193 [2024-10-14 14:42:20.851530] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.193 [2024-10-14 14:42:20.851538] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.193 [2024-10-14 14:42:20.855081] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.193 [2024-10-14 14:42:20.864268] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.193 [2024-10-14 14:42:20.864924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.193 [2024-10-14 14:42:20.864962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:40.193 [2024-10-14 14:42:20.864974] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:40.193 [2024-10-14 14:42:20.865223] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:40.193 [2024-10-14 14:42:20.865446] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.193 [2024-10-14 14:42:20.865455] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.193 [2024-10-14 14:42:20.865463] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.193 [2024-10-14 14:42:20.869004] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.193 [2024-10-14 14:42:20.878206] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.193 [2024-10-14 14:42:20.878815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.193 [2024-10-14 14:42:20.878852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:40.193 [2024-10-14 14:42:20.878863] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:40.193 [2024-10-14 14:42:20.879109] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:40.193 [2024-10-14 14:42:20.879332] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.193 [2024-10-14 14:42:20.879342] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.193 [2024-10-14 14:42:20.879350] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.193 [2024-10-14 14:42:20.882895] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.193 [2024-10-14 14:42:20.892087] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.193 [2024-10-14 14:42:20.892698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.193 [2024-10-14 14:42:20.892736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:40.193 [2024-10-14 14:42:20.892747] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:40.193 [2024-10-14 14:42:20.892985] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:40.193 [2024-10-14 14:42:20.893215] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.193 [2024-10-14 14:42:20.893225] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.193 [2024-10-14 14:42:20.893233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.193 [2024-10-14 14:42:20.896779] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.193 [2024-10-14 14:42:20.905978] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.193 [2024-10-14 14:42:20.906603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.193 [2024-10-14 14:42:20.906640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:40.193 [2024-10-14 14:42:20.906651] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:40.193 [2024-10-14 14:42:20.906889] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:40.193 [2024-10-14 14:42:20.907120] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.193 [2024-10-14 14:42:20.907129] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.193 [2024-10-14 14:42:20.907137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.193 [2024-10-14 14:42:20.910681] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.193 [2024-10-14 14:42:20.919873] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.193 [2024-10-14 14:42:20.920540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.193 [2024-10-14 14:42:20.920578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:40.193 [2024-10-14 14:42:20.920589] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:40.456 [2024-10-14 14:42:20.920827] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:40.456 [2024-10-14 14:42:20.921052] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.456 [2024-10-14 14:42:20.921061] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.456 [2024-10-14 14:42:20.921078] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.456 [2024-10-14 14:42:20.924622] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.456 [2024-10-14 14:42:20.933808] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.456 [2024-10-14 14:42:20.934502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.456 [2024-10-14 14:42:20.934539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:40.456 [2024-10-14 14:42:20.934551] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:40.456 [2024-10-14 14:42:20.934790] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:40.456 [2024-10-14 14:42:20.935013] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.456 [2024-10-14 14:42:20.935022] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.456 [2024-10-14 14:42:20.935030] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.456 [2024-10-14 14:42:20.938584] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.456 [2024-10-14 14:42:20.947776] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.456 [2024-10-14 14:42:20.948339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.456 [2024-10-14 14:42:20.948358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:40.456 [2024-10-14 14:42:20.948370] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:40.456 [2024-10-14 14:42:20.948591] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:40.456 [2024-10-14 14:42:20.948809] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.456 [2024-10-14 14:42:20.948816] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.456 [2024-10-14 14:42:20.948824] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.456 [2024-10-14 14:42:20.952381] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.456 [2024-10-14 14:42:20.961570] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.456 [2024-10-14 14:42:20.962312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.456 [2024-10-14 14:42:20.962349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:40.456 [2024-10-14 14:42:20.962360] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:40.456 [2024-10-14 14:42:20.962598] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:40.456 [2024-10-14 14:42:20.962821] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.456 [2024-10-14 14:42:20.962829] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.456 [2024-10-14 14:42:20.962837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.456 [2024-10-14 14:42:20.966389] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.456 [2024-10-14 14:42:20.975368] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.456 [2024-10-14 14:42:20.976071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.456 [2024-10-14 14:42:20.976109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:40.456 [2024-10-14 14:42:20.976120] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:40.456 [2024-10-14 14:42:20.976358] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:40.456 [2024-10-14 14:42:20.976581] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.456 [2024-10-14 14:42:20.976589] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.456 [2024-10-14 14:42:20.976597] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.456 [2024-10-14 14:42:20.980157] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.456 [2024-10-14 14:42:20.989350] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.456 [2024-10-14 14:42:20.990056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.456 [2024-10-14 14:42:20.990101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:40.456 [2024-10-14 14:42:20.990114] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:40.456 [2024-10-14 14:42:20.990355] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:40.456 [2024-10-14 14:42:20.990577] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.456 [2024-10-14 14:42:20.990590] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.456 [2024-10-14 14:42:20.990598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.456 [2024-10-14 14:42:20.994149] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.456 [2024-10-14 14:42:21.003338] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.456 [2024-10-14 14:42:21.003974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.456 [2024-10-14 14:42:21.004011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:40.456 [2024-10-14 14:42:21.004023] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:40.456 [2024-10-14 14:42:21.004273] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:40.456 [2024-10-14 14:42:21.004496] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.456 [2024-10-14 14:42:21.004505] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.456 [2024-10-14 14:42:21.004513] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.456 [2024-10-14 14:42:21.008057] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.456 [2024-10-14 14:42:21.017247] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.456 [2024-10-14 14:42:21.017694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.456 [2024-10-14 14:42:21.017712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:40.456 [2024-10-14 14:42:21.017720] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:40.457 [2024-10-14 14:42:21.017939] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:40.457 [2024-10-14 14:42:21.018163] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.457 [2024-10-14 14:42:21.018173] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.457 [2024-10-14 14:42:21.018180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.457 [2024-10-14 14:42:21.021717] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.457 [2024-10-14 14:42:21.031126] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.457 [2024-10-14 14:42:21.031769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.457 [2024-10-14 14:42:21.031807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:40.457 [2024-10-14 14:42:21.031819] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:40.457 [2024-10-14 14:42:21.032061] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:40.457 [2024-10-14 14:42:21.032292] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.457 [2024-10-14 14:42:21.032301] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.457 [2024-10-14 14:42:21.032309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.457 [2024-10-14 14:42:21.035868] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.457 [2024-10-14 14:42:21.045073] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.457 [2024-10-14 14:42:21.045649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.457 [2024-10-14 14:42:21.045686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:40.457 [2024-10-14 14:42:21.045698] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:40.457 [2024-10-14 14:42:21.045937] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:40.457 [2024-10-14 14:42:21.046167] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.457 [2024-10-14 14:42:21.046177] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.457 [2024-10-14 14:42:21.046185] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.457 [2024-10-14 14:42:21.049730] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.457 [2024-10-14 14:42:21.058930] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.457 [2024-10-14 14:42:21.059501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.457 [2024-10-14 14:42:21.059520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:40.457 [2024-10-14 14:42:21.059528] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:40.457 [2024-10-14 14:42:21.059748] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:40.457 [2024-10-14 14:42:21.059967] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.457 [2024-10-14 14:42:21.059975] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.457 [2024-10-14 14:42:21.059983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.457 [2024-10-14 14:42:21.063528] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.457 [2024-10-14 14:42:21.072916] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.457 [2024-10-14 14:42:21.073473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.457 [2024-10-14 14:42:21.073510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:40.457 [2024-10-14 14:42:21.073522] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:40.457 [2024-10-14 14:42:21.073762] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:40.457 [2024-10-14 14:42:21.073984] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.457 [2024-10-14 14:42:21.073993] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.457 [2024-10-14 14:42:21.074001] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.457 [2024-10-14 14:42:21.077555] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.457 [2024-10-14 14:42:21.086767] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.457 [2024-10-14 14:42:21.087227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.457 [2024-10-14 14:42:21.087266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:40.457 [2024-10-14 14:42:21.087278] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:40.457 [2024-10-14 14:42:21.087523] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:40.457 [2024-10-14 14:42:21.087745] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.457 [2024-10-14 14:42:21.087754] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.457 [2024-10-14 14:42:21.087762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.457 [2024-10-14 14:42:21.091319] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.457 [2024-10-14 14:42:21.100721] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.457 [2024-10-14 14:42:21.101359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.457 [2024-10-14 14:42:21.101397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:40.457 [2024-10-14 14:42:21.101408] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:40.457 [2024-10-14 14:42:21.101646] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:40.457 [2024-10-14 14:42:21.101869] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.457 [2024-10-14 14:42:21.101878] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.457 [2024-10-14 14:42:21.101885] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.457 [2024-10-14 14:42:21.105440] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.457 [2024-10-14 14:42:21.114634] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.457 [2024-10-14 14:42:21.115187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.457 [2024-10-14 14:42:21.115225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:40.457 [2024-10-14 14:42:21.115237] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:40.457 [2024-10-14 14:42:21.115477] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:40.457 [2024-10-14 14:42:21.115699] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.457 [2024-10-14 14:42:21.115709] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.457 [2024-10-14 14:42:21.115716] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.457 [2024-10-14 14:42:21.119270] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.457 [2024-10-14 14:42:21.128452] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.457 [2024-10-14 14:42:21.129083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.457 [2024-10-14 14:42:21.129120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:40.457 [2024-10-14 14:42:21.129132] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:40.457 [2024-10-14 14:42:21.129371] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:40.457 [2024-10-14 14:42:21.129593] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.457 [2024-10-14 14:42:21.129602] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.457 [2024-10-14 14:42:21.129614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.457 [2024-10-14 14:42:21.133165] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.457 [2024-10-14 14:42:21.142365] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.457 [2024-10-14 14:42:21.143037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.457 [2024-10-14 14:42:21.143082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:40.457 [2024-10-14 14:42:21.143093] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:40.457 [2024-10-14 14:42:21.143331] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:40.457 [2024-10-14 14:42:21.143553] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.457 [2024-10-14 14:42:21.143561] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.457 [2024-10-14 14:42:21.143569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.457 [2024-10-14 14:42:21.147119] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.457 [2024-10-14 14:42:21.156321] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.457 [2024-10-14 14:42:21.156903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.457 [2024-10-14 14:42:21.156921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:40.457 [2024-10-14 14:42:21.156929] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:40.457 [2024-10-14 14:42:21.157155] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:40.457 [2024-10-14 14:42:21.157375] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.458 [2024-10-14 14:42:21.157384] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.458 [2024-10-14 14:42:21.157391] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.458 [2024-10-14 14:42:21.160931] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.458 [2024-10-14 14:42:21.170132] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.458 [2024-10-14 14:42:21.170866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.458 [2024-10-14 14:42:21.170904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:40.458 [2024-10-14 14:42:21.170914] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:40.458 [2024-10-14 14:42:21.171161] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:40.458 [2024-10-14 14:42:21.171384] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.458 [2024-10-14 14:42:21.171392] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.458 [2024-10-14 14:42:21.171400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.458 [2024-10-14 14:42:21.174945] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.458 [2024-10-14 14:42:21.183934] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.458 [2024-10-14 14:42:21.184578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.458 [2024-10-14 14:42:21.184616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:40.458 [2024-10-14 14:42:21.184627] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:40.458 [2024-10-14 14:42:21.184865] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:40.720 [2024-10-14 14:42:21.185097] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.720 [2024-10-14 14:42:21.185107] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.720 [2024-10-14 14:42:21.185115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.720 [2024-10-14 14:42:21.188661] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.720 [2024-10-14 14:42:21.197845] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.720 [2024-10-14 14:42:21.198547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.720 [2024-10-14 14:42:21.198584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:40.720 [2024-10-14 14:42:21.198595] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:40.720 [2024-10-14 14:42:21.198833] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:40.720 [2024-10-14 14:42:21.199054] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.720 [2024-10-14 14:42:21.199073] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.720 [2024-10-14 14:42:21.199081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.720 [2024-10-14 14:42:21.202628] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.720 [2024-10-14 14:42:21.211819] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.720 [2024-10-14 14:42:21.212450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.720 [2024-10-14 14:42:21.212487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:40.720 [2024-10-14 14:42:21.212498] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:40.720 [2024-10-14 14:42:21.212735] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:40.720 [2024-10-14 14:42:21.212957] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.720 [2024-10-14 14:42:21.212966] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.720 [2024-10-14 14:42:21.212974] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.720 [2024-10-14 14:42:21.216527] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.720 [2024-10-14 14:42:21.225711] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.720 [2024-10-14 14:42:21.226360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.720 [2024-10-14 14:42:21.226399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:40.720 [2024-10-14 14:42:21.226409] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:40.720 [2024-10-14 14:42:21.226652] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:40.720 [2024-10-14 14:42:21.226875] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.720 [2024-10-14 14:42:21.226883] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.720 [2024-10-14 14:42:21.226891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.720 [2024-10-14 14:42:21.230447] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.720 [2024-10-14 14:42:21.239650] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.720 [2024-10-14 14:42:21.240341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.720 [2024-10-14 14:42:21.240378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:40.720 [2024-10-14 14:42:21.240389] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:40.720 [2024-10-14 14:42:21.240627] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:40.720 [2024-10-14 14:42:21.240849] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.720 [2024-10-14 14:42:21.240858] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.720 [2024-10-14 14:42:21.240866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.720 [2024-10-14 14:42:21.244420] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.720 [2024-10-14 14:42:21.253609] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.720 [2024-10-14 14:42:21.254283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.720 [2024-10-14 14:42:21.254321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:40.720 [2024-10-14 14:42:21.254332] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:40.720 [2024-10-14 14:42:21.254570] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:40.720 [2024-10-14 14:42:21.254792] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.720 [2024-10-14 14:42:21.254801] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.720 [2024-10-14 14:42:21.254808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.720 [2024-10-14 14:42:21.258362] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.720 [2024-10-14 14:42:21.267548] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.720 [2024-10-14 14:42:21.268162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.721 [2024-10-14 14:42:21.268199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:40.721 [2024-10-14 14:42:21.268212] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:40.721 [2024-10-14 14:42:21.268453] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:40.721 [2024-10-14 14:42:21.268675] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.721 [2024-10-14 14:42:21.268691] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.721 [2024-10-14 14:42:21.268705] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.721 [2024-10-14 14:42:21.272259] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.721 [2024-10-14 14:42:21.281456] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.721 [2024-10-14 14:42:21.282003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.721 [2024-10-14 14:42:21.282039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:40.721 [2024-10-14 14:42:21.282051] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:40.721 [2024-10-14 14:42:21.282304] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:40.721 [2024-10-14 14:42:21.282528] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.721 [2024-10-14 14:42:21.282537] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.721 [2024-10-14 14:42:21.282544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.721 [2024-10-14 14:42:21.286094] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.721 [2024-10-14 14:42:21.295357] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.721 [2024-10-14 14:42:21.295804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.721 [2024-10-14 14:42:21.295824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:40.721 [2024-10-14 14:42:21.295832] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:40.721 [2024-10-14 14:42:21.296051] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:40.721 [2024-10-14 14:42:21.296277] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.721 [2024-10-14 14:42:21.296286] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.721 [2024-10-14 14:42:21.296293] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.721 [2024-10-14 14:42:21.299832] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.721 [2024-10-14 14:42:21.309223] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.721 [2024-10-14 14:42:21.309845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.721 [2024-10-14 14:42:21.309883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:40.721 [2024-10-14 14:42:21.309894] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:40.721 [2024-10-14 14:42:21.310140] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:40.721 [2024-10-14 14:42:21.310363] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.721 [2024-10-14 14:42:21.310372] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.721 [2024-10-14 14:42:21.310380] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.721 [2024-10-14 14:42:21.313925] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.721 [2024-10-14 14:42:21.323119] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.721 [2024-10-14 14:42:21.323749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.721 [2024-10-14 14:42:21.323791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:40.721 [2024-10-14 14:42:21.323802] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:40.721 [2024-10-14 14:42:21.324040] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:40.721 [2024-10-14 14:42:21.324273] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.721 [2024-10-14 14:42:21.324283] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.721 [2024-10-14 14:42:21.324291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.721 [2024-10-14 14:42:21.327836] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.721 [2024-10-14 14:42:21.337026] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.721 [2024-10-14 14:42:21.337664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.721 [2024-10-14 14:42:21.337701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:40.721 [2024-10-14 14:42:21.337712] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:40.721 [2024-10-14 14:42:21.337950] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:40.721 [2024-10-14 14:42:21.338182] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.721 [2024-10-14 14:42:21.338191] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.721 [2024-10-14 14:42:21.338199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.721 [2024-10-14 14:42:21.341745] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.721 [2024-10-14 14:42:21.350934] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.721 [2024-10-14 14:42:21.351523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.721 [2024-10-14 14:42:21.351542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:40.721 [2024-10-14 14:42:21.351550] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:40.721 [2024-10-14 14:42:21.351769] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:40.721 [2024-10-14 14:42:21.351996] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.721 [2024-10-14 14:42:21.352005] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.721 [2024-10-14 14:42:21.352012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.721 [2024-10-14 14:42:21.355598] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.721 [2024-10-14 14:42:21.364784] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.721 [2024-10-14 14:42:21.365308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.721 [2024-10-14 14:42:21.365346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:40.721 [2024-10-14 14:42:21.365358] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:40.721 [2024-10-14 14:42:21.365600] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:40.721 [2024-10-14 14:42:21.365825] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.721 [2024-10-14 14:42:21.365835] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.721 [2024-10-14 14:42:21.365843] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.721 [2024-10-14 14:42:21.369398] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.721 [2024-10-14 14:42:21.378608] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.721 [2024-10-14 14:42:21.379172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.721 [2024-10-14 14:42:21.379210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:40.721 [2024-10-14 14:42:21.379222] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:40.721 [2024-10-14 14:42:21.379463] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:40.721 [2024-10-14 14:42:21.379685] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.721 [2024-10-14 14:42:21.379694] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.721 [2024-10-14 14:42:21.379702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.721 [2024-10-14 14:42:21.383259] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.721 [2024-10-14 14:42:21.392447] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.721 [2024-10-14 14:42:21.393094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.721 [2024-10-14 14:42:21.393132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:40.721 [2024-10-14 14:42:21.393143] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:40.721 [2024-10-14 14:42:21.393381] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:40.721 [2024-10-14 14:42:21.393603] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.721 [2024-10-14 14:42:21.393611] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.721 [2024-10-14 14:42:21.393619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.721 [2024-10-14 14:42:21.397173] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.721 [2024-10-14 14:42:21.406362] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.721 [2024-10-14 14:42:21.407030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.721 [2024-10-14 14:42:21.407074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:40.721 [2024-10-14 14:42:21.407085] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:40.721 [2024-10-14 14:42:21.407323] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:40.721 [2024-10-14 14:42:21.407545] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.721 [2024-10-14 14:42:21.407554] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.721 [2024-10-14 14:42:21.407562] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.721 [2024-10-14 14:42:21.411115] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.722 [2024-10-14 14:42:21.420305] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.722 [2024-10-14 14:42:21.420844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.722 [2024-10-14 14:42:21.420863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:40.722 [2024-10-14 14:42:21.420871] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:40.722 [2024-10-14 14:42:21.421097] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:40.722 [2024-10-14 14:42:21.421316] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.722 [2024-10-14 14:42:21.421324] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.722 [2024-10-14 14:42:21.421332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.722 [2024-10-14 14:42:21.424870] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.722 [2024-10-14 14:42:21.434261] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.722 [2024-10-14 14:42:21.434833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.722 [2024-10-14 14:42:21.434850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:40.722 [2024-10-14 14:42:21.434858] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:40.722 [2024-10-14 14:42:21.435083] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:40.722 [2024-10-14 14:42:21.435302] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.722 [2024-10-14 14:42:21.435310] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.722 [2024-10-14 14:42:21.435318] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.722 [2024-10-14 14:42:21.438855] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.722 [2024-10-14 14:42:21.448052] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.722 [2024-10-14 14:42:21.448670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.722 [2024-10-14 14:42:21.448708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:40.722 [2024-10-14 14:42:21.448719] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:40.984 [2024-10-14 14:42:21.448957] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:40.984 [2024-10-14 14:42:21.449190] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.984 [2024-10-14 14:42:21.449200] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.984 [2024-10-14 14:42:21.449207] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.984 [2024-10-14 14:42:21.452763] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.984 [2024-10-14 14:42:21.461952] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.984 [2024-10-14 14:42:21.462587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.984 [2024-10-14 14:42:21.462624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:40.984 [2024-10-14 14:42:21.462640] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:40.984 [2024-10-14 14:42:21.462878] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:40.984 [2024-10-14 14:42:21.463109] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.984 [2024-10-14 14:42:21.463119] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.984 [2024-10-14 14:42:21.463127] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.984 [2024-10-14 14:42:21.466669] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.984 [2024-10-14 14:42:21.475852] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.984 [2024-10-14 14:42:21.476411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.984 [2024-10-14 14:42:21.476430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:40.984 [2024-10-14 14:42:21.476438] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:40.984 [2024-10-14 14:42:21.476657] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:40.984 [2024-10-14 14:42:21.476876] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.984 [2024-10-14 14:42:21.476884] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.984 [2024-10-14 14:42:21.476892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.984 [2024-10-14 14:42:21.480445] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.984 [2024-10-14 14:42:21.489834] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.984 [2024-10-14 14:42:21.490364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.984 [2024-10-14 14:42:21.490381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:40.984 [2024-10-14 14:42:21.490388] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:40.984 [2024-10-14 14:42:21.490607] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:40.984 [2024-10-14 14:42:21.490825] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.984 [2024-10-14 14:42:21.490833] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.984 [2024-10-14 14:42:21.490840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.984 [2024-10-14 14:42:21.494383] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.984 [2024-10-14 14:42:21.503769] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.984 [2024-10-14 14:42:21.504344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.984 [2024-10-14 14:42:21.504382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:40.984 [2024-10-14 14:42:21.504393] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:40.984 [2024-10-14 14:42:21.504631] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:40.984 [2024-10-14 14:42:21.504853] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.984 [2024-10-14 14:42:21.504866] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.984 [2024-10-14 14:42:21.504874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.984 [2024-10-14 14:42:21.508428] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.984 [2024-10-14 14:42:21.517608] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.984 [2024-10-14 14:42:21.518332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.984 [2024-10-14 14:42:21.518370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:40.984 [2024-10-14 14:42:21.518381] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:40.984 [2024-10-14 14:42:21.518619] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:40.984 [2024-10-14 14:42:21.518842] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.984 [2024-10-14 14:42:21.518850] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.984 [2024-10-14 14:42:21.518858] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.984 [2024-10-14 14:42:21.522412] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.984 [2024-10-14 14:42:21.531399] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.984 [2024-10-14 14:42:21.532083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.984 [2024-10-14 14:42:21.532121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:40.984 [2024-10-14 14:42:21.532132] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:40.984 [2024-10-14 14:42:21.532369] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:40.985 [2024-10-14 14:42:21.532592] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.985 [2024-10-14 14:42:21.532601] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.985 [2024-10-14 14:42:21.532609] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.985 [2024-10-14 14:42:21.536165] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.985 [2024-10-14 14:42:21.545357] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.985 [2024-10-14 14:42:21.545994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.985 [2024-10-14 14:42:21.546032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:40.985 [2024-10-14 14:42:21.546043] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:40.985 [2024-10-14 14:42:21.546289] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:40.985 [2024-10-14 14:42:21.546513] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.985 [2024-10-14 14:42:21.546522] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.985 [2024-10-14 14:42:21.546529] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.985 [2024-10-14 14:42:21.550079] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.985 [2024-10-14 14:42:21.559287] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.985 [2024-10-14 14:42:21.559955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.985 [2024-10-14 14:42:21.559993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:40.985 [2024-10-14 14:42:21.560004] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:40.985 [2024-10-14 14:42:21.560252] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:40.985 [2024-10-14 14:42:21.560475] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.985 [2024-10-14 14:42:21.560484] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.985 [2024-10-14 14:42:21.560492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.985 [2024-10-14 14:42:21.564036] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.985 [2024-10-14 14:42:21.573226] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.985 [2024-10-14 14:42:21.573897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.985 [2024-10-14 14:42:21.573935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:40.985 [2024-10-14 14:42:21.573945] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:40.985 [2024-10-14 14:42:21.574190] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:40.985 [2024-10-14 14:42:21.574413] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.985 [2024-10-14 14:42:21.574422] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.985 [2024-10-14 14:42:21.574430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.985 [2024-10-14 14:42:21.577974] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.985 [2024-10-14 14:42:21.587183] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.985 [2024-10-14 14:42:21.587846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.985 [2024-10-14 14:42:21.587884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:40.985 [2024-10-14 14:42:21.587895] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:40.985 [2024-10-14 14:42:21.588141] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:40.985 [2024-10-14 14:42:21.588364] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.985 [2024-10-14 14:42:21.588373] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.985 [2024-10-14 14:42:21.588380] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.985 [2024-10-14 14:42:21.591927] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.985 [2024-10-14 14:42:21.601230] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.985 [2024-10-14 14:42:21.601887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.985 [2024-10-14 14:42:21.601924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:40.985 [2024-10-14 14:42:21.601935] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:40.985 [2024-10-14 14:42:21.602186] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:40.985 [2024-10-14 14:42:21.602409] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.985 [2024-10-14 14:42:21.602418] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.985 [2024-10-14 14:42:21.602426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.985 [2024-10-14 14:42:21.605972] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.985 [2024-10-14 14:42:21.615158] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.985 [2024-10-14 14:42:21.615810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.985 [2024-10-14 14:42:21.615848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:40.985 [2024-10-14 14:42:21.615859] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:40.985 [2024-10-14 14:42:21.616106] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:40.985 [2024-10-14 14:42:21.616329] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.985 [2024-10-14 14:42:21.616337] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.985 [2024-10-14 14:42:21.616345] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.985 [2024-10-14 14:42:21.619890] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.985 [2024-10-14 14:42:21.629082] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.985 [2024-10-14 14:42:21.629714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.985 [2024-10-14 14:42:21.629751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:40.985 [2024-10-14 14:42:21.629762] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:40.985 [2024-10-14 14:42:21.630000] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:40.985 [2024-10-14 14:42:21.630234] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.985 [2024-10-14 14:42:21.630244] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.985 [2024-10-14 14:42:21.630252] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.985 [2024-10-14 14:42:21.633793] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.985 [2024-10-14 14:42:21.642990] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.985 [2024-10-14 14:42:21.643468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.985 [2024-10-14 14:42:21.643487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:40.985 [2024-10-14 14:42:21.643495] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:40.985 [2024-10-14 14:42:21.643714] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:40.985 [2024-10-14 14:42:21.643933] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.985 [2024-10-14 14:42:21.643941] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.985 [2024-10-14 14:42:21.643952] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.985 [2024-10-14 14:42:21.647497] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.985 [2024-10-14 14:42:21.656911] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.985 [2024-10-14 14:42:21.657476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.985 [2024-10-14 14:42:21.657493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:40.985 [2024-10-14 14:42:21.657501] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:40.985 [2024-10-14 14:42:21.657721] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:40.985 [2024-10-14 14:42:21.657938] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.985 [2024-10-14 14:42:21.657947] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.985 [2024-10-14 14:42:21.657954] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.985 [2024-10-14 14:42:21.661502] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.985 [2024-10-14 14:42:21.670886] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.985 [2024-10-14 14:42:21.671410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.985 [2024-10-14 14:42:21.671426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:40.985 [2024-10-14 14:42:21.671434] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:40.985 [2024-10-14 14:42:21.671653] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:40.985 [2024-10-14 14:42:21.671871] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.985 [2024-10-14 14:42:21.671879] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.985 [2024-10-14 14:42:21.671886] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.985 [2024-10-14 14:42:21.675430] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.985 [2024-10-14 14:42:21.684829] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.985 [2024-10-14 14:42:21.685349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.985 [2024-10-14 14:42:21.685365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:40.986 [2024-10-14 14:42:21.685373] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:40.986 [2024-10-14 14:42:21.685591] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:40.986 [2024-10-14 14:42:21.685809] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.986 [2024-10-14 14:42:21.685818] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.986 [2024-10-14 14:42:21.685825] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.986 [2024-10-14 14:42:21.689368] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.986 [2024-10-14 14:42:21.698757] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:40.986 [2024-10-14 14:42:21.699326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.986 [2024-10-14 14:42:21.699343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:40.986 [2024-10-14 14:42:21.699350] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:40.986 [2024-10-14 14:42:21.699569] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:40.986 [2024-10-14 14:42:21.699787] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:40.986 [2024-10-14 14:42:21.699796] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:40.986 [2024-10-14 14:42:21.699803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:40.986 [2024-10-14 14:42:21.703347] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.986 [2024-10-14 14:42:21.712740] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.248 [2024-10-14 14:42:21.713377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.248 [2024-10-14 14:42:21.713415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:41.248 [2024-10-14 14:42:21.713427] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:41.248 [2024-10-14 14:42:21.713667] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:41.248 [2024-10-14 14:42:21.713890] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.248 [2024-10-14 14:42:21.713898] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.248 [2024-10-14 14:42:21.713906] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.248 [2024-10-14 14:42:21.717458] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.248 [2024-10-14 14:42:21.726647] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.248 [2024-10-14 14:42:21.727333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.248 [2024-10-14 14:42:21.727371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:41.248 [2024-10-14 14:42:21.727383] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:41.248 [2024-10-14 14:42:21.727622] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:41.248 [2024-10-14 14:42:21.727845] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.248 [2024-10-14 14:42:21.727854] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.248 [2024-10-14 14:42:21.727862] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.248 [2024-10-14 14:42:21.731416] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.248 [2024-10-14 14:42:21.740543] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.248 [2024-10-14 14:42:21.741270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.248 [2024-10-14 14:42:21.741307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:41.248 [2024-10-14 14:42:21.741318] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:41.248 [2024-10-14 14:42:21.741561] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:41.248 [2024-10-14 14:42:21.741784] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.248 [2024-10-14 14:42:21.741793] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.248 [2024-10-14 14:42:21.741801] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.248 [2024-10-14 14:42:21.745356] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.248 [2024-10-14 14:42:21.754345] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.248 [2024-10-14 14:42:21.754995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.248 [2024-10-14 14:42:21.755033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:41.248 [2024-10-14 14:42:21.755043] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:41.248 [2024-10-14 14:42:21.755288] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:41.248 [2024-10-14 14:42:21.755511] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.248 [2024-10-14 14:42:21.755520] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.248 [2024-10-14 14:42:21.755528] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.248 [2024-10-14 14:42:21.759077] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.248 [2024-10-14 14:42:21.768271] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.248 [2024-10-14 14:42:21.768892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.248 [2024-10-14 14:42:21.768929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420
00:28:41.248 [2024-10-14 14:42:21.768939] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set
00:28:41.248 [2024-10-14 14:42:21.769187] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor
00:28:41.248 [2024-10-14 14:42:21.769410] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.248 [2024-10-14 14:42:21.769419] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.248 [2024-10-14 14:42:21.769426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.248 [2024-10-14 14:42:21.772972] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.248 [2024-10-14 14:42:21.782178] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.248 [2024-10-14 14:42:21.782846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.248 [2024-10-14 14:42:21.782884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420
00:28:41.248 [2024-10-14 14:42:21.782895] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set
00:28:41.248 [2024-10-14 14:42:21.783140] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor
00:28:41.248 [2024-10-14 14:42:21.783371] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.248 [2024-10-14 14:42:21.783380] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.248 [2024-10-14 14:42:21.783388] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.248 [2024-10-14 14:42:21.786937] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.248 [2024-10-14 14:42:21.796127] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.248 [2024-10-14 14:42:21.796753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.248 [2024-10-14 14:42:21.796790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420
00:28:41.248 [2024-10-14 14:42:21.796801] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set
00:28:41.248 [2024-10-14 14:42:21.797039] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor
00:28:41.248 [2024-10-14 14:42:21.797269] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.248 [2024-10-14 14:42:21.797280] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.248 [2024-10-14 14:42:21.797290] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.248 [2024-10-14 14:42:21.800844] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.248 [2024-10-14 14:42:21.810042] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.248 [2024-10-14 14:42:21.810670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.248 [2024-10-14 14:42:21.810708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420
00:28:41.248 [2024-10-14 14:42:21.810719] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set
00:28:41.248 [2024-10-14 14:42:21.810957] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor
00:28:41.248 [2024-10-14 14:42:21.811186] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.248 [2024-10-14 14:42:21.811195] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.248 [2024-10-14 14:42:21.811203] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.248 [2024-10-14 14:42:21.814751] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.248 [2024-10-14 14:42:21.823940] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.248 [2024-10-14 14:42:21.824569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.248 [2024-10-14 14:42:21.824607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420
00:28:41.248 [2024-10-14 14:42:21.824617] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set
00:28:41.249 [2024-10-14 14:42:21.824855] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor
00:28:41.249 [2024-10-14 14:42:21.825087] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.249 [2024-10-14 14:42:21.825096] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.249 [2024-10-14 14:42:21.825104] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.249 [2024-10-14 14:42:21.828656] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.249 6132.00 IOPS, 23.95 MiB/s [2024-10-14T12:42:21.976Z] [2024-10-14 14:42:21.837836] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.249 [2024-10-14 14:42:21.838505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.249 [2024-10-14 14:42:21.838547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420
00:28:41.249 [2024-10-14 14:42:21.838558] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set
00:28:41.249 [2024-10-14 14:42:21.838796] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor
00:28:41.249 [2024-10-14 14:42:21.839019] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.249 [2024-10-14 14:42:21.839027] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.249 [2024-10-14 14:42:21.839035] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.249 [2024-10-14 14:42:21.842588] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.249 [2024-10-14 14:42:21.851776] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.249 [2024-10-14 14:42:21.852402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.249 [2024-10-14 14:42:21.852440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420
00:28:41.249 [2024-10-14 14:42:21.852451] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set
00:28:41.249 [2024-10-14 14:42:21.852689] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor
00:28:41.249 [2024-10-14 14:42:21.852911] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.249 [2024-10-14 14:42:21.852920] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.249 [2024-10-14 14:42:21.852927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.249 [2024-10-14 14:42:21.856491] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.249 [2024-10-14 14:42:21.865693] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.249 [2024-10-14 14:42:21.866365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.249 [2024-10-14 14:42:21.866402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420
00:28:41.249 [2024-10-14 14:42:21.866413] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set
00:28:41.249 [2024-10-14 14:42:21.866651] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor
00:28:41.249 [2024-10-14 14:42:21.866873] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.249 [2024-10-14 14:42:21.866882] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.249 [2024-10-14 14:42:21.866890] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.249 [2024-10-14 14:42:21.870446] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.249 [2024-10-14 14:42:21.879650] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.249 [2024-10-14 14:42:21.880287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.249 [2024-10-14 14:42:21.880324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420
00:28:41.249 [2024-10-14 14:42:21.880335] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set
00:28:41.249 [2024-10-14 14:42:21.880573] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor
00:28:41.249 [2024-10-14 14:42:21.880803] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.249 [2024-10-14 14:42:21.880811] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.249 [2024-10-14 14:42:21.880819] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.249 [2024-10-14 14:42:21.884372] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.249 [2024-10-14 14:42:21.893561] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.249 [2024-10-14 14:42:21.894165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.249 [2024-10-14 14:42:21.894203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420
00:28:41.249 [2024-10-14 14:42:21.894213] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set
00:28:41.249 [2024-10-14 14:42:21.894451] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor
00:28:41.249 [2024-10-14 14:42:21.894674] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.249 [2024-10-14 14:42:21.894682] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.249 [2024-10-14 14:42:21.894690] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.249 [2024-10-14 14:42:21.898245] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.249 [2024-10-14 14:42:21.907427] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.249 [2024-10-14 14:42:21.908102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.249 [2024-10-14 14:42:21.908139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420
00:28:41.249 [2024-10-14 14:42:21.908151] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set
00:28:41.249 [2024-10-14 14:42:21.908393] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor
00:28:41.249 [2024-10-14 14:42:21.908615] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.249 [2024-10-14 14:42:21.908624] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.249 [2024-10-14 14:42:21.908631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.249 [2024-10-14 14:42:21.912188] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.249 [2024-10-14 14:42:21.921378] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.249 [2024-10-14 14:42:21.922045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.249 [2024-10-14 14:42:21.922089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420
00:28:41.249 [2024-10-14 14:42:21.922100] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set
00:28:41.249 [2024-10-14 14:42:21.922338] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor
00:28:41.249 [2024-10-14 14:42:21.922561] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.249 [2024-10-14 14:42:21.922570] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.249 [2024-10-14 14:42:21.922577] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.249 [2024-10-14 14:42:21.926134] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.249 [2024-10-14 14:42:21.935327] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.249 [2024-10-14 14:42:21.935999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.249 [2024-10-14 14:42:21.936036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420
00:28:41.249 [2024-10-14 14:42:21.936048] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set
00:28:41.249 [2024-10-14 14:42:21.936296] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor
00:28:41.249 [2024-10-14 14:42:21.936519] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.249 [2024-10-14 14:42:21.936528] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.249 [2024-10-14 14:42:21.936536] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.249 [2024-10-14 14:42:21.940078] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.249 [2024-10-14 14:42:21.949269] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.249 [2024-10-14 14:42:21.949884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.249 [2024-10-14 14:42:21.949921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420
00:28:41.249 [2024-10-14 14:42:21.949932] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set
00:28:41.249 [2024-10-14 14:42:21.950179] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor
00:28:41.249 [2024-10-14 14:42:21.950402] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.249 [2024-10-14 14:42:21.950411] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.249 [2024-10-14 14:42:21.950418] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.249 [2024-10-14 14:42:21.953973] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.249 [2024-10-14 14:42:21.963164] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.249 [2024-10-14 14:42:21.963830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.249 [2024-10-14 14:42:21.963867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420
00:28:41.249 [2024-10-14 14:42:21.963878] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set
00:28:41.249 [2024-10-14 14:42:21.964125] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor
00:28:41.249 [2024-10-14 14:42:21.964348] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.249 [2024-10-14 14:42:21.964357] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.249 [2024-10-14 14:42:21.964365] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.249 [2024-10-14 14:42:21.967911] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.512 [2024-10-14 14:42:21.977108] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.512 [2024-10-14 14:42:21.977766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.512 [2024-10-14 14:42:21.977804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420
00:28:41.512 [2024-10-14 14:42:21.977820] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set
00:28:41.512 [2024-10-14 14:42:21.978058] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor
00:28:41.512 [2024-10-14 14:42:21.978292] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.512 [2024-10-14 14:42:21.978301] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.512 [2024-10-14 14:42:21.978308] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.512 [2024-10-14 14:42:21.981863] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.512 [2024-10-14 14:42:21.991055] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.512 [2024-10-14 14:42:21.991595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.512 [2024-10-14 14:42:21.991631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420
00:28:41.512 [2024-10-14 14:42:21.991642] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set
00:28:41.512 [2024-10-14 14:42:21.991880] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor
00:28:41.512 [2024-10-14 14:42:21.992111] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.512 [2024-10-14 14:42:21.992121] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.512 [2024-10-14 14:42:21.992128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.512 [2024-10-14 14:42:21.995674] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.512 [2024-10-14 14:42:22.004858] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.512 [2024-10-14 14:42:22.005405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.512 [2024-10-14 14:42:22.005424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420
00:28:41.512 [2024-10-14 14:42:22.005431] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set
00:28:41.512 [2024-10-14 14:42:22.005651] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor
00:28:41.512 [2024-10-14 14:42:22.005869] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.512 [2024-10-14 14:42:22.005877] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.512 [2024-10-14 14:42:22.005884] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.512 [2024-10-14 14:42:22.009432] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.512 [2024-10-14 14:42:22.018816] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.512 [2024-10-14 14:42:22.019336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.512 [2024-10-14 14:42:22.019352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420
00:28:41.512 [2024-10-14 14:42:22.019360] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set
00:28:41.512 [2024-10-14 14:42:22.019579] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor
00:28:41.512 [2024-10-14 14:42:22.019797] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.512 [2024-10-14 14:42:22.019809] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.512 [2024-10-14 14:42:22.019816] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.512 [2024-10-14 14:42:22.023357] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.512 [2024-10-14 14:42:22.032747] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.512 [2024-10-14 14:42:22.033348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.512 [2024-10-14 14:42:22.033386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420
00:28:41.512 [2024-10-14 14:42:22.033396] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set
00:28:41.512 [2024-10-14 14:42:22.033635] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor
00:28:41.512 [2024-10-14 14:42:22.033857] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.512 [2024-10-14 14:42:22.033866] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.512 [2024-10-14 14:42:22.033874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.512 [2024-10-14 14:42:22.037432] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.512 [2024-10-14 14:42:22.046627] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.512 [2024-10-14 14:42:22.047195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.512 [2024-10-14 14:42:22.047234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420
00:28:41.512 [2024-10-14 14:42:22.047246] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set
00:28:41.512 [2024-10-14 14:42:22.047485] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor
00:28:41.512 [2024-10-14 14:42:22.047708] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.512 [2024-10-14 14:42:22.047717] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.512 [2024-10-14 14:42:22.047725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.512 [2024-10-14 14:42:22.051278] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.512 [2024-10-14 14:42:22.060478] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.512 [2024-10-14 14:42:22.061028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.512 [2024-10-14 14:42:22.061047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420
00:28:41.512 [2024-10-14 14:42:22.061056] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set
00:28:41.512 [2024-10-14 14:42:22.061283] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor
00:28:41.512 [2024-10-14 14:42:22.061503] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.512 [2024-10-14 14:42:22.061513] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.512 [2024-10-14 14:42:22.061521] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.512 [2024-10-14 14:42:22.065058] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.512 [2024-10-14 14:42:22.074475] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.512 [2024-10-14 14:42:22.075142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.512 [2024-10-14 14:42:22.075180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420
00:28:41.512 [2024-10-14 14:42:22.075192] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set
00:28:41.513 [2024-10-14 14:42:22.075433] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor
00:28:41.513 [2024-10-14 14:42:22.075656] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.513 [2024-10-14 14:42:22.075664] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.513 [2024-10-14 14:42:22.075672] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.513 [2024-10-14 14:42:22.079233] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.513 [2024-10-14 14:42:22.088421] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:41.513 [2024-10-14 14:42:22.089119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:41.513 [2024-10-14 14:42:22.089157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420
00:28:41.513 [2024-10-14 14:42:22.089169] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set
00:28:41.513 [2024-10-14 14:42:22.089411] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor
00:28:41.513 [2024-10-14 14:42:22.089633] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:41.513 [2024-10-14 14:42:22.089641] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:41.513 [2024-10-14 14:42:22.089649] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:41.513 [2024-10-14 14:42:22.093202] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:41.513 [2024-10-14 14:42:22.102378] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.513 [2024-10-14 14:42:22.103048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.513 [2024-10-14 14:42:22.103092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:41.513 [2024-10-14 14:42:22.103103] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:41.513 [2024-10-14 14:42:22.103340] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:41.513 [2024-10-14 14:42:22.103563] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.513 [2024-10-14 14:42:22.103572] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.513 [2024-10-14 14:42:22.103579] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.513 [2024-10-14 14:42:22.107130] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.513 [2024-10-14 14:42:22.116310] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.513 [2024-10-14 14:42:22.116869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.513 [2024-10-14 14:42:22.116888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:41.513 [2024-10-14 14:42:22.116896] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:41.513 [2024-10-14 14:42:22.117127] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:41.513 [2024-10-14 14:42:22.117347] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.513 [2024-10-14 14:42:22.117356] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.513 [2024-10-14 14:42:22.117364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.513 [2024-10-14 14:42:22.120909] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.513 [2024-10-14 14:42:22.130102] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.513 [2024-10-14 14:42:22.130726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.513 [2024-10-14 14:42:22.130763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:41.513 [2024-10-14 14:42:22.130774] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:41.513 [2024-10-14 14:42:22.131011] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:41.513 [2024-10-14 14:42:22.131242] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.513 [2024-10-14 14:42:22.131252] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.513 [2024-10-14 14:42:22.131261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.513 [2024-10-14 14:42:22.134807] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.513 [2024-10-14 14:42:22.144005] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.513 [2024-10-14 14:42:22.144561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.513 [2024-10-14 14:42:22.144581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:41.513 [2024-10-14 14:42:22.144589] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:41.513 [2024-10-14 14:42:22.144808] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:41.513 [2024-10-14 14:42:22.145036] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.513 [2024-10-14 14:42:22.145046] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.513 [2024-10-14 14:42:22.145053] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.513 [2024-10-14 14:42:22.148600] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.513 [2024-10-14 14:42:22.158001] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.513 [2024-10-14 14:42:22.158634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.513 [2024-10-14 14:42:22.158672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:41.513 [2024-10-14 14:42:22.158685] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:41.513 [2024-10-14 14:42:22.158927] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:41.513 [2024-10-14 14:42:22.159161] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.513 [2024-10-14 14:42:22.159170] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.513 [2024-10-14 14:42:22.159183] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.513 [2024-10-14 14:42:22.162734] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.513 [2024-10-14 14:42:22.171925] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.513 [2024-10-14 14:42:22.172510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.513 [2024-10-14 14:42:22.172529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:41.513 [2024-10-14 14:42:22.172537] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:41.513 [2024-10-14 14:42:22.172756] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:41.513 [2024-10-14 14:42:22.172975] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.513 [2024-10-14 14:42:22.172983] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.513 [2024-10-14 14:42:22.172990] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.513 [2024-10-14 14:42:22.176536] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.513 [2024-10-14 14:42:22.185738] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.513 [2024-10-14 14:42:22.186262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.513 [2024-10-14 14:42:22.186279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:41.513 [2024-10-14 14:42:22.186287] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:41.513 [2024-10-14 14:42:22.186506] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:41.513 [2024-10-14 14:42:22.186724] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.513 [2024-10-14 14:42:22.186732] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.513 [2024-10-14 14:42:22.186740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.513 [2024-10-14 14:42:22.190284] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.513 [2024-10-14 14:42:22.199673] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.513 [2024-10-14 14:42:22.200220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.513 [2024-10-14 14:42:22.200257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:41.513 [2024-10-14 14:42:22.200269] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:41.513 [2024-10-14 14:42:22.200508] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:41.513 [2024-10-14 14:42:22.200730] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.513 [2024-10-14 14:42:22.200739] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.513 [2024-10-14 14:42:22.200747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.513 [2024-10-14 14:42:22.204306] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.513 [2024-10-14 14:42:22.213523] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.513 [2024-10-14 14:42:22.213999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.513 [2024-10-14 14:42:22.214017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:41.514 [2024-10-14 14:42:22.214025] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:41.514 [2024-10-14 14:42:22.214251] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:41.514 [2024-10-14 14:42:22.214470] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.514 [2024-10-14 14:42:22.214479] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.514 [2024-10-14 14:42:22.214487] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.514 [2024-10-14 14:42:22.218042] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.514 [2024-10-14 14:42:22.227444] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.514 [2024-10-14 14:42:22.228096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.514 [2024-10-14 14:42:22.228135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:41.514 [2024-10-14 14:42:22.228147] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:41.514 [2024-10-14 14:42:22.228386] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:41.514 [2024-10-14 14:42:22.228608] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.514 [2024-10-14 14:42:22.228617] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.514 [2024-10-14 14:42:22.228625] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.514 [2024-10-14 14:42:22.232184] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.776 [2024-10-14 14:42:22.241400] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.776 [2024-10-14 14:42:22.242088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.776 [2024-10-14 14:42:22.242126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:41.776 [2024-10-14 14:42:22.242137] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:41.776 [2024-10-14 14:42:22.242375] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:41.776 [2024-10-14 14:42:22.242598] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.776 [2024-10-14 14:42:22.242607] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.776 [2024-10-14 14:42:22.242614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.776 [2024-10-14 14:42:22.246167] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.776 [2024-10-14 14:42:22.255371] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.776 [2024-10-14 14:42:22.255916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.776 [2024-10-14 14:42:22.255934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:41.776 [2024-10-14 14:42:22.255942] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:41.776 [2024-10-14 14:42:22.256171] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:41.776 [2024-10-14 14:42:22.256391] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.776 [2024-10-14 14:42:22.256399] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.776 [2024-10-14 14:42:22.256407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.776 [2024-10-14 14:42:22.259946] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.776 [2024-10-14 14:42:22.269354] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.776 [2024-10-14 14:42:22.269981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.776 [2024-10-14 14:42:22.270019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:41.776 [2024-10-14 14:42:22.270030] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:41.776 [2024-10-14 14:42:22.270281] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:41.776 [2024-10-14 14:42:22.270504] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.776 [2024-10-14 14:42:22.270513] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.776 [2024-10-14 14:42:22.270521] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.776 [2024-10-14 14:42:22.274076] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.776 [2024-10-14 14:42:22.283302] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.776 [2024-10-14 14:42:22.283885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.776 [2024-10-14 14:42:22.283904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:41.776 [2024-10-14 14:42:22.283912] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:41.776 [2024-10-14 14:42:22.284139] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:41.776 [2024-10-14 14:42:22.284358] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.776 [2024-10-14 14:42:22.284366] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.776 [2024-10-14 14:42:22.284373] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.776 [2024-10-14 14:42:22.287917] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.776 [2024-10-14 14:42:22.297123] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.776 [2024-10-14 14:42:22.297643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.776 [2024-10-14 14:42:22.297681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:41.776 [2024-10-14 14:42:22.297693] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:41.776 [2024-10-14 14:42:22.297935] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:41.776 [2024-10-14 14:42:22.298168] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.776 [2024-10-14 14:42:22.298178] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.776 [2024-10-14 14:42:22.298190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.776 [2024-10-14 14:42:22.301741] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.776 [2024-10-14 14:42:22.310951] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.776 [2024-10-14 14:42:22.311495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.776 [2024-10-14 14:42:22.311515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:41.776 [2024-10-14 14:42:22.311523] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:41.776 [2024-10-14 14:42:22.311742] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:41.776 [2024-10-14 14:42:22.311960] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.776 [2024-10-14 14:42:22.311968] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.776 [2024-10-14 14:42:22.311975] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.776 [2024-10-14 14:42:22.315533] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.776 [2024-10-14 14:42:22.324936] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.776 [2024-10-14 14:42:22.325576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.776 [2024-10-14 14:42:22.325614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:41.776 [2024-10-14 14:42:22.325624] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:41.776 [2024-10-14 14:42:22.325862] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:41.776 [2024-10-14 14:42:22.326095] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.776 [2024-10-14 14:42:22.326104] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.776 [2024-10-14 14:42:22.326113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.776 [2024-10-14 14:42:22.329665] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.776 [2024-10-14 14:42:22.338875] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.776 [2024-10-14 14:42:22.339422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.776 [2024-10-14 14:42:22.339442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:41.776 [2024-10-14 14:42:22.339450] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:41.776 [2024-10-14 14:42:22.339670] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:41.776 [2024-10-14 14:42:22.339888] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.776 [2024-10-14 14:42:22.339896] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.776 [2024-10-14 14:42:22.339904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.776 [2024-10-14 14:42:22.343453] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.776 [2024-10-14 14:42:22.352851] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.776 [2024-10-14 14:42:22.353524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.776 [2024-10-14 14:42:22.353566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:41.776 [2024-10-14 14:42:22.353577] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:41.776 [2024-10-14 14:42:22.353816] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:41.776 [2024-10-14 14:42:22.354038] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.776 [2024-10-14 14:42:22.354047] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.776 [2024-10-14 14:42:22.354055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.776 [2024-10-14 14:42:22.357624] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.777 [2024-10-14 14:42:22.366831] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.777 [2024-10-14 14:42:22.367379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.777 [2024-10-14 14:42:22.367398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:41.777 [2024-10-14 14:42:22.367406] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:41.777 [2024-10-14 14:42:22.367626] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:41.777 [2024-10-14 14:42:22.367844] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.777 [2024-10-14 14:42:22.367853] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.777 [2024-10-14 14:42:22.367861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.777 [2024-10-14 14:42:22.371411] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.777 [2024-10-14 14:42:22.380827] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.777 [2024-10-14 14:42:22.381363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.777 [2024-10-14 14:42:22.381379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:41.777 [2024-10-14 14:42:22.381387] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:41.777 [2024-10-14 14:42:22.381605] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:41.777 [2024-10-14 14:42:22.381824] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.777 [2024-10-14 14:42:22.381833] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.777 [2024-10-14 14:42:22.381840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.777 [2024-10-14 14:42:22.385386] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.777 [2024-10-14 14:42:22.394790] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.777 [2024-10-14 14:42:22.395351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.777 [2024-10-14 14:42:22.395368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:41.777 [2024-10-14 14:42:22.395375] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:41.777 [2024-10-14 14:42:22.395594] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:41.777 [2024-10-14 14:42:22.395816] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.777 [2024-10-14 14:42:22.395825] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.777 [2024-10-14 14:42:22.395832] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.777 [2024-10-14 14:42:22.399383] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.777 [2024-10-14 14:42:22.408568] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.777 [2024-10-14 14:42:22.409092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.777 [2024-10-14 14:42:22.409108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:41.777 [2024-10-14 14:42:22.409115] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:41.777 [2024-10-14 14:42:22.409333] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:41.777 [2024-10-14 14:42:22.409551] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.777 [2024-10-14 14:42:22.409559] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.777 [2024-10-14 14:42:22.409567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.777 [2024-10-14 14:42:22.413110] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.777 [2024-10-14 14:42:22.422511] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.777 [2024-10-14 14:42:22.423075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.777 [2024-10-14 14:42:22.423091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:41.777 [2024-10-14 14:42:22.423098] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:41.777 [2024-10-14 14:42:22.423317] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:41.777 [2024-10-14 14:42:22.423535] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.777 [2024-10-14 14:42:22.423543] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.777 [2024-10-14 14:42:22.423550] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.777 [2024-10-14 14:42:22.427093] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.777 [2024-10-14 14:42:22.436502] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.777 [2024-10-14 14:42:22.437171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.777 [2024-10-14 14:42:22.437210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:41.777 [2024-10-14 14:42:22.437220] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:41.777 [2024-10-14 14:42:22.437458] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:41.777 [2024-10-14 14:42:22.437681] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.777 [2024-10-14 14:42:22.437690] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.777 [2024-10-14 14:42:22.437697] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.777 [2024-10-14 14:42:22.441254] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.777 [2024-10-14 14:42:22.450449] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.777 [2024-10-14 14:42:22.451146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.777 [2024-10-14 14:42:22.451183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:41.777 [2024-10-14 14:42:22.451195] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:41.777 [2024-10-14 14:42:22.451435] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:41.777 [2024-10-14 14:42:22.451657] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.777 [2024-10-14 14:42:22.451667] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.777 [2024-10-14 14:42:22.451674] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.777 [2024-10-14 14:42:22.455227] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.777 [2024-10-14 14:42:22.464432] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.777 [2024-10-14 14:42:22.465050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.777 [2024-10-14 14:42:22.465096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:41.777 [2024-10-14 14:42:22.465108] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:41.777 [2024-10-14 14:42:22.465347] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:41.777 [2024-10-14 14:42:22.465569] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.777 [2024-10-14 14:42:22.465578] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.777 [2024-10-14 14:42:22.465586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.777 [2024-10-14 14:42:22.469136] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.777 [2024-10-14 14:42:22.478324] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.777 [2024-10-14 14:42:22.479009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.777 [2024-10-14 14:42:22.479047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:41.777 [2024-10-14 14:42:22.479059] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:41.777 [2024-10-14 14:42:22.479308] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:41.777 [2024-10-14 14:42:22.479530] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.777 [2024-10-14 14:42:22.479539] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.777 [2024-10-14 14:42:22.479546] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.777 [2024-10-14 14:42:22.483105] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:41.777 [2024-10-14 14:42:22.492316] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:41.777 [2024-10-14 14:42:22.492854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:41.777 [2024-10-14 14:42:22.492873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:41.777 [2024-10-14 14:42:22.492886] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:41.777 [2024-10-14 14:42:22.493110] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:41.777 [2024-10-14 14:42:22.493330] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:41.777 [2024-10-14 14:42:22.493338] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:41.777 [2024-10-14 14:42:22.493345] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:41.777 [2024-10-14 14:42:22.496886] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.040 [2024-10-14 14:42:22.506277] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.040 [2024-10-14 14:42:22.506694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.040 [2024-10-14 14:42:22.506714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:42.040 [2024-10-14 14:42:22.506722] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:42.040 [2024-10-14 14:42:22.506942] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:42.040 [2024-10-14 14:42:22.507169] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.040 [2024-10-14 14:42:22.507179] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.040 [2024-10-14 14:42:22.507186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.040 [2024-10-14 14:42:22.510724] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.040 [2024-10-14 14:42:22.520116] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.040 [2024-10-14 14:42:22.520668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.040 [2024-10-14 14:42:22.520684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:42.040 [2024-10-14 14:42:22.520691] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:42.040 [2024-10-14 14:42:22.520910] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:42.040 [2024-10-14 14:42:22.521134] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.040 [2024-10-14 14:42:22.521144] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.040 [2024-10-14 14:42:22.521151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.040 [2024-10-14 14:42:22.524687] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.040 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3569706 Killed "${NVMF_APP[@]}" "$@" 00:28:42.040 14:42:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:28:42.040 14:42:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:42.040 14:42:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:42.040 14:42:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:42.040 [2024-10-14 14:42:22.534078] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.040 14:42:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:42.040 [2024-10-14 14:42:22.534735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.040 [2024-10-14 14:42:22.534779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:42.040 [2024-10-14 14:42:22.534790] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:42.040 [2024-10-14 14:42:22.535029] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:42.040 [2024-10-14 14:42:22.535267] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.040 [2024-10-14 14:42:22.535279] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.040 [2024-10-14 14:42:22.535287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.040 [2024-10-14 14:42:22.538837] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.040 14:42:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # nvmfpid=3571331 00:28:42.040 14:42:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # waitforlisten 3571331 00:28:42.040 14:42:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:42.040 14:42:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 3571331 ']' 00:28:42.040 14:42:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:42.040 14:42:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:42.040 14:42:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:42.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:42.040 14:42:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:42.040 14:42:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:42.040 [2024-10-14 14:42:22.548030] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.040 [2024-10-14 14:42:22.548705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.040 [2024-10-14 14:42:22.548744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:42.040 [2024-10-14 14:42:22.548756] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:42.040 [2024-10-14 14:42:22.548995] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:42.040 [2024-10-14 14:42:22.549227] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.040 [2024-10-14 14:42:22.549236] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.040 [2024-10-14 14:42:22.549244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.040 [2024-10-14 14:42:22.552792] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.040 [2024-10-14 14:42:22.562009] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.040 [2024-10-14 14:42:22.562643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.040 [2024-10-14 14:42:22.562681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:42.040 [2024-10-14 14:42:22.562692] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:42.040 [2024-10-14 14:42:22.562930] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:42.040 [2024-10-14 14:42:22.563160] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.040 [2024-10-14 14:42:22.563178] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.040 [2024-10-14 14:42:22.563186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.040 [2024-10-14 14:42:22.566734] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.040 [2024-10-14 14:42:22.575933] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.040 [2024-10-14 14:42:22.576513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.040 [2024-10-14 14:42:22.576533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:42.040 [2024-10-14 14:42:22.576541] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:42.040 [2024-10-14 14:42:22.576760] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:42.040 [2024-10-14 14:42:22.576979] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.040 [2024-10-14 14:42:22.576988] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.040 [2024-10-14 14:42:22.576996] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.040 [2024-10-14 14:42:22.580558] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.040 [2024-10-14 14:42:22.589751] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.040 [2024-10-14 14:42:22.590212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.040 [2024-10-14 14:42:22.590249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:42.040 [2024-10-14 14:42:22.590261] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:42.040 [2024-10-14 14:42:22.590498] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:42.040 [2024-10-14 14:42:22.590721] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.040 [2024-10-14 14:42:22.590729] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.040 [2024-10-14 14:42:22.590737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.040 [2024-10-14 14:42:22.593167] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:28:42.040 [2024-10-14 14:42:22.593212] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:42.040 [2024-10-14 14:42:22.594293] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.040 [2024-10-14 14:42:22.603694] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.040 [2024-10-14 14:42:22.604245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.040 [2024-10-14 14:42:22.604263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:42.040 [2024-10-14 14:42:22.604272] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:42.040 [2024-10-14 14:42:22.604491] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:42.040 [2024-10-14 14:42:22.604709] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.040 [2024-10-14 14:42:22.604722] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.041 [2024-10-14 14:42:22.604730] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.041 [2024-10-14 14:42:22.608281] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.041 [2024-10-14 14:42:22.617677] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.041 [2024-10-14 14:42:22.618108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.041 [2024-10-14 14:42:22.618132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:42.041 [2024-10-14 14:42:22.618141] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:42.041 [2024-10-14 14:42:22.618364] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:42.041 [2024-10-14 14:42:22.618585] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.041 [2024-10-14 14:42:22.618594] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.041 [2024-10-14 14:42:22.618601] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.041 [2024-10-14 14:42:22.622150] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.041 [2024-10-14 14:42:22.631634] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.041 [2024-10-14 14:42:22.632112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.041 [2024-10-14 14:42:22.632132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:42.041 [2024-10-14 14:42:22.632139] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:42.041 [2024-10-14 14:42:22.632359] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:42.041 [2024-10-14 14:42:22.632578] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.041 [2024-10-14 14:42:22.632585] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.041 [2024-10-14 14:42:22.632593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.041 [2024-10-14 14:42:22.636142] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.041 [2024-10-14 14:42:22.645543] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.041 [2024-10-14 14:42:22.646050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.041 [2024-10-14 14:42:22.646072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:42.041 [2024-10-14 14:42:22.646080] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:42.041 [2024-10-14 14:42:22.646299] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:42.041 [2024-10-14 14:42:22.646517] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.041 [2024-10-14 14:42:22.646526] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.041 [2024-10-14 14:42:22.646533] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.041 [2024-10-14 14:42:22.650076] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.041 [2024-10-14 14:42:22.659482] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.041 [2024-10-14 14:42:22.660052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.041 [2024-10-14 14:42:22.660073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:42.041 [2024-10-14 14:42:22.660081] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:42.041 [2024-10-14 14:42:22.660300] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:42.041 [2024-10-14 14:42:22.660519] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.041 [2024-10-14 14:42:22.660527] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.041 [2024-10-14 14:42:22.660535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.041 [2024-10-14 14:42:22.664078] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.041 [2024-10-14 14:42:22.673472] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.041 [2024-10-14 14:42:22.674009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.041 [2024-10-14 14:42:22.674024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:42.041 [2024-10-14 14:42:22.674033] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:42.041 [2024-10-14 14:42:22.674256] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:42.041 [2024-10-14 14:42:22.674475] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.041 [2024-10-14 14:42:22.674483] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.041 [2024-10-14 14:42:22.674490] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.041 [2024-10-14 14:42:22.677380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:42.041 [2024-10-14 14:42:22.678023] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.041 [2024-10-14 14:42:22.687439] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.041 [2024-10-14 14:42:22.687858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.041 [2024-10-14 14:42:22.687874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:42.041 [2024-10-14 14:42:22.687882] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:42.041 [2024-10-14 14:42:22.688106] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:42.041 [2024-10-14 14:42:22.688325] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.041 [2024-10-14 14:42:22.688334] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.041 [2024-10-14 14:42:22.688342] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.041 [2024-10-14 14:42:22.691882] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.041 [2024-10-14 14:42:22.701302] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.041 [2024-10-14 14:42:22.701944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.041 [2024-10-14 14:42:22.701983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:42.041 [2024-10-14 14:42:22.701999] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:42.041 [2024-10-14 14:42:22.702248] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:42.041 [2024-10-14 14:42:22.702472] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.041 [2024-10-14 14:42:22.702481] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.041 [2024-10-14 14:42:22.702488] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.041 [2024-10-14 14:42:22.706037] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:42.041 [2024-10-14 14:42:22.706664] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:42.041 [2024-10-14 14:42:22.706685] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:42.041 [2024-10-14 14:42:22.706692] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:42.041 [2024-10-14 14:42:22.706698] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:28:42.041 [2024-10-14 14:42:22.706702] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:42.041 [2024-10-14 14:42:22.707778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:42.041 [2024-10-14 14:42:22.707933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:42.041 [2024-10-14 14:42:22.707935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:42.041 [2024-10-14 14:42:22.715237] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.041 [2024-10-14 14:42:22.715807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.041 [2024-10-14 14:42:22.715846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:42.041 [2024-10-14 14:42:22.715859] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:42.041 [2024-10-14 14:42:22.716108] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:42.041 [2024-10-14 14:42:22.716331] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.041 [2024-10-14 14:42:22.716340] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.041 [2024-10-14 14:42:22.716348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.041 [2024-10-14 14:42:22.719891] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.041 [2024-10-14 14:42:22.729087] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.041 [2024-10-14 14:42:22.729546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.041 [2024-10-14 14:42:22.729565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:42.041 [2024-10-14 14:42:22.729574] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:42.041 [2024-10-14 14:42:22.729793] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:42.041 [2024-10-14 14:42:22.730013] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.041 [2024-10-14 14:42:22.730021] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.041 [2024-10-14 14:42:22.730029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.041 [2024-10-14 14:42:22.733580] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.041 [2024-10-14 14:42:22.743002] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.041 [2024-10-14 14:42:22.743691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.041 [2024-10-14 14:42:22.743732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:42.041 [2024-10-14 14:42:22.743744] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:42.041 [2024-10-14 14:42:22.743985] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:42.041 [2024-10-14 14:42:22.744216] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.041 [2024-10-14 14:42:22.744226] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.042 [2024-10-14 14:42:22.744234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.042 [2024-10-14 14:42:22.747783] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.042 [2024-10-14 14:42:22.756988] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.042 [2024-10-14 14:42:22.757421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.042 [2024-10-14 14:42:22.757440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:42.042 [2024-10-14 14:42:22.757449] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:42.042 [2024-10-14 14:42:22.757669] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:42.042 [2024-10-14 14:42:22.757888] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.042 [2024-10-14 14:42:22.757896] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.042 [2024-10-14 14:42:22.757903] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.042 [2024-10-14 14:42:22.761543] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.304 [2024-10-14 14:42:22.770946] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.304 [2024-10-14 14:42:22.771247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.304 [2024-10-14 14:42:22.771272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:42.304 [2024-10-14 14:42:22.771281] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:42.304 [2024-10-14 14:42:22.771505] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:42.304 [2024-10-14 14:42:22.771724] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.304 [2024-10-14 14:42:22.771733] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.304 [2024-10-14 14:42:22.771741] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.304 [2024-10-14 14:42:22.775292] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.304 [2024-10-14 14:42:22.784909] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.304 [2024-10-14 14:42:22.785358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.304 [2024-10-14 14:42:22.785397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:42.304 [2024-10-14 14:42:22.785414] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:42.304 [2024-10-14 14:42:22.785655] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:42.304 [2024-10-14 14:42:22.785877] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.304 [2024-10-14 14:42:22.785886] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.304 [2024-10-14 14:42:22.785893] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.304 [2024-10-14 14:42:22.789444] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.304 [2024-10-14 14:42:22.798851] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.304 [2024-10-14 14:42:22.799494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.304 [2024-10-14 14:42:22.799532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:42.304 [2024-10-14 14:42:22.799545] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:42.304 [2024-10-14 14:42:22.799785] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:42.304 [2024-10-14 14:42:22.800008] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.304 [2024-10-14 14:42:22.800018] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.304 [2024-10-14 14:42:22.800027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.304 [2024-10-14 14:42:22.803582] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.304 [2024-10-14 14:42:22.812776] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.304 [2024-10-14 14:42:22.813455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.304 [2024-10-14 14:42:22.813494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:42.304 [2024-10-14 14:42:22.813505] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:42.304 [2024-10-14 14:42:22.813743] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:42.304 [2024-10-14 14:42:22.813966] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.304 [2024-10-14 14:42:22.813975] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.304 [2024-10-14 14:42:22.813984] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.304 [2024-10-14 14:42:22.817542] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.304 [2024-10-14 14:42:22.826757] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.304 [2024-10-14 14:42:22.827480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.304 [2024-10-14 14:42:22.827518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:42.304 [2024-10-14 14:42:22.827529] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:42.304 [2024-10-14 14:42:22.827767] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:42.304 [2024-10-14 14:42:22.827990] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.304 [2024-10-14 14:42:22.828003] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.304 [2024-10-14 14:42:22.828012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.304 [2024-10-14 14:42:22.831567] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.304 5110.00 IOPS, 19.96 MiB/s [2024-10-14T12:42:23.031Z] [2024-10-14 14:42:22.840753] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.304 [2024-10-14 14:42:22.841485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.304 [2024-10-14 14:42:22.841523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:42.304 [2024-10-14 14:42:22.841535] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:42.304 [2024-10-14 14:42:22.841774] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:42.304 [2024-10-14 14:42:22.841996] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.304 [2024-10-14 14:42:22.842005] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.304 [2024-10-14 14:42:22.842013] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.304 [2024-10-14 14:42:22.845569] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.304 [2024-10-14 14:42:22.854554] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.304 [2024-10-14 14:42:22.855084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.304 [2024-10-14 14:42:22.855123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:42.304 [2024-10-14 14:42:22.855135] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:42.304 [2024-10-14 14:42:22.855374] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:42.304 [2024-10-14 14:42:22.855596] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.304 [2024-10-14 14:42:22.855605] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.304 [2024-10-14 14:42:22.855613] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.304 [2024-10-14 14:42:22.859176] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.304 [2024-10-14 14:42:22.868369] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.304 [2024-10-14 14:42:22.869053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.304 [2024-10-14 14:42:22.869099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:42.304 [2024-10-14 14:42:22.869111] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:42.305 [2024-10-14 14:42:22.869352] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:42.305 [2024-10-14 14:42:22.869576] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.305 [2024-10-14 14:42:22.869584] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.305 [2024-10-14 14:42:22.869592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.305 [2024-10-14 14:42:22.873143] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.305 [2024-10-14 14:42:22.882341] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.305 [2024-10-14 14:42:22.883013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.305 [2024-10-14 14:42:22.883052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:42.305 [2024-10-14 14:42:22.883070] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:42.305 [2024-10-14 14:42:22.883309] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:42.305 [2024-10-14 14:42:22.883531] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.305 [2024-10-14 14:42:22.883540] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.305 [2024-10-14 14:42:22.883547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.305 [2024-10-14 14:42:22.887095] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.305 [2024-10-14 14:42:22.896281] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.305 [2024-10-14 14:42:22.896874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.305 [2024-10-14 14:42:22.896892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:42.305 [2024-10-14 14:42:22.896900] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:42.305 [2024-10-14 14:42:22.897125] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:42.305 [2024-10-14 14:42:22.897344] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.305 [2024-10-14 14:42:22.897352] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.305 [2024-10-14 14:42:22.897360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.305 [2024-10-14 14:42:22.900895] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.305 [2024-10-14 14:42:22.910088] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.305 [2024-10-14 14:42:22.910697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.305 [2024-10-14 14:42:22.910735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:42.305 [2024-10-14 14:42:22.910746] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:42.305 [2024-10-14 14:42:22.910984] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:42.305 [2024-10-14 14:42:22.911214] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.305 [2024-10-14 14:42:22.911224] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.305 [2024-10-14 14:42:22.911232] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.305 [2024-10-14 14:42:22.914775] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.305 [2024-10-14 14:42:22.923961] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.305 [2024-10-14 14:42:22.924658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.305 [2024-10-14 14:42:22.924696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:42.305 [2024-10-14 14:42:22.924706] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:42.305 [2024-10-14 14:42:22.924949] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:42.305 [2024-10-14 14:42:22.925179] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.305 [2024-10-14 14:42:22.925189] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.305 [2024-10-14 14:42:22.925197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.305 [2024-10-14 14:42:22.928740] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.305 [2024-10-14 14:42:22.937945] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.305 [2024-10-14 14:42:22.938595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.305 [2024-10-14 14:42:22.938634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:42.305 [2024-10-14 14:42:22.938645] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:42.305 [2024-10-14 14:42:22.938883] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:42.305 [2024-10-14 14:42:22.939113] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.305 [2024-10-14 14:42:22.939122] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.305 [2024-10-14 14:42:22.939130] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.305 [2024-10-14 14:42:22.942675] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.305 [2024-10-14 14:42:22.951867] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.305 [2024-10-14 14:42:22.952556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.305 [2024-10-14 14:42:22.952593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:42.305 [2024-10-14 14:42:22.952604] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:42.305 [2024-10-14 14:42:22.952842] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:42.305 [2024-10-14 14:42:22.953072] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.305 [2024-10-14 14:42:22.953083] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.305 [2024-10-14 14:42:22.953091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.305 [2024-10-14 14:42:22.956634] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.305 [2024-10-14 14:42:22.965836] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.305 [2024-10-14 14:42:22.966369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.305 [2024-10-14 14:42:22.966388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:42.305 [2024-10-14 14:42:22.966396] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:42.305 [2024-10-14 14:42:22.966615] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:42.305 [2024-10-14 14:42:22.966833] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.305 [2024-10-14 14:42:22.966842] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.305 [2024-10-14 14:42:22.966854] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.305 [2024-10-14 14:42:22.970394] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.305 [2024-10-14 14:42:22.979788] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.305 [2024-10-14 14:42:22.980214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.305 [2024-10-14 14:42:22.980233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:42.305 [2024-10-14 14:42:22.980241] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:42.305 [2024-10-14 14:42:22.980460] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:42.305 [2024-10-14 14:42:22.980679] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.305 [2024-10-14 14:42:22.980686] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.305 [2024-10-14 14:42:22.980693] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.305 [2024-10-14 14:42:22.984230] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.305 [2024-10-14 14:42:22.993611] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.305 [2024-10-14 14:42:22.994300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.305 [2024-10-14 14:42:22.994338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:42.305 [2024-10-14 14:42:22.994349] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:42.305 [2024-10-14 14:42:22.994588] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:42.305 [2024-10-14 14:42:22.994810] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.305 [2024-10-14 14:42:22.994819] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.305 [2024-10-14 14:42:22.994826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.305 [2024-10-14 14:42:22.998377] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.306 [2024-10-14 14:42:23.007557] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.306 [2024-10-14 14:42:23.008182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.306 [2024-10-14 14:42:23.008220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:42.306 [2024-10-14 14:42:23.008232] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:42.306 [2024-10-14 14:42:23.008471] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:42.306 [2024-10-14 14:42:23.008693] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.306 [2024-10-14 14:42:23.008703] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.306 [2024-10-14 14:42:23.008710] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.306 [2024-10-14 14:42:23.012262] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.306 [2024-10-14 14:42:23.021444] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.306 [2024-10-14 14:42:23.022134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.306 [2024-10-14 14:42:23.022172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:42.306 [2024-10-14 14:42:23.022185] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:42.306 [2024-10-14 14:42:23.022425] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:42.306 [2024-10-14 14:42:23.022647] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.306 [2024-10-14 14:42:23.022655] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.306 [2024-10-14 14:42:23.022663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.306 [2024-10-14 14:42:23.026217] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.568 [2024-10-14 14:42:23.035404] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.568 [2024-10-14 14:42:23.035962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.568 [2024-10-14 14:42:23.035982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:42.568 [2024-10-14 14:42:23.035990] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:42.568 [2024-10-14 14:42:23.036215] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:42.568 [2024-10-14 14:42:23.036434] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.568 [2024-10-14 14:42:23.036443] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.568 [2024-10-14 14:42:23.036451] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.568 [2024-10-14 14:42:23.039985] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.568 [2024-10-14 14:42:23.049371] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.568 [2024-10-14 14:42:23.049888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.568 [2024-10-14 14:42:23.049927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:42.568 [2024-10-14 14:42:23.049937] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:42.568 [2024-10-14 14:42:23.050183] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:42.568 [2024-10-14 14:42:23.050406] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.568 [2024-10-14 14:42:23.050415] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.568 [2024-10-14 14:42:23.050424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.568 [2024-10-14 14:42:23.053969] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.568 [2024-10-14 14:42:23.063168] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.568 [2024-10-14 14:42:23.063823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.568 [2024-10-14 14:42:23.063861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:42.568 [2024-10-14 14:42:23.063872] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:42.568 [2024-10-14 14:42:23.064120] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:42.568 [2024-10-14 14:42:23.064348] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.568 [2024-10-14 14:42:23.064356] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.568 [2024-10-14 14:42:23.064364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.568 [2024-10-14 14:42:23.067911] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.568 [2024-10-14 14:42:23.077090] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:42.568 [2024-10-14 14:42:23.077740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:42.568 [2024-10-14 14:42:23.077779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420
00:28:42.568 [2024-10-14 14:42:23.077789] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set
00:28:42.568 [2024-10-14 14:42:23.078027] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor
00:28:42.568 [2024-10-14 14:42:23.078257] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:42.568 [2024-10-14 14:42:23.078267] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:42.568 [2024-10-14 14:42:23.078274] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:42.568 [2024-10-14 14:42:23.081822] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:42.568 [2024-10-14 14:42:23.091006] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:42.568 [2024-10-14 14:42:23.091613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:42.568 [2024-10-14 14:42:23.091632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420
00:28:42.568 [2024-10-14 14:42:23.091640] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set
00:28:42.568 [2024-10-14 14:42:23.091859] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor
00:28:42.568 [2024-10-14 14:42:23.092084] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:42.568 [2024-10-14 14:42:23.092093] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:42.568 [2024-10-14 14:42:23.092100] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:42.568 [2024-10-14 14:42:23.095637] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:42.568 [2024-10-14 14:42:23.104809] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:42.568 [2024-10-14 14:42:23.105332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:42.568 [2024-10-14 14:42:23.105371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420
00:28:42.568 [2024-10-14 14:42:23.105382] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set
00:28:42.568 [2024-10-14 14:42:23.105620] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor
00:28:42.568 [2024-10-14 14:42:23.105843] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:42.568 [2024-10-14 14:42:23.105851] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:42.568 [2024-10-14 14:42:23.105858] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:42.568 [2024-10-14 14:42:23.109413] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:42.568 [2024-10-14 14:42:23.118612] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:42.568 [2024-10-14 14:42:23.119374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:42.568 [2024-10-14 14:42:23.119412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420
00:28:42.568 [2024-10-14 14:42:23.119422] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set
00:28:42.568 [2024-10-14 14:42:23.119661] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor
00:28:42.568 [2024-10-14 14:42:23.119883] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:42.568 [2024-10-14 14:42:23.119892] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:42.568 [2024-10-14 14:42:23.119900] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:42.568 [2024-10-14 14:42:23.123452] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:42.568 [2024-10-14 14:42:23.132428] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:42.569 [2024-10-14 14:42:23.132998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:42.569 [2024-10-14 14:42:23.133037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420
00:28:42.569 [2024-10-14 14:42:23.133049] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set
00:28:42.569 [2024-10-14 14:42:23.133296] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor
00:28:42.569 [2024-10-14 14:42:23.133520] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:42.569 [2024-10-14 14:42:23.133528] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:42.569 [2024-10-14 14:42:23.133536] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:42.569 [2024-10-14 14:42:23.137084] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:42.569 [2024-10-14 14:42:23.146286] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:42.569 [2024-10-14 14:42:23.146863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:42.569 [2024-10-14 14:42:23.146900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420
00:28:42.569 [2024-10-14 14:42:23.146911] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set
00:28:42.569 [2024-10-14 14:42:23.147158] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor
00:28:42.569 [2024-10-14 14:42:23.147381] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:42.569 [2024-10-14 14:42:23.147390] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:42.569 [2024-10-14 14:42:23.147398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:42.569 [2024-10-14 14:42:23.150942] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:42.569 [2024-10-14 14:42:23.160146] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:42.569 [2024-10-14 14:42:23.160829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:42.569 [2024-10-14 14:42:23.160867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420
00:28:42.569 [2024-10-14 14:42:23.160883] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set
00:28:42.569 [2024-10-14 14:42:23.161131] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor
00:28:42.569 [2024-10-14 14:42:23.161355] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:42.569 [2024-10-14 14:42:23.161364] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:42.569 [2024-10-14 14:42:23.161372] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:42.569 [2024-10-14 14:42:23.164917] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:42.569 [2024-10-14 14:42:23.174115] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:42.569 [2024-10-14 14:42:23.174653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:42.569 [2024-10-14 14:42:23.174691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420
00:28:42.569 [2024-10-14 14:42:23.174702] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set
00:28:42.569 [2024-10-14 14:42:23.174941] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor
00:28:42.569 [2024-10-14 14:42:23.175171] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:42.569 [2024-10-14 14:42:23.175181] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:42.569 [2024-10-14 14:42:23.175188] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:42.569 [2024-10-14 14:42:23.178729] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:42.569 [2024-10-14 14:42:23.187934] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:42.569 [2024-10-14 14:42:23.188502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:42.569 [2024-10-14 14:42:23.188522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420
00:28:42.569 [2024-10-14 14:42:23.188530] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set
00:28:42.569 [2024-10-14 14:42:23.188750] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor
00:28:42.569 [2024-10-14 14:42:23.188971] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:42.569 [2024-10-14 14:42:23.188980] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:42.569 [2024-10-14 14:42:23.188987] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:42.569 [2024-10-14 14:42:23.192530] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:42.569 [2024-10-14 14:42:23.201927] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:42.569 [2024-10-14 14:42:23.202471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:42.569 [2024-10-14 14:42:23.202487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420
00:28:42.569 [2024-10-14 14:42:23.202494] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set
00:28:42.569 [2024-10-14 14:42:23.202713] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor
00:28:42.569 [2024-10-14 14:42:23.202937] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:42.569 [2024-10-14 14:42:23.202946] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:42.569 [2024-10-14 14:42:23.202953] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:42.569 [2024-10-14 14:42:23.206497] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:42.569 [2024-10-14 14:42:23.215892] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:42.569 [2024-10-14 14:42:23.216319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:42.569 [2024-10-14 14:42:23.216335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420
00:28:42.569 [2024-10-14 14:42:23.216343] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set
00:28:42.569 [2024-10-14 14:42:23.216561] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor
00:28:42.569 [2024-10-14 14:42:23.216779] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:42.569 [2024-10-14 14:42:23.216788] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:42.569 [2024-10-14 14:42:23.216795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:42.569 [2024-10-14 14:42:23.220337] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:42.569 [2024-10-14 14:42:23.229723] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:42.569 [2024-10-14 14:42:23.230351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:42.569 [2024-10-14 14:42:23.230388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420
00:28:42.569 [2024-10-14 14:42:23.230399] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set
00:28:42.569 [2024-10-14 14:42:23.230638] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor
00:28:42.569 [2024-10-14 14:42:23.230860] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:42.569 [2024-10-14 14:42:23.230869] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:42.569 [2024-10-14 14:42:23.230878] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:42.569 [2024-10-14 14:42:23.234432] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:42.569 [2024-10-14 14:42:23.243629] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:42.569 [2024-10-14 14:42:23.244368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:42.569 [2024-10-14 14:42:23.244406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420
00:28:42.569 [2024-10-14 14:42:23.244417] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set
00:28:42.569 [2024-10-14 14:42:23.244655] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor
00:28:42.569 [2024-10-14 14:42:23.244878] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:42.569 [2024-10-14 14:42:23.244886] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:42.569 [2024-10-14 14:42:23.244894] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:42.569 [2024-10-14 14:42:23.248446] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:42.569 [2024-10-14 14:42:23.257440] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:42.569 [2024-10-14 14:42:23.258126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:42.569 [2024-10-14 14:42:23.258164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420
00:28:42.569 [2024-10-14 14:42:23.258175] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set
00:28:42.569 [2024-10-14 14:42:23.258414] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor
00:28:42.569 [2024-10-14 14:42:23.258646] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:42.569 [2024-10-14 14:42:23.258656] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:42.569 [2024-10-14 14:42:23.258664] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:42.569 [2024-10-14 14:42:23.262217] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:42.569 [2024-10-14 14:42:23.271409] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:42.569 [2024-10-14 14:42:23.272087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:42.569 [2024-10-14 14:42:23.272125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420
00:28:42.569 [2024-10-14 14:42:23.272137] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set
00:28:42.569 [2024-10-14 14:42:23.272376] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor
00:28:42.569 [2024-10-14 14:42:23.272598] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:42.569 [2024-10-14 14:42:23.272607] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:42.569 [2024-10-14 14:42:23.272615] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:42.570 [2024-10-14 14:42:23.276160] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:42.570 [2024-10-14 14:42:23.285363] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:42.570 [2024-10-14 14:42:23.286043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:42.570 [2024-10-14 14:42:23.286088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420
00:28:42.570 [2024-10-14 14:42:23.286099] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set
00:28:42.570 [2024-10-14 14:42:23.286337] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor
00:28:42.570 [2024-10-14 14:42:23.286560] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:42.570 [2024-10-14 14:42:23.286568] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:42.570 [2024-10-14 14:42:23.286576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:42.570 [2024-10-14 14:42:23.290122] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:42.832 [2024-10-14 14:42:23.299315] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:42.832 [2024-10-14 14:42:23.299916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:42.832 [2024-10-14 14:42:23.299936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420
00:28:42.832 [2024-10-14 14:42:23.299950] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set
00:28:42.832 [2024-10-14 14:42:23.300177] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor
00:28:42.832 [2024-10-14 14:42:23.300396] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:42.832 [2024-10-14 14:42:23.300404] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:42.832 [2024-10-14 14:42:23.300412] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:42.832 [2024-10-14 14:42:23.303950] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:42.832 [2024-10-14 14:42:23.313132] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:42.832 [2024-10-14 14:42:23.313780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:42.832 [2024-10-14 14:42:23.313818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420
00:28:42.832 [2024-10-14 14:42:23.313830] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set
00:28:42.832 [2024-10-14 14:42:23.314078] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor
00:28:42.832 [2024-10-14 14:42:23.314302] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:42.832 [2024-10-14 14:42:23.314313] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:42.832 [2024-10-14 14:42:23.314322] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:42.832 [2024-10-14 14:42:23.317867] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:42.832 [2024-10-14 14:42:23.327072] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:42.832 [2024-10-14 14:42:23.327778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:42.832 [2024-10-14 14:42:23.327816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420
00:28:42.832 [2024-10-14 14:42:23.327828] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set
00:28:42.832 [2024-10-14 14:42:23.328075] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor
00:28:42.832 [2024-10-14 14:42:23.328298] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:42.832 [2024-10-14 14:42:23.328307] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:42.832 [2024-10-14 14:42:23.328315] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:42.832 [2024-10-14 14:42:23.331861] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:42.832 [2024-10-14 14:42:23.341059] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:42.832 [2024-10-14 14:42:23.341707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:42.832 [2024-10-14 14:42:23.341745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420
00:28:42.832 [2024-10-14 14:42:23.341755] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set
00:28:42.832 [2024-10-14 14:42:23.341994] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor
00:28:42.832 [2024-10-14 14:42:23.342225] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:42.832 [2024-10-14 14:42:23.342239] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:42.832 [2024-10-14 14:42:23.342247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:42.832 [2024-10-14 14:42:23.345792] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:42.832 [2024-10-14 14:42:23.354980] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:42.832 [2024-10-14 14:42:23.355689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:42.832 [2024-10-14 14:42:23.355727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420
00:28:42.832 [2024-10-14 14:42:23.355738] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set
00:28:42.832 [2024-10-14 14:42:23.355977] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor
00:28:42.832 [2024-10-14 14:42:23.356211] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:42.832 [2024-10-14 14:42:23.356221] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:42.832 [2024-10-14 14:42:23.356229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:42.832 [2024-10-14 14:42:23.359785] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:42.832 [2024-10-14 14:42:23.368773] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:42.832 [2024-10-14 14:42:23.369315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:42.832 [2024-10-14 14:42:23.369334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420
00:28:42.832 [2024-10-14 14:42:23.369342] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set
00:28:42.832 [2024-10-14 14:42:23.369562] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor
00:28:42.832 [2024-10-14 14:42:23.369780] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:42.832 [2024-10-14 14:42:23.369788] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:42.832 [2024-10-14 14:42:23.369795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:42.832 [2024-10-14 14:42:23.373341] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:42.832 [2024-10-14 14:42:23.382741] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:42.832 [2024-10-14 14:42:23.383345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:42.832 [2024-10-14 14:42:23.383383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420
00:28:42.832 [2024-10-14 14:42:23.383394] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set
00:28:42.832 [2024-10-14 14:42:23.383633] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor
00:28:42.832 [2024-10-14 14:42:23.383856] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:42.832 [2024-10-14 14:42:23.383865] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:42.832 [2024-10-14 14:42:23.383872] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:42.832 [2024-10-14 14:42:23.387428] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:42.832 14:42:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:28:42.832 14:42:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0
00:28:42.832 14:42:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:28:42.832 14:42:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable
00:28:42.832 14:42:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:42.832 [2024-10-14 14:42:23.396619] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:42.832 [2024-10-14 14:42:23.397106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:42.832 [2024-10-14 14:42:23.397131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420
00:28:42.832 [2024-10-14 14:42:23.397140] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set
00:28:42.833 [2024-10-14 14:42:23.397364] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor
00:28:42.833 [2024-10-14 14:42:23.397585] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:42.833 [2024-10-14 14:42:23.397593] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:42.833 [2024-10-14 14:42:23.397600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:42.833 [2024-10-14 14:42:23.401218] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:42.833 [2024-10-14 14:42:23.410409] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.833 [2024-10-14 14:42:23.411040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.833 [2024-10-14 14:42:23.411088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:42.833 [2024-10-14 14:42:23.411100] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:42.833 [2024-10-14 14:42:23.411340] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:42.833 [2024-10-14 14:42:23.411563] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.833 [2024-10-14 14:42:23.411571] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.833 [2024-10-14 14:42:23.411579] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.833 [2024-10-14 14:42:23.415128] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.833 [2024-10-14 14:42:23.424319] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.833 [2024-10-14 14:42:23.424883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.833 [2024-10-14 14:42:23.424901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:42.833 [2024-10-14 14:42:23.424909] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:42.833 [2024-10-14 14:42:23.425135] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:42.833 [2024-10-14 14:42:23.425356] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.833 [2024-10-14 14:42:23.425372] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.833 [2024-10-14 14:42:23.425380] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.833 [2024-10-14 14:42:23.428925] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.833 14:42:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:42.833 14:42:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:42.833 14:42:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.833 14:42:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:42.833 [2024-10-14 14:42:23.438122] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.833 [2024-10-14 14:42:23.438765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.833 [2024-10-14 14:42:23.438803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:42.833 [2024-10-14 14:42:23.438813] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:42.833 [2024-10-14 14:42:23.439051] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:42.833 [2024-10-14 14:42:23.439282] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.833 [2024-10-14 14:42:23.439292] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.833 [2024-10-14 14:42:23.439300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.833 [2024-10-14 14:42:23.439735] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:42.833 [2024-10-14 14:42:23.442847] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.833 14:42:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.833 14:42:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:42.833 14:42:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.833 14:42:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:42.833 [2024-10-14 14:42:23.452034] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.833 [2024-10-14 14:42:23.452696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.833 [2024-10-14 14:42:23.452734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:42.833 [2024-10-14 14:42:23.452745] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:42.833 [2024-10-14 14:42:23.452983] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:42.833 [2024-10-14 14:42:23.453214] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.833 [2024-10-14 14:42:23.453223] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.833 [2024-10-14 14:42:23.453231] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.833 [2024-10-14 14:42:23.456776] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.833 [2024-10-14 14:42:23.465982] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.833 [2024-10-14 14:42:23.466658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.833 [2024-10-14 14:42:23.466697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:42.833 [2024-10-14 14:42:23.466708] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:42.833 [2024-10-14 14:42:23.466946] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:42.833 [2024-10-14 14:42:23.467177] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.833 [2024-10-14 14:42:23.467191] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.833 [2024-10-14 14:42:23.467199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.833 Malloc0 00:28:42.833 [2024-10-14 14:42:23.470743] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:42.833 14:42:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.833 14:42:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:42.833 14:42:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.833 14:42:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:42.833 [2024-10-14 14:42:23.479947] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.833 [2024-10-14 14:42:23.480476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.833 [2024-10-14 14:42:23.480514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:42.833 [2024-10-14 14:42:23.480525] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:42.833 [2024-10-14 14:42:23.480763] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:42.833 [2024-10-14 14:42:23.480985] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.833 [2024-10-14 14:42:23.480994] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.833 [2024-10-14 14:42:23.481002] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:28:42.833 14:42:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.833 14:42:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:42.833 14:42:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.833 14:42:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:42.833 [2024-10-14 14:42:23.484557] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:42.833 [2024-10-14 14:42:23.493752] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.833 [2024-10-14 14:42:23.494454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.833 [2024-10-14 14:42:23.494492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac6100 with addr=10.0.0.2, port=4420 00:28:42.833 [2024-10-14 14:42:23.494503] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6100 is same with the state(6) to be set 00:28:42.833 14:42:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.833 [2024-10-14 14:42:23.494741] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac6100 (9): Bad file descriptor 00:28:42.833 14:42:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:42.833 [2024-10-14 14:42:23.494964] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.833 [2024-10-14 14:42:23.494973] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.833 [2024-10-14 14:42:23.494981] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.833 14:42:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.833 14:42:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:42.833 [2024-10-14 14:42:23.498533] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:42.833 [2024-10-14 14:42:23.501711] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:42.833 14:42:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.833 14:42:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3570147 00:28:42.833 [2024-10-14 14:42:23.507731] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:43.094 [2024-10-14 14:42:23.702516] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:28:44.298 4634.86 IOPS, 18.10 MiB/s [2024-10-14T12:42:25.967Z] 5442.88 IOPS, 21.26 MiB/s [2024-10-14T12:42:26.909Z] 6085.44 IOPS, 23.77 MiB/s [2024-10-14T12:42:27.974Z] 6617.80 IOPS, 25.85 MiB/s [2024-10-14T12:42:28.933Z] 7063.00 IOPS, 27.59 MiB/s [2024-10-14T12:42:29.877Z] 7417.33 IOPS, 28.97 MiB/s [2024-10-14T12:42:31.261Z] 7728.85 IOPS, 30.19 MiB/s [2024-10-14T12:42:32.203Z] 7972.07 IOPS, 31.14 MiB/s 00:28:51.476 Latency(us) 00:28:51.477 [2024-10-14T12:42:32.204Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:51.477 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:51.477 Verification LBA range: start 0x0 length 0x4000 00:28:51.477 Nvme1n1 : 15.01 8200.98 32.04 10169.16 0.00 6942.77 798.72 14854.83 00:28:51.477 [2024-10-14T12:42:32.204Z] =================================================================================================================== 00:28:51.477 [2024-10-14T12:42:32.204Z] Total : 8200.98 32.04 10169.16 0.00 6942.77 798.72 14854.83 00:28:51.477 14:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:28:51.477 14:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:51.477 14:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:51.477 14:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:51.477 14:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:51.477 14:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:28:51.477 14:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:28:51.477 14:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@514 -- # nvmfcleanup 00:28:51.477 14:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:28:51.477 14:42:31 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:51.477 14:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:28:51.477 14:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:51.477 14:42:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:51.477 rmmod nvme_tcp 00:28:51.477 rmmod nvme_fabrics 00:28:51.477 rmmod nvme_keyring 00:28:51.477 14:42:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:51.477 14:42:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:28:51.477 14:42:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:28:51.477 14:42:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@515 -- # '[' -n 3571331 ']' 00:28:51.477 14:42:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # killprocess 3571331 00:28:51.477 14:42:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 3571331 ']' 00:28:51.477 14:42:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 3571331 00:28:51.477 14:42:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname 00:28:51.477 14:42:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:51.477 14:42:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3571331 00:28:51.477 14:42:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:51.477 14:42:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:51.477 14:42:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3571331' 00:28:51.477 killing process with pid 3571331 00:28:51.477 14:42:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@969 -- # kill 3571331 00:28:51.477 14:42:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 3571331 00:28:51.738 14:42:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:28:51.738 14:42:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:28:51.738 14:42:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:28:51.738 14:42:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:28:51.738 14:42:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # iptables-save 00:28:51.738 14:42:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:28:51.738 14:42:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # iptables-restore 00:28:51.738 14:42:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:51.738 14:42:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:51.738 14:42:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:51.738 14:42:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:51.738 14:42:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:53.651 14:42:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:53.651 00:28:53.651 real 0m28.263s 00:28:53.651 user 1m3.263s 00:28:53.651 sys 0m7.559s 00:28:53.651 14:42:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:53.651 14:42:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:53.651 ************************************ 00:28:53.651 END TEST nvmf_bdevperf 00:28:53.651 ************************************ 00:28:53.651 14:42:34 
nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:28:53.651 14:42:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:53.651 14:42:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:53.651 14:42:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.912 ************************************ 00:28:53.912 START TEST nvmf_target_disconnect 00:28:53.912 ************************************ 00:28:53.912 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:28:53.912 * Looking for test storage... 00:28:53.912 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:53.912 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:53.912 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:28:53.912 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:53.912 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:53.912 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:53.912 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:53.912 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:53.912 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:28:53.912 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:28:53.912 14:42:34 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:28:53.912 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:28:53.912 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:28:53.913 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:28:53.913 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:28:53.913 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:53.913 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:28:53.913 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:28:53.913 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:53.913 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:53.913 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:28:53.913 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:28:53.913 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:53.913 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:28:53.913 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:28:53.913 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:28:53.913 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:28:53.913 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:53.913 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:28:53.913 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:28:53.913 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:53.913 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:53.913 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:28:53.913 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:53.913 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:53.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:53.913 --rc genhtml_branch_coverage=1 00:28:53.913 --rc genhtml_function_coverage=1 00:28:53.913 --rc genhtml_legend=1 00:28:53.913 --rc geninfo_all_blocks=1 00:28:53.913 --rc geninfo_unexecuted_blocks=1 
00:28:53.913 00:28:53.913 ' 00:28:53.913 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:53.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:53.913 --rc genhtml_branch_coverage=1 00:28:53.913 --rc genhtml_function_coverage=1 00:28:53.913 --rc genhtml_legend=1 00:28:53.913 --rc geninfo_all_blocks=1 00:28:53.913 --rc geninfo_unexecuted_blocks=1 00:28:53.913 00:28:53.913 ' 00:28:53.913 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:53.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:53.913 --rc genhtml_branch_coverage=1 00:28:53.913 --rc genhtml_function_coverage=1 00:28:53.913 --rc genhtml_legend=1 00:28:53.913 --rc geninfo_all_blocks=1 00:28:53.913 --rc geninfo_unexecuted_blocks=1 00:28:53.913 00:28:53.913 ' 00:28:53.913 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:53.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:53.913 --rc genhtml_branch_coverage=1 00:28:53.913 --rc genhtml_function_coverage=1 00:28:53.913 --rc genhtml_legend=1 00:28:53.913 --rc geninfo_all_blocks=1 00:28:53.913 --rc geninfo_unexecuted_blocks=1 00:28:53.913 00:28:53.913 ' 00:28:53.913 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:53.913 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:28:53.913 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:53.913 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:53.913 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:53.913 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:28:53.913 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:53.913 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:53.913 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:53.913 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:53.913 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:53.913 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:53.913 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:53.913 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:53.913 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:53.913 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:53.913 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:53.913 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:53.913 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:53.913 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:28:54.175 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:54.175 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:54.175 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:54.175 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:54.175 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:54.175 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:54.175 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:28:54.175 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:54.175 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:28:54.175 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:54.175 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:54.175 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:54.175 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:54.175 14:42:34 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:54.175 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:54.175 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:54.175 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:54.175 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:54.175 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:54.175 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:28:54.175 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:28:54.175 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:28:54.175 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:28:54.175 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:54.175 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:54.175 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # prepare_net_devs 00:28:54.175 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@436 -- # local -g is_hw=no 00:28:54.175 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:54.175 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:54.175 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:28:54.175 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:54.175 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:28:54.175 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:28:54.175 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:28:54.175 14:42:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:02.320 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:02.320 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:29:02.320 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:02.320 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:02.320 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:02.320 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:02.320 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:02.320 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:29:02.320 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:02.320 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:29:02.320 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:29:02.320 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:29:02.320 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:29:02.320 
14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:29:02.320 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:29:02.320 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:02.320 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:02.320 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:02.320 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:02.320 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:02.320 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:02.320 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:02.320 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:02.320 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:02.320 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:02.320 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:02.320 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:02.320 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:02.320 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:02.320 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:02.320 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:02.320 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:02.320 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:02.320 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:02.320 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:02.320 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:02.321 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:02.321 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:02.321 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:02.321 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:02.321 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:02.321 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:02.321 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:02.321 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:02.321 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:02.321 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:02.321 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:29:02.321 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:02.321 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:02.321 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:02.321 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:02.321 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:02.321 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:02.321 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:02.321 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:02.321 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:02.321 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:02.321 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:02.321 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:02.321 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:02.321 Found net devices under 0000:31:00.0: cvl_0_0 00:29:02.321 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:02.321 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:02.321 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:29:02.321 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:02.321 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:02.321 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:02.321 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:02.321 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:02.321 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:02.321 Found net devices under 0000:31:00.1: cvl_0_1 00:29:02.321 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:02.321 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:02.321 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # is_hw=yes 00:29:02.321 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:02.321 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:02.321 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:29:02.321 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:02.321 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:02.321 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:02.321 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:02.321 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:02.321 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:02.321 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:02.321 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:02.321 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:02.321 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:02.321 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:02.321 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:02.321 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:02.321 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:02.321 14:42:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:02.321 14:42:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:02.321 14:42:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:02.321 14:42:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:02.321 14:42:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:02.321 14:42:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:02.321 14:42:42 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:02.321 14:42:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:02.321 14:42:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:02.321 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:02.321 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.630 ms 00:29:02.321 00:29:02.321 --- 10.0.0.2 ping statistics --- 00:29:02.321 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:02.321 rtt min/avg/max/mdev = 0.630/0.630/0.630/0.000 ms 00:29:02.321 14:42:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:02.321 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:02.321 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.243 ms 00:29:02.321 00:29:02.321 --- 10.0.0.1 ping statistics --- 00:29:02.321 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:02.321 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:29:02.321 14:42:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:02.321 14:42:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # return 0 00:29:02.321 14:42:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:02.321 14:42:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:02.321 14:42:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:02.321 14:42:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:02.321 14:42:42 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:02.321 14:42:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:02.321 14:42:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:02.321 14:42:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:29:02.321 14:42:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:02.321 14:42:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:02.321 14:42:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:02.321 ************************************ 00:29:02.321 START TEST nvmf_target_disconnect_tc1 00:29:02.321 ************************************ 00:29:02.321 14:42:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:29:02.321 14:42:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:02.321 14:42:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:29:02.321 14:42:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:02.321 14:42:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:02.321 14:42:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:02.321 14:42:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:02.321 14:42:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:02.321 14:42:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:02.321 14:42:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:02.321 14:42:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:02.321 14:42:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:29:02.321 14:42:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:02.321 [2024-10-14 14:42:42.345017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.321 [2024-10-14 14:42:42.345088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10c8dc0 with 
addr=10.0.0.2, port=4420 00:29:02.321 [2024-10-14 14:42:42.345121] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:02.321 [2024-10-14 14:42:42.345133] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:02.321 [2024-10-14 14:42:42.345142] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:29:02.321 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:29:02.321 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:29:02.321 Initializing NVMe Controllers 00:29:02.322 14:42:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:29:02.322 14:42:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:02.322 14:42:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:02.322 14:42:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:02.322 00:29:02.322 real 0m0.117s 00:29:02.322 user 0m0.052s 00:29:02.322 sys 0m0.065s 00:29:02.322 14:42:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:02.322 14:42:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:02.322 ************************************ 00:29:02.322 END TEST nvmf_target_disconnect_tc1 00:29:02.322 ************************************ 00:29:02.322 14:42:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:29:02.322 14:42:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:02.322 14:42:42 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:02.322 14:42:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:02.322 ************************************ 00:29:02.322 START TEST nvmf_target_disconnect_tc2 00:29:02.322 ************************************ 00:29:02.322 14:42:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:29:02.322 14:42:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:29:02.322 14:42:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:02.322 14:42:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:02.322 14:42:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:02.322 14:42:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:02.322 14:42:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # nvmfpid=3577575 00:29:02.322 14:42:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # waitforlisten 3577575 00:29:02.322 14:42:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:02.322 14:42:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 3577575 ']' 00:29:02.322 14:42:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:02.322 14:42:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:02.322 14:42:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:02.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:02.322 14:42:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:02.322 14:42:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:02.322 [2024-10-14 14:42:42.507214] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:29:02.322 [2024-10-14 14:42:42.507275] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:02.322 [2024-10-14 14:42:42.598282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:02.322 [2024-10-14 14:42:42.650085] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:02.322 [2024-10-14 14:42:42.650134] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:02.322 [2024-10-14 14:42:42.650143] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:02.322 [2024-10-14 14:42:42.650150] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:02.322 [2024-10-14 14:42:42.650157] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:02.322 [2024-10-14 14:42:42.652279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:29:02.322 [2024-10-14 14:42:42.652517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:29:02.322 [2024-10-14 14:42:42.652688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:29:02.322 [2024-10-14 14:42:42.652719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:02.894 14:42:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:02.894 14:42:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:29:02.894 14:42:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:02.894 14:42:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:02.894 14:42:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:02.894 14:42:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:02.894 14:42:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:02.894 14:42:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:02.894 14:42:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:02.894 Malloc0 00:29:02.894 14:42:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:02.894 14:42:43 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:02.894 14:42:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:02.894 14:42:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:02.894 [2024-10-14 14:42:43.430238] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:02.894 14:42:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:02.894 14:42:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:02.894 14:42:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:02.894 14:42:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:02.894 14:42:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:02.894 14:42:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:02.894 14:42:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:02.894 14:42:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:02.894 14:42:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:02.894 14:42:43 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:02.894 14:42:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:02.894 14:42:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:02.894 [2024-10-14 14:42:43.470645] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:02.894 14:42:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:02.894 14:42:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:02.894 14:42:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:02.894 14:42:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:02.894 14:42:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:02.894 14:42:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3577633 00:29:02.894 14:42:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:29:02.894 14:42:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:04.810 14:42:45 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3577575 00:29:04.810 14:42:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:29:04.810 Read completed with error (sct=0, sc=8) 00:29:04.810 starting I/O failed 00:29:04.810 Read completed with error (sct=0, sc=8) 00:29:04.810 starting I/O failed 00:29:04.810 Read completed with error (sct=0, sc=8) 00:29:04.810 starting I/O failed 00:29:04.810 Read completed with error (sct=0, sc=8) 00:29:04.810 starting I/O failed 00:29:04.810 Read completed with error (sct=0, sc=8) 00:29:04.810 starting I/O failed 00:29:04.810 Read completed with error (sct=0, sc=8) 00:29:04.810 starting I/O failed 00:29:04.810 Read completed with error (sct=0, sc=8) 00:29:04.810 starting I/O failed 00:29:04.810 Read completed with error (sct=0, sc=8) 00:29:04.810 starting I/O failed 00:29:04.810 Read completed with error (sct=0, sc=8) 00:29:04.810 starting I/O failed 00:29:04.810 Read completed with error (sct=0, sc=8) 00:29:04.810 starting I/O failed 00:29:04.810 Write completed with error (sct=0, sc=8) 00:29:04.810 starting I/O failed 00:29:04.810 Write completed with error (sct=0, sc=8) 00:29:04.810 starting I/O failed 00:29:04.810 Read completed with error (sct=0, sc=8) 00:29:04.810 starting I/O failed 00:29:04.810 Read completed with error (sct=0, sc=8) 00:29:04.810 starting I/O failed 00:29:04.810 Write completed with error (sct=0, sc=8) 00:29:04.810 starting I/O failed 00:29:04.810 Read completed with error (sct=0, sc=8) 00:29:04.810 starting I/O failed 00:29:04.810 Write completed with error (sct=0, sc=8) 00:29:04.810 starting I/O failed 00:29:04.810 Read completed with error (sct=0, sc=8) 00:29:04.810 starting I/O failed 00:29:04.810 Write completed with error (sct=0, sc=8) 00:29:04.810 starting I/O failed 00:29:04.810 Read completed with error (sct=0, sc=8) 00:29:04.810 starting I/O failed 00:29:04.810 
Write completed with error (sct=0, sc=8) 00:29:04.810 starting I/O failed 00:29:04.810 Read completed with error (sct=0, sc=8) 00:29:04.810 starting I/O failed 00:29:04.810 Write completed with error (sct=0, sc=8) 00:29:04.810 starting I/O failed 00:29:04.810 Read completed with error (sct=0, sc=8) 00:29:04.810 starting I/O failed 00:29:04.811 Read completed with error (sct=0, sc=8) 00:29:04.811 starting I/O failed 00:29:04.811 Write completed with error (sct=0, sc=8) 00:29:04.811 starting I/O failed 00:29:04.811 Write completed with error (sct=0, sc=8) 00:29:04.811 starting I/O failed 00:29:04.811 Write completed with error (sct=0, sc=8) 00:29:04.811 starting I/O failed 00:29:04.811 Read completed with error (sct=0, sc=8) 00:29:04.811 starting I/O failed 00:29:04.811 Read completed with error (sct=0, sc=8) 00:29:04.811 starting I/O failed 00:29:04.811 Write completed with error (sct=0, sc=8) 00:29:04.811 starting I/O failed 00:29:04.811 Write completed with error (sct=0, sc=8) 00:29:04.811 starting I/O failed 00:29:04.811 [2024-10-14 14:42:45.503956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:04.811 [2024-10-14 14:42:45.504407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.811 [2024-10-14 14:42:45.504450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:04.811 qpair failed and we were unable to recover it. 00:29:04.811 [2024-10-14 14:42:45.504780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.811 [2024-10-14 14:42:45.504792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:04.811 qpair failed and we were unable to recover it. 
00:29:04.811 [2024-10-14 14:42:45.505013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.811 [2024-10-14 14:42:45.505023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.811 qpair failed and we were unable to recover it.
00:29:04.811 [2024-10-14 14:42:45.505444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.811 [2024-10-14 14:42:45.505482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.811 qpair failed and we were unable to recover it.
00:29:04.811 [2024-10-14 14:42:45.505733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.811 [2024-10-14 14:42:45.505745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.811 qpair failed and we were unable to recover it.
00:29:04.811 [2024-10-14 14:42:45.505936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.811 [2024-10-14 14:42:45.505948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.811 qpair failed and we were unable to recover it.
00:29:04.811 [2024-10-14 14:42:45.506327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.811 [2024-10-14 14:42:45.506338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.811 qpair failed and we were unable to recover it.
00:29:04.811 [2024-10-14 14:42:45.506508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.811 [2024-10-14 14:42:45.506518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.811 qpair failed and we were unable to recover it.
00:29:04.811 [2024-10-14 14:42:45.506812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.811 [2024-10-14 14:42:45.506823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.811 qpair failed and we were unable to recover it.
00:29:04.811 [2024-10-14 14:42:45.507148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.811 [2024-10-14 14:42:45.507160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.811 qpair failed and we were unable to recover it.
00:29:04.811 [2024-10-14 14:42:45.507516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.811 [2024-10-14 14:42:45.507526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.811 qpair failed and we were unable to recover it.
00:29:04.811 [2024-10-14 14:42:45.507738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.811 [2024-10-14 14:42:45.507749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.811 qpair failed and we were unable to recover it.
00:29:04.811 [2024-10-14 14:42:45.508029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.811 [2024-10-14 14:42:45.508039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.811 qpair failed and we were unable to recover it.
00:29:04.811 [2024-10-14 14:42:45.508234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.811 [2024-10-14 14:42:45.508245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.811 qpair failed and we were unable to recover it.
00:29:04.811 [2024-10-14 14:42:45.508456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.811 [2024-10-14 14:42:45.508466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.811 qpair failed and we were unable to recover it.
00:29:04.811 [2024-10-14 14:42:45.508690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.811 [2024-10-14 14:42:45.508701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.811 qpair failed and we were unable to recover it.
00:29:04.811 [2024-10-14 14:42:45.508877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.811 [2024-10-14 14:42:45.508889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.811 qpair failed and we were unable to recover it.
00:29:04.811 [2024-10-14 14:42:45.509201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.811 [2024-10-14 14:42:45.509213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.811 qpair failed and we were unable to recover it.
00:29:04.811 [2024-10-14 14:42:45.509524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.811 [2024-10-14 14:42:45.509535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.811 qpair failed and we were unable to recover it.
00:29:04.811 [2024-10-14 14:42:45.509833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.811 [2024-10-14 14:42:45.509843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.811 qpair failed and we were unable to recover it.
00:29:04.811 [2024-10-14 14:42:45.510160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.811 [2024-10-14 14:42:45.510171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.811 qpair failed and we were unable to recover it.
00:29:04.811 [2024-10-14 14:42:45.510359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.811 [2024-10-14 14:42:45.510371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.811 qpair failed and we were unable to recover it.
00:29:04.811 [2024-10-14 14:42:45.510674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.811 [2024-10-14 14:42:45.510685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.811 qpair failed and we were unable to recover it.
00:29:04.811 [2024-10-14 14:42:45.510957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.811 [2024-10-14 14:42:45.510968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.811 qpair failed and we were unable to recover it.
00:29:04.811 [2024-10-14 14:42:45.511183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.811 [2024-10-14 14:42:45.511194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.811 qpair failed and we were unable to recover it.
00:29:04.811 [2024-10-14 14:42:45.511505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.811 [2024-10-14 14:42:45.511516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.811 qpair failed and we were unable to recover it.
00:29:04.811 [2024-10-14 14:42:45.511813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.811 [2024-10-14 14:42:45.511823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.811 qpair failed and we were unable to recover it.
00:29:04.811 [2024-10-14 14:42:45.512147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.811 [2024-10-14 14:42:45.512158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.811 qpair failed and we were unable to recover it.
00:29:04.811 [2024-10-14 14:42:45.512442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.811 [2024-10-14 14:42:45.512453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.811 qpair failed and we were unable to recover it.
00:29:04.811 [2024-10-14 14:42:45.512636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.811 [2024-10-14 14:42:45.512648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.811 qpair failed and we were unable to recover it.
00:29:04.811 [2024-10-14 14:42:45.512949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.811 [2024-10-14 14:42:45.512963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.811 qpair failed and we were unable to recover it.
00:29:04.811 [2024-10-14 14:42:45.513264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.811 [2024-10-14 14:42:45.513275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.811 qpair failed and we were unable to recover it.
00:29:04.811 [2024-10-14 14:42:45.513579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.811 [2024-10-14 14:42:45.513591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.811 qpair failed and we were unable to recover it.
00:29:04.811 [2024-10-14 14:42:45.513930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.811 [2024-10-14 14:42:45.513941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.811 qpair failed and we were unable to recover it.
00:29:04.811 [2024-10-14 14:42:45.514364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.811 [2024-10-14 14:42:45.514375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.811 qpair failed and we were unable to recover it.
00:29:04.811 [2024-10-14 14:42:45.514676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.811 [2024-10-14 14:42:45.514687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.811 qpair failed and we were unable to recover it.
00:29:04.811 [2024-10-14 14:42:45.514996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.811 [2024-10-14 14:42:45.515007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.811 qpair failed and we were unable to recover it.
00:29:04.811 [2024-10-14 14:42:45.515420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.812 [2024-10-14 14:42:45.515431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.812 qpair failed and we were unable to recover it.
00:29:04.812 [2024-10-14 14:42:45.515622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.812 [2024-10-14 14:42:45.515633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.812 qpair failed and we were unable to recover it.
00:29:04.812 [2024-10-14 14:42:45.515888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.812 [2024-10-14 14:42:45.515899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.812 qpair failed and we were unable to recover it.
00:29:04.812 [2024-10-14 14:42:45.516233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.812 [2024-10-14 14:42:45.516244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.812 qpair failed and we were unable to recover it.
00:29:04.812 [2024-10-14 14:42:45.516563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.812 [2024-10-14 14:42:45.516574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.812 qpair failed and we were unable to recover it.
00:29:04.812 [2024-10-14 14:42:45.516854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.812 [2024-10-14 14:42:45.516865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.812 qpair failed and we were unable to recover it.
00:29:04.812 [2024-10-14 14:42:45.517165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.812 [2024-10-14 14:42:45.517176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.812 qpair failed and we were unable to recover it.
00:29:04.812 [2024-10-14 14:42:45.517382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.812 [2024-10-14 14:42:45.517394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.812 qpair failed and we were unable to recover it.
00:29:04.812 [2024-10-14 14:42:45.517648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.812 [2024-10-14 14:42:45.517659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.812 qpair failed and we were unable to recover it.
00:29:04.812 [2024-10-14 14:42:45.517852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.812 [2024-10-14 14:42:45.517863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.812 qpair failed and we were unable to recover it.
00:29:04.812 [2024-10-14 14:42:45.518173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.812 [2024-10-14 14:42:45.518185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.812 qpair failed and we were unable to recover it.
00:29:04.812 [2024-10-14 14:42:45.518496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.812 [2024-10-14 14:42:45.518507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.812 qpair failed and we were unable to recover it.
00:29:04.812 [2024-10-14 14:42:45.518623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.812 [2024-10-14 14:42:45.518633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.812 qpair failed and we were unable to recover it.
00:29:04.812 [2024-10-14 14:42:45.518920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.812 [2024-10-14 14:42:45.518930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.812 qpair failed and we were unable to recover it.
00:29:04.812 [2024-10-14 14:42:45.519238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.812 [2024-10-14 14:42:45.519249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.812 qpair failed and we were unable to recover it.
00:29:04.812 [2024-10-14 14:42:45.519549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.812 [2024-10-14 14:42:45.519559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.812 qpair failed and we were unable to recover it.
00:29:04.812 [2024-10-14 14:42:45.519850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.812 [2024-10-14 14:42:45.519859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.812 qpair failed and we were unable to recover it.
00:29:04.812 [2024-10-14 14:42:45.520073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.812 [2024-10-14 14:42:45.520084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.812 qpair failed and we were unable to recover it.
00:29:04.812 [2024-10-14 14:42:45.520195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.812 [2024-10-14 14:42:45.520205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.812 qpair failed and we were unable to recover it.
00:29:04.812 [2024-10-14 14:42:45.520501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.812 [2024-10-14 14:42:45.520511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.812 qpair failed and we were unable to recover it.
00:29:04.812 [2024-10-14 14:42:45.520817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.812 [2024-10-14 14:42:45.520829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.812 qpair failed and we were unable to recover it.
00:29:04.812 [2024-10-14 14:42:45.521117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.812 [2024-10-14 14:42:45.521128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.812 qpair failed and we were unable to recover it.
00:29:04.812 [2024-10-14 14:42:45.521441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.812 [2024-10-14 14:42:45.521451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.812 qpair failed and we were unable to recover it.
00:29:04.812 [2024-10-14 14:42:45.521655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.812 [2024-10-14 14:42:45.521665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.812 qpair failed and we were unable to recover it.
00:29:04.812 [2024-10-14 14:42:45.521885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.812 [2024-10-14 14:42:45.521895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.812 qpair failed and we were unable to recover it.
00:29:04.812 [2024-10-14 14:42:45.522206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.812 [2024-10-14 14:42:45.522217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.812 qpair failed and we were unable to recover it.
00:29:04.812 [2024-10-14 14:42:45.522482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.812 [2024-10-14 14:42:45.522491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.812 qpair failed and we were unable to recover it.
00:29:04.812 [2024-10-14 14:42:45.522775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.812 [2024-10-14 14:42:45.522784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.812 qpair failed and we were unable to recover it.
00:29:04.812 [2024-10-14 14:42:45.523051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.812 [2024-10-14 14:42:45.523065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.812 qpair failed and we were unable to recover it.
00:29:04.812 [2024-10-14 14:42:45.523362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.812 [2024-10-14 14:42:45.523372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.812 qpair failed and we were unable to recover it.
00:29:04.812 [2024-10-14 14:42:45.523527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.812 [2024-10-14 14:42:45.523537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.812 qpair failed and we were unable to recover it.
00:29:04.812 [2024-10-14 14:42:45.523615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.812 [2024-10-14 14:42:45.523624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.812 qpair failed and we were unable to recover it.
00:29:04.812 [2024-10-14 14:42:45.523938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.812 [2024-10-14 14:42:45.523949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.812 qpair failed and we were unable to recover it.
00:29:04.812 [2024-10-14 14:42:45.524230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.812 [2024-10-14 14:42:45.524240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.812 qpair failed and we were unable to recover it.
00:29:04.812 [2024-10-14 14:42:45.524624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.812 [2024-10-14 14:42:45.524634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.812 qpair failed and we were unable to recover it.
00:29:04.812 [2024-10-14 14:42:45.524950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.812 [2024-10-14 14:42:45.524960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.812 qpair failed and we were unable to recover it.
00:29:04.812 [2024-10-14 14:42:45.525254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.812 [2024-10-14 14:42:45.525265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.812 qpair failed and we were unable to recover it.
00:29:04.812 [2024-10-14 14:42:45.525556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.812 [2024-10-14 14:42:45.525567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.812 qpair failed and we were unable to recover it.
00:29:04.812 [2024-10-14 14:42:45.525864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.812 [2024-10-14 14:42:45.525875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.812 qpair failed and we were unable to recover it.
00:29:04.813 [2024-10-14 14:42:45.526215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.813 [2024-10-14 14:42:45.526225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.813 qpair failed and we were unable to recover it.
00:29:04.813 [2024-10-14 14:42:45.526555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.813 [2024-10-14 14:42:45.526566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.813 qpair failed and we were unable to recover it.
00:29:04.813 [2024-10-14 14:42:45.526888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.813 [2024-10-14 14:42:45.526897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.813 qpair failed and we were unable to recover it.
00:29:04.813 [2024-10-14 14:42:45.527199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.813 [2024-10-14 14:42:45.527209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.813 qpair failed and we were unable to recover it.
00:29:04.813 [2024-10-14 14:42:45.527510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.813 [2024-10-14 14:42:45.527520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.813 qpair failed and we were unable to recover it.
00:29:04.813 [2024-10-14 14:42:45.527825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.813 [2024-10-14 14:42:45.527834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.813 qpair failed and we were unable to recover it.
00:29:04.813 [2024-10-14 14:42:45.528121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.813 [2024-10-14 14:42:45.528131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.813 qpair failed and we were unable to recover it.
00:29:04.813 [2024-10-14 14:42:45.528428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.813 [2024-10-14 14:42:45.528439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.813 qpair failed and we were unable to recover it.
00:29:04.813 [2024-10-14 14:42:45.528758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.813 [2024-10-14 14:42:45.528768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.813 qpair failed and we were unable to recover it.
00:29:04.813 [2024-10-14 14:42:45.529060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.813 [2024-10-14 14:42:45.529087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.813 qpair failed and we were unable to recover it.
00:29:04.813 [2024-10-14 14:42:45.529400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.813 [2024-10-14 14:42:45.529410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.813 qpair failed and we were unable to recover it.
00:29:04.813 [2024-10-14 14:42:45.529713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.813 [2024-10-14 14:42:45.529723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.813 qpair failed and we were unable to recover it.
00:29:04.813 [2024-10-14 14:42:45.530054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.813 [2024-10-14 14:42:45.530069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.813 qpair failed and we were unable to recover it.
00:29:04.813 [2024-10-14 14:42:45.530439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.813 [2024-10-14 14:42:45.530449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.813 qpair failed and we were unable to recover it.
00:29:04.813 [2024-10-14 14:42:45.530761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.813 [2024-10-14 14:42:45.530772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.813 qpair failed and we were unable to recover it.
00:29:04.813 [2024-10-14 14:42:45.531056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.813 [2024-10-14 14:42:45.531071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.813 qpair failed and we were unable to recover it.
00:29:04.813 [2024-10-14 14:42:45.531392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.813 [2024-10-14 14:42:45.531402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.813 qpair failed and we were unable to recover it.
00:29:04.813 [2024-10-14 14:42:45.531690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.813 [2024-10-14 14:42:45.531699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.813 qpair failed and we were unable to recover it.
00:29:04.813 [2024-10-14 14:42:45.531995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.813 [2024-10-14 14:42:45.532011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.813 qpair failed and we were unable to recover it.
00:29:04.813 [2024-10-14 14:42:45.532310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.813 [2024-10-14 14:42:45.532320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.813 qpair failed and we were unable to recover it.
00:29:04.813 [2024-10-14 14:42:45.532606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.813 [2024-10-14 14:42:45.532616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:04.813 qpair failed and we were unable to recover it.
00:29:04.813 [2024-10-14 14:42:45.532903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.813 [2024-10-14 14:42:45.532913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:04.813 qpair failed and we were unable to recover it. 00:29:04.813 [2024-10-14 14:42:45.533230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.813 [2024-10-14 14:42:45.533244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:04.813 qpair failed and we were unable to recover it. 00:29:04.813 [2024-10-14 14:42:45.533577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.813 [2024-10-14 14:42:45.533587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:04.813 qpair failed and we were unable to recover it. 00:29:04.813 [2024-10-14 14:42:45.533783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.813 [2024-10-14 14:42:45.533793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:04.813 qpair failed and we were unable to recover it. 00:29:04.813 [2024-10-14 14:42:45.534084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.813 [2024-10-14 14:42:45.534095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:04.813 qpair failed and we were unable to recover it. 
00:29:04.813 [2024-10-14 14:42:45.534445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.813 [2024-10-14 14:42:45.534456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:04.813 qpair failed and we were unable to recover it. 00:29:04.813 [2024-10-14 14:42:45.534636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.813 [2024-10-14 14:42:45.534646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:04.813 qpair failed and we were unable to recover it. 00:29:04.813 [2024-10-14 14:42:45.534913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.813 [2024-10-14 14:42:45.534923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:04.813 qpair failed and we were unable to recover it. 00:29:04.813 [2024-10-14 14:42:45.535245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.813 [2024-10-14 14:42:45.535255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:04.813 qpair failed and we were unable to recover it. 00:29:04.813 [2024-10-14 14:42:45.535560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.813 [2024-10-14 14:42:45.535569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:04.813 qpair failed and we were unable to recover it. 
00:29:04.813 [2024-10-14 14:42:45.535901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.813 [2024-10-14 14:42:45.535911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:04.813 qpair failed and we were unable to recover it. 00:29:04.813 [2024-10-14 14:42:45.536212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.813 [2024-10-14 14:42:45.536222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:04.813 qpair failed and we were unable to recover it. 00:29:04.813 [2024-10-14 14:42:45.536544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.813 [2024-10-14 14:42:45.536553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:04.813 qpair failed and we were unable to recover it. 00:29:04.813 [2024-10-14 14:42:45.536868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.813 [2024-10-14 14:42:45.536878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:04.813 qpair failed and we were unable to recover it. 00:29:04.813 [2024-10-14 14:42:45.537253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.813 [2024-10-14 14:42:45.537264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:04.813 qpair failed and we were unable to recover it. 
00:29:04.813 [2024-10-14 14:42:45.537557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.813 [2024-10-14 14:42:45.537567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:04.813 qpair failed and we were unable to recover it. 00:29:04.813 [2024-10-14 14:42:45.537854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.813 [2024-10-14 14:42:45.537864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:04.813 qpair failed and we were unable to recover it. 00:29:04.813 [2024-10-14 14:42:45.538135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.813 [2024-10-14 14:42:45.538145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:04.813 qpair failed and we were unable to recover it. 00:29:04.813 [2024-10-14 14:42:45.538458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.813 [2024-10-14 14:42:45.538468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:04.813 qpair failed and we were unable to recover it. 00:29:05.086 [2024-10-14 14:42:45.538716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.086 [2024-10-14 14:42:45.538727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.086 qpair failed and we were unable to recover it. 
00:29:05.086 [2024-10-14 14:42:45.539025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.086 [2024-10-14 14:42:45.539036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.086 qpair failed and we were unable to recover it. 00:29:05.087 [2024-10-14 14:42:45.539315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.087 [2024-10-14 14:42:45.539325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.087 qpair failed and we were unable to recover it. 00:29:05.087 [2024-10-14 14:42:45.539619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.087 [2024-10-14 14:42:45.539629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.087 qpair failed and we were unable to recover it. 00:29:05.087 [2024-10-14 14:42:45.539907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.087 [2024-10-14 14:42:45.539917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.087 qpair failed and we were unable to recover it. 00:29:05.087 [2024-10-14 14:42:45.540108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.087 [2024-10-14 14:42:45.540120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.087 qpair failed and we were unable to recover it. 
00:29:05.087 [2024-10-14 14:42:45.540330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.087 [2024-10-14 14:42:45.540340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.087 qpair failed and we were unable to recover it. 00:29:05.087 [2024-10-14 14:42:45.540640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.087 [2024-10-14 14:42:45.540650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.087 qpair failed and we were unable to recover it. 00:29:05.087 [2024-10-14 14:42:45.540985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.087 [2024-10-14 14:42:45.540996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.087 qpair failed and we were unable to recover it. 00:29:05.087 [2024-10-14 14:42:45.541315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.087 [2024-10-14 14:42:45.541328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.087 qpair failed and we were unable to recover it. 00:29:05.087 [2024-10-14 14:42:45.541623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.087 [2024-10-14 14:42:45.541633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.087 qpair failed and we were unable to recover it. 
00:29:05.087 [2024-10-14 14:42:45.541968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.087 [2024-10-14 14:42:45.541979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.087 qpair failed and we were unable to recover it. 00:29:05.087 [2024-10-14 14:42:45.542358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.087 [2024-10-14 14:42:45.542369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.087 qpair failed and we were unable to recover it. 00:29:05.087 [2024-10-14 14:42:45.542660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.087 [2024-10-14 14:42:45.542670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.087 qpair failed and we were unable to recover it. 00:29:05.087 [2024-10-14 14:42:45.543003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.087 [2024-10-14 14:42:45.543014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.087 qpair failed and we were unable to recover it. 00:29:05.087 [2024-10-14 14:42:45.543322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.087 [2024-10-14 14:42:45.543333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.087 qpair failed and we were unable to recover it. 
00:29:05.087 [2024-10-14 14:42:45.543701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.087 [2024-10-14 14:42:45.543711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.087 qpair failed and we were unable to recover it. 00:29:05.087 [2024-10-14 14:42:45.544014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.087 [2024-10-14 14:42:45.544024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.087 qpair failed and we were unable to recover it. 00:29:05.087 [2024-10-14 14:42:45.544252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.087 [2024-10-14 14:42:45.544263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.087 qpair failed and we were unable to recover it. 00:29:05.087 [2024-10-14 14:42:45.544519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.087 [2024-10-14 14:42:45.544530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.087 qpair failed and we were unable to recover it. 00:29:05.087 [2024-10-14 14:42:45.544854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.087 [2024-10-14 14:42:45.544865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.087 qpair failed and we were unable to recover it. 
00:29:05.087 [2024-10-14 14:42:45.545186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.087 [2024-10-14 14:42:45.545196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.087 qpair failed and we were unable to recover it. 00:29:05.087 [2024-10-14 14:42:45.545488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.087 [2024-10-14 14:42:45.545498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.087 qpair failed and we were unable to recover it. 00:29:05.087 [2024-10-14 14:42:45.545779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.087 [2024-10-14 14:42:45.545789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.087 qpair failed and we were unable to recover it. 00:29:05.087 [2024-10-14 14:42:45.546691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.087 [2024-10-14 14:42:45.546714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.087 qpair failed and we were unable to recover it. 00:29:05.087 [2024-10-14 14:42:45.547016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.087 [2024-10-14 14:42:45.547027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.087 qpair failed and we were unable to recover it. 
00:29:05.087 [2024-10-14 14:42:45.547336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.087 [2024-10-14 14:42:45.547346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.087 qpair failed and we were unable to recover it. 00:29:05.087 [2024-10-14 14:42:45.547713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.087 [2024-10-14 14:42:45.547723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.087 qpair failed and we were unable to recover it. 00:29:05.087 [2024-10-14 14:42:45.547905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.087 [2024-10-14 14:42:45.547914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.087 qpair failed and we were unable to recover it. 00:29:05.087 [2024-10-14 14:42:45.548091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.087 [2024-10-14 14:42:45.548102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.087 qpair failed and we were unable to recover it. 00:29:05.087 [2024-10-14 14:42:45.548284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.087 [2024-10-14 14:42:45.548294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.087 qpair failed and we were unable to recover it. 
00:29:05.087 [2024-10-14 14:42:45.548556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.087 [2024-10-14 14:42:45.548565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.087 qpair failed and we were unable to recover it. 00:29:05.087 [2024-10-14 14:42:45.548858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.087 [2024-10-14 14:42:45.548868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.087 qpair failed and we were unable to recover it. 00:29:05.087 [2024-10-14 14:42:45.549175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.087 [2024-10-14 14:42:45.549185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.087 qpair failed and we were unable to recover it. 00:29:05.087 [2024-10-14 14:42:45.549524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.087 [2024-10-14 14:42:45.549534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.087 qpair failed and we were unable to recover it. 00:29:05.087 [2024-10-14 14:42:45.549726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.087 [2024-10-14 14:42:45.549735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.087 qpair failed and we were unable to recover it. 
00:29:05.087 [2024-10-14 14:42:45.550010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.087 [2024-10-14 14:42:45.550020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.087 qpair failed and we were unable to recover it. 00:29:05.087 [2024-10-14 14:42:45.550335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.087 [2024-10-14 14:42:45.550345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.087 qpair failed and we were unable to recover it. 00:29:05.087 [2024-10-14 14:42:45.550638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.087 [2024-10-14 14:42:45.550648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.087 qpair failed and we were unable to recover it. 00:29:05.087 [2024-10-14 14:42:45.550978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.087 [2024-10-14 14:42:45.550988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.087 qpair failed and we were unable to recover it. 00:29:05.087 [2024-10-14 14:42:45.551280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.087 [2024-10-14 14:42:45.551298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.087 qpair failed and we were unable to recover it. 
00:29:05.087 [2024-10-14 14:42:45.551473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.087 [2024-10-14 14:42:45.551483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.087 qpair failed and we were unable to recover it. 00:29:05.088 [2024-10-14 14:42:45.551787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.088 [2024-10-14 14:42:45.551801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.088 qpair failed and we were unable to recover it. 00:29:05.088 [2024-10-14 14:42:45.552084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.088 [2024-10-14 14:42:45.552094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.088 qpair failed and we were unable to recover it. 00:29:05.088 [2024-10-14 14:42:45.552399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.088 [2024-10-14 14:42:45.552409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.088 qpair failed and we were unable to recover it. 00:29:05.088 [2024-10-14 14:42:45.552711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.088 [2024-10-14 14:42:45.552721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.088 qpair failed and we were unable to recover it. 
00:29:05.088 [2024-10-14 14:42:45.553028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.088 [2024-10-14 14:42:45.553040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.088 qpair failed and we were unable to recover it. 00:29:05.088 [2024-10-14 14:42:45.553195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.088 [2024-10-14 14:42:45.553206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.088 qpair failed and we were unable to recover it. 00:29:05.088 [2024-10-14 14:42:45.553384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.088 [2024-10-14 14:42:45.553394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.088 qpair failed and we were unable to recover it. 00:29:05.088 [2024-10-14 14:42:45.553692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.088 [2024-10-14 14:42:45.553702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.088 qpair failed and we were unable to recover it. 00:29:05.088 [2024-10-14 14:42:45.554034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.088 [2024-10-14 14:42:45.554047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.088 qpair failed and we were unable to recover it. 
00:29:05.088 [2024-10-14 14:42:45.554214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.088 [2024-10-14 14:42:45.554225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.088 qpair failed and we were unable to recover it. 00:29:05.088 [2024-10-14 14:42:45.554509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.088 [2024-10-14 14:42:45.554519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.088 qpair failed and we were unable to recover it. 00:29:05.088 [2024-10-14 14:42:45.554818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.088 [2024-10-14 14:42:45.554828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.088 qpair failed and we were unable to recover it. 00:29:05.088 [2024-10-14 14:42:45.555165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.088 [2024-10-14 14:42:45.555175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.088 qpair failed and we were unable to recover it. 00:29:05.088 [2024-10-14 14:42:45.555462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.088 [2024-10-14 14:42:45.555473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.088 qpair failed and we were unable to recover it. 
00:29:05.088 [2024-10-14 14:42:45.555798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.088 [2024-10-14 14:42:45.555808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.088 qpair failed and we were unable to recover it. 00:29:05.088 [2024-10-14 14:42:45.556113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.088 [2024-10-14 14:42:45.556124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.088 qpair failed and we were unable to recover it. 00:29:05.088 [2024-10-14 14:42:45.556403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.088 [2024-10-14 14:42:45.556412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.088 qpair failed and we were unable to recover it. 00:29:05.088 [2024-10-14 14:42:45.556662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.088 [2024-10-14 14:42:45.556671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.088 qpair failed and we were unable to recover it. 00:29:05.088 [2024-10-14 14:42:45.556966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.088 [2024-10-14 14:42:45.556976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.088 qpair failed and we were unable to recover it. 
00:29:05.088 [2024-10-14 14:42:45.557300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.088 [2024-10-14 14:42:45.557310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.088 qpair failed and we were unable to recover it. 00:29:05.088 [2024-10-14 14:42:45.557671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.088 [2024-10-14 14:42:45.557680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.088 qpair failed and we were unable to recover it. 00:29:05.088 [2024-10-14 14:42:45.557762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.088 [2024-10-14 14:42:45.557772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.088 qpair failed and we were unable to recover it. 
00:29:05.088 Read completed with error (sct=0, sc=8)
00:29:05.088 starting I/O failed
[... the same "Read/Write completed with error (sct=0, sc=8)" / "starting I/O failed" pair is reported for all 32 outstanding commands (23 reads, 9 writes); repeats omitted ...]
00:29:05.088 [2024-10-14 14:42:45.558311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:05.088 [2024-10-14 14:42:45.558704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.088 [2024-10-14 14:42:45.558746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe638000b90 with addr=10.0.0.2, port=4420
00:29:05.088 qpair failed and we were unable to recover it.
[... the three-line sequence (connect() failed, errno = 111; sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) then resumes at 14:42:45.559052 and repeats near-verbatim; intermediate repeats omitted ...]
00:29:05.089 [2024-10-14 14:42:45.562302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.089 [2024-10-14 14:42:45.562312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.089 qpair failed and we were unable to recover it.
00:29:05.089 [2024-10-14 14:42:45.562622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.089 [2024-10-14 14:42:45.562632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.089 qpair failed and we were unable to recover it. 00:29:05.089 [2024-10-14 14:42:45.562869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.089 [2024-10-14 14:42:45.562880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.089 qpair failed and we were unable to recover it. 00:29:05.089 [2024-10-14 14:42:45.563180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.089 [2024-10-14 14:42:45.563190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.089 qpair failed and we were unable to recover it. 00:29:05.089 [2024-10-14 14:42:45.563520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.089 [2024-10-14 14:42:45.563530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.089 qpair failed and we were unable to recover it. 00:29:05.089 [2024-10-14 14:42:45.563823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.089 [2024-10-14 14:42:45.563833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.089 qpair failed and we were unable to recover it. 
00:29:05.089 [2024-10-14 14:42:45.564121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.089 [2024-10-14 14:42:45.564132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.089 qpair failed and we were unable to recover it. 00:29:05.089 [2024-10-14 14:42:45.564444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.089 [2024-10-14 14:42:45.564454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.089 qpair failed and we were unable to recover it. 00:29:05.089 [2024-10-14 14:42:45.564761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.089 [2024-10-14 14:42:45.564771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.089 qpair failed and we were unable to recover it. 00:29:05.089 [2024-10-14 14:42:45.565104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.089 [2024-10-14 14:42:45.565116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.089 qpair failed and we were unable to recover it. 00:29:05.089 [2024-10-14 14:42:45.565397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.089 [2024-10-14 14:42:45.565407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.089 qpair failed and we were unable to recover it. 
00:29:05.089 [2024-10-14 14:42:45.565599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.089 [2024-10-14 14:42:45.565609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.089 qpair failed and we were unable to recover it. 00:29:05.089 [2024-10-14 14:42:45.565917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.089 [2024-10-14 14:42:45.565927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.089 qpair failed and we were unable to recover it. 00:29:05.089 [2024-10-14 14:42:45.566228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.089 [2024-10-14 14:42:45.566238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.089 qpair failed and we were unable to recover it. 00:29:05.089 [2024-10-14 14:42:45.566490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.089 [2024-10-14 14:42:45.566500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.089 qpair failed and we were unable to recover it. 00:29:05.089 [2024-10-14 14:42:45.566804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.089 [2024-10-14 14:42:45.566814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.089 qpair failed and we were unable to recover it. 
00:29:05.089 [2024-10-14 14:42:45.567142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.089 [2024-10-14 14:42:45.567152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.089 qpair failed and we were unable to recover it. 00:29:05.089 [2024-10-14 14:42:45.567443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.089 [2024-10-14 14:42:45.567452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.089 qpair failed and we were unable to recover it. 00:29:05.089 [2024-10-14 14:42:45.567733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.089 [2024-10-14 14:42:45.567743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.089 qpair failed and we were unable to recover it. 00:29:05.089 [2024-10-14 14:42:45.567996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.089 [2024-10-14 14:42:45.568006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.089 qpair failed and we were unable to recover it. 00:29:05.089 [2024-10-14 14:42:45.568307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.089 [2024-10-14 14:42:45.568317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.089 qpair failed and we were unable to recover it. 
00:29:05.089 [2024-10-14 14:42:45.568689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.089 [2024-10-14 14:42:45.568699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.089 qpair failed and we were unable to recover it. 00:29:05.089 [2024-10-14 14:42:45.569025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.089 [2024-10-14 14:42:45.569035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.089 qpair failed and we were unable to recover it. 00:29:05.089 [2024-10-14 14:42:45.569341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.089 [2024-10-14 14:42:45.569352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.089 qpair failed and we were unable to recover it. 00:29:05.089 [2024-10-14 14:42:45.569633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.089 [2024-10-14 14:42:45.569642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.089 qpair failed and we were unable to recover it. 00:29:05.089 [2024-10-14 14:42:45.569947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.089 [2024-10-14 14:42:45.569957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.089 qpair failed and we were unable to recover it. 
00:29:05.089 [2024-10-14 14:42:45.570248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.089 [2024-10-14 14:42:45.570258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.089 qpair failed and we were unable to recover it. 00:29:05.089 [2024-10-14 14:42:45.570565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.089 [2024-10-14 14:42:45.570575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.089 qpair failed and we were unable to recover it. 00:29:05.089 [2024-10-14 14:42:45.570744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.089 [2024-10-14 14:42:45.570755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.089 qpair failed and we were unable to recover it. 00:29:05.089 [2024-10-14 14:42:45.571069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.089 [2024-10-14 14:42:45.571079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.089 qpair failed and we were unable to recover it. 00:29:05.089 [2024-10-14 14:42:45.571362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.089 [2024-10-14 14:42:45.571372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.089 qpair failed and we were unable to recover it. 
00:29:05.089 [2024-10-14 14:42:45.571706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.089 [2024-10-14 14:42:45.571716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.089 qpair failed and we were unable to recover it. 00:29:05.089 [2024-10-14 14:42:45.572018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.089 [2024-10-14 14:42:45.572028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.089 qpair failed and we were unable to recover it. 00:29:05.089 [2024-10-14 14:42:45.572403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.089 [2024-10-14 14:42:45.572414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.089 qpair failed and we were unable to recover it. 00:29:05.089 [2024-10-14 14:42:45.572719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.089 [2024-10-14 14:42:45.572729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.089 qpair failed and we were unable to recover it. 00:29:05.089 [2024-10-14 14:42:45.572893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.089 [2024-10-14 14:42:45.572905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.089 qpair failed and we were unable to recover it. 
00:29:05.089 [2024-10-14 14:42:45.573248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.090 [2024-10-14 14:42:45.573259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.090 qpair failed and we were unable to recover it. 00:29:05.090 [2024-10-14 14:42:45.573570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.090 [2024-10-14 14:42:45.573580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.090 qpair failed and we were unable to recover it. 00:29:05.090 [2024-10-14 14:42:45.573765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.090 [2024-10-14 14:42:45.573777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.090 qpair failed and we were unable to recover it. 00:29:05.090 [2024-10-14 14:42:45.573979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.090 [2024-10-14 14:42:45.573990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.090 qpair failed and we were unable to recover it. 00:29:05.090 [2024-10-14 14:42:45.574292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.090 [2024-10-14 14:42:45.574302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.090 qpair failed and we were unable to recover it. 
00:29:05.090 [2024-10-14 14:42:45.574605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.090 [2024-10-14 14:42:45.574615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.090 qpair failed and we were unable to recover it. 00:29:05.090 [2024-10-14 14:42:45.574913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.090 [2024-10-14 14:42:45.574923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.090 qpair failed and we were unable to recover it. 00:29:05.090 [2024-10-14 14:42:45.575214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.090 [2024-10-14 14:42:45.575227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.090 qpair failed and we were unable to recover it. 00:29:05.090 [2024-10-14 14:42:45.575432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.090 [2024-10-14 14:42:45.575442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.090 qpair failed and we were unable to recover it. 00:29:05.090 [2024-10-14 14:42:45.575762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.090 [2024-10-14 14:42:45.575772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.090 qpair failed and we were unable to recover it. 
00:29:05.090 [2024-10-14 14:42:45.576051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.090 [2024-10-14 14:42:45.576061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.090 qpair failed and we were unable to recover it. 00:29:05.090 [2024-10-14 14:42:45.576393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.090 [2024-10-14 14:42:45.576403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.090 qpair failed and we were unable to recover it. 00:29:05.090 [2024-10-14 14:42:45.576680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.090 [2024-10-14 14:42:45.576689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.090 qpair failed and we were unable to recover it. 00:29:05.090 [2024-10-14 14:42:45.576998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.090 [2024-10-14 14:42:45.577008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.090 qpair failed and we were unable to recover it. 00:29:05.090 [2024-10-14 14:42:45.577322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.090 [2024-10-14 14:42:45.577332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.090 qpair failed and we were unable to recover it. 
00:29:05.090 [2024-10-14 14:42:45.577658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.090 [2024-10-14 14:42:45.577668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.090 qpair failed and we were unable to recover it. 00:29:05.090 [2024-10-14 14:42:45.577870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.090 [2024-10-14 14:42:45.577880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.090 qpair failed and we were unable to recover it. 00:29:05.090 [2024-10-14 14:42:45.578184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.090 [2024-10-14 14:42:45.578195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.090 qpair failed and we were unable to recover it. 00:29:05.090 [2024-10-14 14:42:45.578482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.090 [2024-10-14 14:42:45.578491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.090 qpair failed and we were unable to recover it. 00:29:05.090 [2024-10-14 14:42:45.578768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.090 [2024-10-14 14:42:45.578778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.090 qpair failed and we were unable to recover it. 
00:29:05.090 [2024-10-14 14:42:45.579088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.090 [2024-10-14 14:42:45.579099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.090 qpair failed and we were unable to recover it. 00:29:05.090 [2024-10-14 14:42:45.579472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.090 [2024-10-14 14:42:45.579482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.090 qpair failed and we were unable to recover it. 00:29:05.090 [2024-10-14 14:42:45.579805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.090 [2024-10-14 14:42:45.579816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.090 qpair failed and we were unable to recover it. 00:29:05.090 [2024-10-14 14:42:45.580119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.090 [2024-10-14 14:42:45.580130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.090 qpair failed and we were unable to recover it. 00:29:05.090 [2024-10-14 14:42:45.580436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.090 [2024-10-14 14:42:45.580446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.090 qpair failed and we were unable to recover it. 
00:29:05.090 [2024-10-14 14:42:45.580751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.090 [2024-10-14 14:42:45.580761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.090 qpair failed and we were unable to recover it. 00:29:05.090 [2024-10-14 14:42:45.581068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.090 [2024-10-14 14:42:45.581078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.090 qpair failed and we were unable to recover it. 00:29:05.090 [2024-10-14 14:42:45.581392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.090 [2024-10-14 14:42:45.581402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.090 qpair failed and we were unable to recover it. 00:29:05.090 [2024-10-14 14:42:45.581709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.090 [2024-10-14 14:42:45.581720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.090 qpair failed and we were unable to recover it. 00:29:05.090 [2024-10-14 14:42:45.582027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.090 [2024-10-14 14:42:45.582037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.090 qpair failed and we were unable to recover it. 
00:29:05.090 [2024-10-14 14:42:45.582271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.090 [2024-10-14 14:42:45.582281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.090 qpair failed and we were unable to recover it. 00:29:05.090 [2024-10-14 14:42:45.582592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.090 [2024-10-14 14:42:45.582602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.090 qpair failed and we were unable to recover it. 00:29:05.090 [2024-10-14 14:42:45.582904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.090 [2024-10-14 14:42:45.582913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.090 qpair failed and we were unable to recover it. 00:29:05.090 [2024-10-14 14:42:45.583188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.090 [2024-10-14 14:42:45.583199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.090 qpair failed and we were unable to recover it. 00:29:05.090 [2024-10-14 14:42:45.583507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.090 [2024-10-14 14:42:45.583518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.090 qpair failed and we were unable to recover it. 
00:29:05.090 [2024-10-14 14:42:45.583823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.090 [2024-10-14 14:42:45.583834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.090 qpair failed and we were unable to recover it. 00:29:05.090 [2024-10-14 14:42:45.584162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.090 [2024-10-14 14:42:45.584172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.090 qpair failed and we were unable to recover it. 00:29:05.090 [2024-10-14 14:42:45.584475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.090 [2024-10-14 14:42:45.584485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.090 qpair failed and we were unable to recover it. 00:29:05.090 [2024-10-14 14:42:45.584796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.090 [2024-10-14 14:42:45.584805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.090 qpair failed and we were unable to recover it. 00:29:05.090 [2024-10-14 14:42:45.585093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.090 [2024-10-14 14:42:45.585103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.090 qpair failed and we were unable to recover it. 
00:29:05.090 [2024-10-14 14:42:45.585376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.090 [2024-10-14 14:42:45.585388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.090 qpair failed and we were unable to recover it. 00:29:05.090 [2024-10-14 14:42:45.585664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.091 [2024-10-14 14:42:45.585674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.091 qpair failed and we were unable to recover it. 00:29:05.091 [2024-10-14 14:42:45.585879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.091 [2024-10-14 14:42:45.585890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.091 qpair failed and we were unable to recover it. 00:29:05.091 [2024-10-14 14:42:45.586210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.091 [2024-10-14 14:42:45.586221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.091 qpair failed and we were unable to recover it. 00:29:05.091 [2024-10-14 14:42:45.586523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.091 [2024-10-14 14:42:45.586533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.091 qpair failed and we were unable to recover it. 
00:29:05.093 [2024-10-14 14:42:45.620529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.093 [2024-10-14 14:42:45.620539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.093 qpair failed and we were unable to recover it. 00:29:05.093 [2024-10-14 14:42:45.620821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.093 [2024-10-14 14:42:45.620831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.093 qpair failed and we were unable to recover it. 00:29:05.094 [2024-10-14 14:42:45.621001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.094 [2024-10-14 14:42:45.621012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.094 qpair failed and we were unable to recover it. 00:29:05.094 [2024-10-14 14:42:45.621303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.094 [2024-10-14 14:42:45.621312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.094 qpair failed and we were unable to recover it. 00:29:05.094 [2024-10-14 14:42:45.621527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.094 [2024-10-14 14:42:45.621537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.094 qpair failed and we were unable to recover it. 
00:29:05.094 [2024-10-14 14:42:45.621840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.094 [2024-10-14 14:42:45.621850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.094 qpair failed and we were unable to recover it. 00:29:05.094 [2024-10-14 14:42:45.622141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.094 [2024-10-14 14:42:45.622151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.094 qpair failed and we were unable to recover it. 00:29:05.094 [2024-10-14 14:42:45.622468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.094 [2024-10-14 14:42:45.622478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.094 qpair failed and we were unable to recover it. 00:29:05.094 [2024-10-14 14:42:45.622654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.094 [2024-10-14 14:42:45.622665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.094 qpair failed and we were unable to recover it. 00:29:05.094 [2024-10-14 14:42:45.622949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.094 [2024-10-14 14:42:45.622960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.094 qpair failed and we were unable to recover it. 
00:29:05.094 [2024-10-14 14:42:45.623327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.094 [2024-10-14 14:42:45.623337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.094 qpair failed and we were unable to recover it. 00:29:05.094 [2024-10-14 14:42:45.623613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.094 [2024-10-14 14:42:45.623623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.094 qpair failed and we were unable to recover it. 00:29:05.094 [2024-10-14 14:42:45.623798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.094 [2024-10-14 14:42:45.623809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.094 qpair failed and we were unable to recover it. 00:29:05.094 [2024-10-14 14:42:45.624194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.094 [2024-10-14 14:42:45.624206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.094 qpair failed and we were unable to recover it. 00:29:05.094 [2024-10-14 14:42:45.624424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.094 [2024-10-14 14:42:45.624433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.094 qpair failed and we were unable to recover it. 
00:29:05.094 [2024-10-14 14:42:45.624653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.094 [2024-10-14 14:42:45.624664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.094 qpair failed and we were unable to recover it. 00:29:05.094 [2024-10-14 14:42:45.624946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.094 [2024-10-14 14:42:45.624956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.094 qpair failed and we were unable to recover it. 00:29:05.094 [2024-10-14 14:42:45.625121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.094 [2024-10-14 14:42:45.625131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.094 qpair failed and we were unable to recover it. 00:29:05.094 [2024-10-14 14:42:45.625453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.094 [2024-10-14 14:42:45.625463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.094 qpair failed and we were unable to recover it. 00:29:05.094 [2024-10-14 14:42:45.625766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.094 [2024-10-14 14:42:45.625776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.094 qpair failed and we were unable to recover it. 
00:29:05.094 [2024-10-14 14:42:45.626079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.094 [2024-10-14 14:42:45.626090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.094 qpair failed and we were unable to recover it. 00:29:05.094 [2024-10-14 14:42:45.626405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.094 [2024-10-14 14:42:45.626417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.094 qpair failed and we were unable to recover it. 00:29:05.094 [2024-10-14 14:42:45.626719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.094 [2024-10-14 14:42:45.626728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.094 qpair failed and we were unable to recover it. 00:29:05.094 [2024-10-14 14:42:45.627041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.094 [2024-10-14 14:42:45.627051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.094 qpair failed and we were unable to recover it. 00:29:05.094 [2024-10-14 14:42:45.627431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.094 [2024-10-14 14:42:45.627441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.094 qpair failed and we were unable to recover it. 
00:29:05.094 [2024-10-14 14:42:45.627747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.094 [2024-10-14 14:42:45.627757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.094 qpair failed and we were unable to recover it. 00:29:05.094 [2024-10-14 14:42:45.628133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.094 [2024-10-14 14:42:45.628144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.094 qpair failed and we were unable to recover it. 00:29:05.094 [2024-10-14 14:42:45.628457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.094 [2024-10-14 14:42:45.628467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.094 qpair failed and we were unable to recover it. 00:29:05.094 [2024-10-14 14:42:45.628668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.094 [2024-10-14 14:42:45.628679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.094 qpair failed and we were unable to recover it. 00:29:05.094 [2024-10-14 14:42:45.628998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.094 [2024-10-14 14:42:45.629008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.094 qpair failed and we were unable to recover it. 
00:29:05.094 [2024-10-14 14:42:45.629302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.094 [2024-10-14 14:42:45.629313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.094 qpair failed and we were unable to recover it. 00:29:05.094 [2024-10-14 14:42:45.629622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.094 [2024-10-14 14:42:45.629632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.094 qpair failed and we were unable to recover it. 00:29:05.094 [2024-10-14 14:42:45.629925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.094 [2024-10-14 14:42:45.629943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.094 qpair failed and we were unable to recover it. 00:29:05.094 [2024-10-14 14:42:45.630276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.094 [2024-10-14 14:42:45.630286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.094 qpair failed and we were unable to recover it. 00:29:05.094 [2024-10-14 14:42:45.630555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.094 [2024-10-14 14:42:45.630566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.094 qpair failed and we were unable to recover it. 
00:29:05.094 [2024-10-14 14:42:45.630874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.094 [2024-10-14 14:42:45.630885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.094 qpair failed and we were unable to recover it. 00:29:05.094 [2024-10-14 14:42:45.631164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.094 [2024-10-14 14:42:45.631174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.094 qpair failed and we were unable to recover it. 00:29:05.094 [2024-10-14 14:42:45.631341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.094 [2024-10-14 14:42:45.631351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.094 qpair failed and we were unable to recover it. 00:29:05.094 [2024-10-14 14:42:45.631655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.094 [2024-10-14 14:42:45.631664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.094 qpair failed and we were unable to recover it. 00:29:05.094 [2024-10-14 14:42:45.631999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.094 [2024-10-14 14:42:45.632008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.094 qpair failed and we were unable to recover it. 
00:29:05.094 [2024-10-14 14:42:45.632314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.094 [2024-10-14 14:42:45.632324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.094 qpair failed and we were unable to recover it. 00:29:05.094 [2024-10-14 14:42:45.632590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.094 [2024-10-14 14:42:45.632601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.094 qpair failed and we were unable to recover it. 00:29:05.094 [2024-10-14 14:42:45.632966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.094 [2024-10-14 14:42:45.632976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.094 qpair failed and we were unable to recover it. 00:29:05.095 [2024-10-14 14:42:45.633262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.095 [2024-10-14 14:42:45.633272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.095 qpair failed and we were unable to recover it. 00:29:05.095 [2024-10-14 14:42:45.633572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.095 [2024-10-14 14:42:45.633581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.095 qpair failed and we were unable to recover it. 
00:29:05.095 [2024-10-14 14:42:45.633860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.095 [2024-10-14 14:42:45.633870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.095 qpair failed and we were unable to recover it. 00:29:05.095 [2024-10-14 14:42:45.634097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.095 [2024-10-14 14:42:45.634107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.095 qpair failed and we were unable to recover it. 00:29:05.095 [2024-10-14 14:42:45.634398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.095 [2024-10-14 14:42:45.634409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.095 qpair failed and we were unable to recover it. 00:29:05.095 [2024-10-14 14:42:45.634726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.095 [2024-10-14 14:42:45.634738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.095 qpair failed and we were unable to recover it. 00:29:05.095 [2024-10-14 14:42:45.635044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.095 [2024-10-14 14:42:45.635054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.095 qpair failed and we were unable to recover it. 
00:29:05.095 [2024-10-14 14:42:45.635407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.095 [2024-10-14 14:42:45.635417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.095 qpair failed and we were unable to recover it. 00:29:05.095 [2024-10-14 14:42:45.635753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.095 [2024-10-14 14:42:45.635763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.095 qpair failed and we were unable to recover it. 00:29:05.095 [2024-10-14 14:42:45.636141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.095 [2024-10-14 14:42:45.636151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.095 qpair failed and we were unable to recover it. 00:29:05.095 [2024-10-14 14:42:45.636441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.095 [2024-10-14 14:42:45.636451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.095 qpair failed and we were unable to recover it. 00:29:05.095 [2024-10-14 14:42:45.636736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.095 [2024-10-14 14:42:45.636746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.095 qpair failed and we were unable to recover it. 
00:29:05.095 [2024-10-14 14:42:45.637058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.095 [2024-10-14 14:42:45.637073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.095 qpair failed and we were unable to recover it. 00:29:05.095 [2024-10-14 14:42:45.637381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.095 [2024-10-14 14:42:45.637391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.095 qpair failed and we were unable to recover it. 00:29:05.095 [2024-10-14 14:42:45.637706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.095 [2024-10-14 14:42:45.637717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.095 qpair failed and we were unable to recover it. 00:29:05.095 [2024-10-14 14:42:45.638016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.095 [2024-10-14 14:42:45.638027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.095 qpair failed and we were unable to recover it. 00:29:05.095 [2024-10-14 14:42:45.638329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.095 [2024-10-14 14:42:45.638339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.095 qpair failed and we were unable to recover it. 
00:29:05.095 [2024-10-14 14:42:45.638539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.095 [2024-10-14 14:42:45.638549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.095 qpair failed and we were unable to recover it. 00:29:05.095 [2024-10-14 14:42:45.638827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.095 [2024-10-14 14:42:45.638837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.095 qpair failed and we were unable to recover it. 00:29:05.095 [2024-10-14 14:42:45.639165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.095 [2024-10-14 14:42:45.639176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.095 qpair failed and we were unable to recover it. 00:29:05.095 [2024-10-14 14:42:45.639467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.095 [2024-10-14 14:42:45.639485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.095 qpair failed and we were unable to recover it. 00:29:05.095 [2024-10-14 14:42:45.639817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.095 [2024-10-14 14:42:45.639827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.095 qpair failed and we were unable to recover it. 
00:29:05.095 [2024-10-14 14:42:45.640027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.095 [2024-10-14 14:42:45.640037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.095 qpair failed and we were unable to recover it. 00:29:05.095 [2024-10-14 14:42:45.640370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.095 [2024-10-14 14:42:45.640380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.095 qpair failed and we were unable to recover it. 00:29:05.095 [2024-10-14 14:42:45.640684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.095 [2024-10-14 14:42:45.640694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.095 qpair failed and we were unable to recover it. 00:29:05.095 [2024-10-14 14:42:45.641003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.095 [2024-10-14 14:42:45.641014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.095 qpair failed and we were unable to recover it. 00:29:05.095 [2024-10-14 14:42:45.641204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.095 [2024-10-14 14:42:45.641214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.095 qpair failed and we were unable to recover it. 
00:29:05.095 [2024-10-14 14:42:45.641377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.095 [2024-10-14 14:42:45.641386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.095 qpair failed and we were unable to recover it. 00:29:05.095 [2024-10-14 14:42:45.641669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.095 [2024-10-14 14:42:45.641679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.095 qpair failed and we were unable to recover it. 00:29:05.095 [2024-10-14 14:42:45.641868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.095 [2024-10-14 14:42:45.641878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.095 qpair failed and we were unable to recover it. 00:29:05.095 [2024-10-14 14:42:45.642206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.095 [2024-10-14 14:42:45.642216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.095 qpair failed and we were unable to recover it. 00:29:05.095 [2024-10-14 14:42:45.642510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.095 [2024-10-14 14:42:45.642520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.095 qpair failed and we were unable to recover it. 
00:29:05.095 [2024-10-14 14:42:45.642816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.095 [2024-10-14 14:42:45.642827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.095 qpair failed and we were unable to recover it. 
[… the same message pair — connect() failed with errno = 111 (ECONNREFUSED) followed by the sock connection error and "qpair failed and we were unable to recover it." for tqpair=0x8de550 (addr=10.0.0.2, port=4420) — repeats continuously from 14:42:45.642816 through 14:42:45.677836; repeats elided …]
00:29:05.098 [2024-10-14 14:42:45.677826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.098 [2024-10-14 14:42:45.677836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.098 qpair failed and we were unable to recover it. 
00:29:05.098 [2024-10-14 14:42:45.678127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.098 [2024-10-14 14:42:45.678137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.098 qpair failed and we were unable to recover it. 00:29:05.098 [2024-10-14 14:42:45.678424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.098 [2024-10-14 14:42:45.678435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.098 qpair failed and we were unable to recover it. 00:29:05.098 [2024-10-14 14:42:45.678725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.098 [2024-10-14 14:42:45.678736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.098 qpair failed and we were unable to recover it. 00:29:05.098 [2024-10-14 14:42:45.679041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.098 [2024-10-14 14:42:45.679051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.098 qpair failed and we were unable to recover it. 00:29:05.098 [2024-10-14 14:42:45.679358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.098 [2024-10-14 14:42:45.679368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.098 qpair failed and we were unable to recover it. 
00:29:05.098 [2024-10-14 14:42:45.680271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.098 [2024-10-14 14:42:45.680291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.098 qpair failed and we were unable to recover it. 00:29:05.098 [2024-10-14 14:42:45.680614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.098 [2024-10-14 14:42:45.680626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.098 qpair failed and we were unable to recover it. 00:29:05.098 [2024-10-14 14:42:45.680933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.098 [2024-10-14 14:42:45.680943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.098 qpair failed and we were unable to recover it. 00:29:05.098 [2024-10-14 14:42:45.681271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.098 [2024-10-14 14:42:45.681285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.098 qpair failed and we were unable to recover it. 00:29:05.098 [2024-10-14 14:42:45.681591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.098 [2024-10-14 14:42:45.681602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.098 qpair failed and we were unable to recover it. 
00:29:05.098 [2024-10-14 14:42:45.681913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.098 [2024-10-14 14:42:45.681924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.098 qpair failed and we were unable to recover it. 00:29:05.098 [2024-10-14 14:42:45.682262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.099 [2024-10-14 14:42:45.682273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.099 qpair failed and we were unable to recover it. 00:29:05.099 [2024-10-14 14:42:45.682500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.099 [2024-10-14 14:42:45.682510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.099 qpair failed and we were unable to recover it. 00:29:05.099 [2024-10-14 14:42:45.682810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.099 [2024-10-14 14:42:45.682821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.099 qpair failed and we were unable to recover it. 00:29:05.099 [2024-10-14 14:42:45.683114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.099 [2024-10-14 14:42:45.683125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.099 qpair failed and we were unable to recover it. 
00:29:05.099 [2024-10-14 14:42:45.683486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.099 [2024-10-14 14:42:45.683495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.099 qpair failed and we were unable to recover it. 00:29:05.099 [2024-10-14 14:42:45.683841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.099 [2024-10-14 14:42:45.683852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.099 qpair failed and we were unable to recover it. 00:29:05.099 [2024-10-14 14:42:45.684023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.099 [2024-10-14 14:42:45.684033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.099 qpair failed and we were unable to recover it. 00:29:05.099 [2024-10-14 14:42:45.684400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.099 [2024-10-14 14:42:45.684410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.099 qpair failed and we were unable to recover it. 00:29:05.099 [2024-10-14 14:42:45.684719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.099 [2024-10-14 14:42:45.684729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.099 qpair failed and we were unable to recover it. 
00:29:05.099 [2024-10-14 14:42:45.684926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.099 [2024-10-14 14:42:45.684936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.099 qpair failed and we were unable to recover it. 00:29:05.099 [2024-10-14 14:42:45.685224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.099 [2024-10-14 14:42:45.685234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.099 qpair failed and we were unable to recover it. 00:29:05.099 [2024-10-14 14:42:45.685438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.099 [2024-10-14 14:42:45.685449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.099 qpair failed and we were unable to recover it. 00:29:05.099 [2024-10-14 14:42:45.685657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.099 [2024-10-14 14:42:45.685667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.099 qpair failed and we were unable to recover it. 00:29:05.099 [2024-10-14 14:42:45.685988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.099 [2024-10-14 14:42:45.685998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.099 qpair failed and we were unable to recover it. 
00:29:05.099 [2024-10-14 14:42:45.686239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.099 [2024-10-14 14:42:45.686249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.099 qpair failed and we were unable to recover it. 00:29:05.099 [2024-10-14 14:42:45.686597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.099 [2024-10-14 14:42:45.686608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.099 qpair failed and we were unable to recover it. 00:29:05.099 [2024-10-14 14:42:45.686923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.099 [2024-10-14 14:42:45.686933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.099 qpair failed and we were unable to recover it. 00:29:05.099 [2024-10-14 14:42:45.687247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.099 [2024-10-14 14:42:45.687258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.099 qpair failed and we were unable to recover it. 00:29:05.099 [2024-10-14 14:42:45.687592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.099 [2024-10-14 14:42:45.687603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.099 qpair failed and we were unable to recover it. 
00:29:05.099 [2024-10-14 14:42:45.687931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.099 [2024-10-14 14:42:45.687941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.099 qpair failed and we were unable to recover it. 00:29:05.099 [2024-10-14 14:42:45.688233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.099 [2024-10-14 14:42:45.688244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.099 qpair failed and we were unable to recover it. 00:29:05.099 [2024-10-14 14:42:45.688547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.099 [2024-10-14 14:42:45.688558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.099 qpair failed and we were unable to recover it. 00:29:05.099 [2024-10-14 14:42:45.688788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.099 [2024-10-14 14:42:45.688799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.099 qpair failed and we were unable to recover it. 00:29:05.099 [2024-10-14 14:42:45.689095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.099 [2024-10-14 14:42:45.689106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.099 qpair failed and we were unable to recover it. 
00:29:05.099 [2024-10-14 14:42:45.689416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.099 [2024-10-14 14:42:45.689428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.099 qpair failed and we were unable to recover it. 00:29:05.099 [2024-10-14 14:42:45.689729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.099 [2024-10-14 14:42:45.689739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.099 qpair failed and we were unable to recover it. 00:29:05.099 [2024-10-14 14:42:45.690046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.099 [2024-10-14 14:42:45.690056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.099 qpair failed and we were unable to recover it. 00:29:05.099 [2024-10-14 14:42:45.690370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.099 [2024-10-14 14:42:45.690381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.099 qpair failed and we were unable to recover it. 00:29:05.099 [2024-10-14 14:42:45.690687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.099 [2024-10-14 14:42:45.690697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.099 qpair failed and we were unable to recover it. 
00:29:05.099 [2024-10-14 14:42:45.691006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.099 [2024-10-14 14:42:45.691016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.099 qpair failed and we were unable to recover it. 00:29:05.099 [2024-10-14 14:42:45.691315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.099 [2024-10-14 14:42:45.691326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.099 qpair failed and we were unable to recover it. 00:29:05.099 [2024-10-14 14:42:45.691522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.099 [2024-10-14 14:42:45.691532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.099 qpair failed and we were unable to recover it. 00:29:05.099 [2024-10-14 14:42:45.691960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.099 [2024-10-14 14:42:45.691970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.099 qpair failed and we were unable to recover it. 00:29:05.099 [2024-10-14 14:42:45.692277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.099 [2024-10-14 14:42:45.692287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.099 qpair failed and we were unable to recover it. 
00:29:05.099 [2024-10-14 14:42:45.692481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.099 [2024-10-14 14:42:45.692492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.099 qpair failed and we were unable to recover it. 00:29:05.099 [2024-10-14 14:42:45.692780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.099 [2024-10-14 14:42:45.692790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.099 qpair failed and we were unable to recover it. 00:29:05.099 [2024-10-14 14:42:45.693077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.099 [2024-10-14 14:42:45.693087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.099 qpair failed and we were unable to recover it. 00:29:05.100 [2024-10-14 14:42:45.693396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.100 [2024-10-14 14:42:45.693406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.100 qpair failed and we were unable to recover it. 00:29:05.100 [2024-10-14 14:42:45.693606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.100 [2024-10-14 14:42:45.693616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.100 qpair failed and we were unable to recover it. 
00:29:05.100 [2024-10-14 14:42:45.693915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.100 [2024-10-14 14:42:45.693925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.100 qpair failed and we were unable to recover it. 00:29:05.100 [2024-10-14 14:42:45.694259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.100 [2024-10-14 14:42:45.694270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.100 qpair failed and we were unable to recover it. 00:29:05.100 [2024-10-14 14:42:45.694587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.100 [2024-10-14 14:42:45.694598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.100 qpair failed and we were unable to recover it. 00:29:05.100 [2024-10-14 14:42:45.694900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.100 [2024-10-14 14:42:45.694910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.100 qpair failed and we were unable to recover it. 00:29:05.100 [2024-10-14 14:42:45.695229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.100 [2024-10-14 14:42:45.695239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.100 qpair failed and we were unable to recover it. 
00:29:05.100 [2024-10-14 14:42:45.695436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.100 [2024-10-14 14:42:45.695446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.100 qpair failed and we were unable to recover it. 00:29:05.100 [2024-10-14 14:42:45.695747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.100 [2024-10-14 14:42:45.695757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.100 qpair failed and we were unable to recover it. 00:29:05.100 [2024-10-14 14:42:45.696070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.100 [2024-10-14 14:42:45.696080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.100 qpair failed and we were unable to recover it. 00:29:05.100 [2024-10-14 14:42:45.696400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.100 [2024-10-14 14:42:45.696410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.100 qpair failed and we were unable to recover it. 00:29:05.100 [2024-10-14 14:42:45.696708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.100 [2024-10-14 14:42:45.696718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.100 qpair failed and we were unable to recover it. 
00:29:05.100 [2024-10-14 14:42:45.697044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.100 [2024-10-14 14:42:45.697054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.100 qpair failed and we were unable to recover it. 00:29:05.100 [2024-10-14 14:42:45.697268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.100 [2024-10-14 14:42:45.697279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.100 qpair failed and we were unable to recover it. 00:29:05.100 [2024-10-14 14:42:45.697469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.100 [2024-10-14 14:42:45.697480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.100 qpair failed and we were unable to recover it. 00:29:05.100 [2024-10-14 14:42:45.697788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.100 [2024-10-14 14:42:45.697797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.100 qpair failed and we were unable to recover it. 00:29:05.100 [2024-10-14 14:42:45.698111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.100 [2024-10-14 14:42:45.698121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.100 qpair failed and we were unable to recover it. 
00:29:05.100 [2024-10-14 14:42:45.698482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.100 [2024-10-14 14:42:45.698492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.100 qpair failed and we were unable to recover it. 00:29:05.100 [2024-10-14 14:42:45.698796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.100 [2024-10-14 14:42:45.698806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.100 qpair failed and we were unable to recover it. 00:29:05.100 [2024-10-14 14:42:45.699212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.100 [2024-10-14 14:42:45.699223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.100 qpair failed and we were unable to recover it. 00:29:05.100 [2024-10-14 14:42:45.699519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.100 [2024-10-14 14:42:45.699530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.100 qpair failed and we were unable to recover it. 00:29:05.100 [2024-10-14 14:42:45.699733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.100 [2024-10-14 14:42:45.699744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.100 qpair failed and we were unable to recover it. 
00:29:05.100 [2024-10-14 14:42:45.700072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.100 [2024-10-14 14:42:45.700083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.100 qpair failed and we were unable to recover it. 00:29:05.100 [2024-10-14 14:42:45.700378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.100 [2024-10-14 14:42:45.700388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.100 qpair failed and we were unable to recover it. 00:29:05.100 [2024-10-14 14:42:45.700564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.100 [2024-10-14 14:42:45.700576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.100 qpair failed and we were unable to recover it. 00:29:05.100 [2024-10-14 14:42:45.700941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.100 [2024-10-14 14:42:45.700951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.100 qpair failed and we were unable to recover it. 00:29:05.100 [2024-10-14 14:42:45.701131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.100 [2024-10-14 14:42:45.701142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.100 qpair failed and we were unable to recover it. 
00:29:05.100 [2024-10-14 14:42:45.701489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.100 [2024-10-14 14:42:45.701500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.100 qpair failed and we were unable to recover it.
[... the same three-line record (posix.c:1055 connect() failed, errno = 111 -> nvme_tcp.c:2399 sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats continuously from 14:42:45.701825 through 14:42:45.735357 ...]
00:29:05.103 [2024-10-14 14:42:45.735575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.103 [2024-10-14 14:42:45.735586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.103 qpair failed and we were unable to recover it.
00:29:05.103 [2024-10-14 14:42:45.735900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.103 [2024-10-14 14:42:45.735910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.103 qpair failed and we were unable to recover it. 00:29:05.103 [2024-10-14 14:42:45.736228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.103 [2024-10-14 14:42:45.736240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.103 qpair failed and we were unable to recover it. 00:29:05.103 [2024-10-14 14:42:45.736534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.103 [2024-10-14 14:42:45.736544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.103 qpair failed and we were unable to recover it. 00:29:05.103 [2024-10-14 14:42:45.736866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.103 [2024-10-14 14:42:45.736877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.103 qpair failed and we were unable to recover it. 00:29:05.103 [2024-10-14 14:42:45.737179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.103 [2024-10-14 14:42:45.737189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.103 qpair failed and we were unable to recover it. 
00:29:05.103 [2024-10-14 14:42:45.737500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.103 [2024-10-14 14:42:45.737511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.103 qpair failed and we were unable to recover it. 00:29:05.103 [2024-10-14 14:42:45.737812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.103 [2024-10-14 14:42:45.737822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.103 qpair failed and we were unable to recover it. 00:29:05.103 [2024-10-14 14:42:45.738111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.103 [2024-10-14 14:42:45.738121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.103 qpair failed and we were unable to recover it. 00:29:05.103 [2024-10-14 14:42:45.738318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.103 [2024-10-14 14:42:45.738328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.103 qpair failed and we were unable to recover it. 00:29:05.103 [2024-10-14 14:42:45.738675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.103 [2024-10-14 14:42:45.738685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.103 qpair failed and we were unable to recover it. 
00:29:05.103 [2024-10-14 14:42:45.738995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.103 [2024-10-14 14:42:45.739004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.103 qpair failed and we were unable to recover it. 00:29:05.103 [2024-10-14 14:42:45.739231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.103 [2024-10-14 14:42:45.739241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.103 qpair failed and we were unable to recover it. 00:29:05.103 [2024-10-14 14:42:45.739567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.103 [2024-10-14 14:42:45.739577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.103 qpair failed and we were unable to recover it. 00:29:05.103 [2024-10-14 14:42:45.739887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.103 [2024-10-14 14:42:45.739897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.103 qpair failed and we were unable to recover it. 00:29:05.103 [2024-10-14 14:42:45.740201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.103 [2024-10-14 14:42:45.740212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.103 qpair failed and we were unable to recover it. 
00:29:05.103 [2024-10-14 14:42:45.740533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.103 [2024-10-14 14:42:45.740543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.103 qpair failed and we were unable to recover it. 00:29:05.103 [2024-10-14 14:42:45.740741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.103 [2024-10-14 14:42:45.740751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.103 qpair failed and we were unable to recover it. 00:29:05.103 [2024-10-14 14:42:45.741066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.103 [2024-10-14 14:42:45.741077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.103 qpair failed and we were unable to recover it. 00:29:05.103 [2024-10-14 14:42:45.741323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.103 [2024-10-14 14:42:45.741333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.103 qpair failed and we were unable to recover it. 00:29:05.103 [2024-10-14 14:42:45.741631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.103 [2024-10-14 14:42:45.741642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.103 qpair failed and we were unable to recover it. 
00:29:05.103 [2024-10-14 14:42:45.741944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.103 [2024-10-14 14:42:45.741955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.103 qpair failed and we were unable to recover it. 00:29:05.104 [2024-10-14 14:42:45.742230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.104 [2024-10-14 14:42:45.742240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.104 qpair failed and we were unable to recover it. 00:29:05.104 [2024-10-14 14:42:45.742641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.104 [2024-10-14 14:42:45.742652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.104 qpair failed and we were unable to recover it. 00:29:05.104 [2024-10-14 14:42:45.742932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.104 [2024-10-14 14:42:45.742942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.104 qpair failed and we were unable to recover it. 00:29:05.104 [2024-10-14 14:42:45.743221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.104 [2024-10-14 14:42:45.743232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.104 qpair failed and we were unable to recover it. 
00:29:05.104 [2024-10-14 14:42:45.743559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.104 [2024-10-14 14:42:45.743570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.104 qpair failed and we were unable to recover it. 00:29:05.104 [2024-10-14 14:42:45.743896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.104 [2024-10-14 14:42:45.743907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.104 qpair failed and we were unable to recover it. 00:29:05.104 [2024-10-14 14:42:45.744276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.104 [2024-10-14 14:42:45.744286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.104 qpair failed and we were unable to recover it. 00:29:05.104 [2024-10-14 14:42:45.744567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.104 [2024-10-14 14:42:45.744577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.104 qpair failed and we were unable to recover it. 00:29:05.104 [2024-10-14 14:42:45.744879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.104 [2024-10-14 14:42:45.744890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.104 qpair failed and we were unable to recover it. 
00:29:05.104 [2024-10-14 14:42:45.745200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.104 [2024-10-14 14:42:45.745211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.104 qpair failed and we were unable to recover it. 00:29:05.104 [2024-10-14 14:42:45.745498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.104 [2024-10-14 14:42:45.745508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.104 qpair failed and we were unable to recover it. 00:29:05.104 [2024-10-14 14:42:45.745735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.104 [2024-10-14 14:42:45.745744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.104 qpair failed and we were unable to recover it. 00:29:05.104 [2024-10-14 14:42:45.746037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.104 [2024-10-14 14:42:45.746047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.104 qpair failed and we were unable to recover it. 00:29:05.104 [2024-10-14 14:42:45.746379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.104 [2024-10-14 14:42:45.746390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.104 qpair failed and we were unable to recover it. 
00:29:05.104 [2024-10-14 14:42:45.746673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.104 [2024-10-14 14:42:45.746692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.104 qpair failed and we were unable to recover it. 00:29:05.104 [2024-10-14 14:42:45.747027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.104 [2024-10-14 14:42:45.747037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.104 qpair failed and we were unable to recover it. 00:29:05.104 [2024-10-14 14:42:45.747329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.104 [2024-10-14 14:42:45.747339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.104 qpair failed and we were unable to recover it. 00:29:05.104 [2024-10-14 14:42:45.747545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.104 [2024-10-14 14:42:45.747555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.104 qpair failed and we were unable to recover it. 00:29:05.104 [2024-10-14 14:42:45.747865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.104 [2024-10-14 14:42:45.747875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.104 qpair failed and we were unable to recover it. 
00:29:05.104 [2024-10-14 14:42:45.748149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.104 [2024-10-14 14:42:45.748159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.104 qpair failed and we were unable to recover it. 00:29:05.104 [2024-10-14 14:42:45.748267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.104 [2024-10-14 14:42:45.748277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.104 qpair failed and we were unable to recover it. 00:29:05.104 [2024-10-14 14:42:45.748564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.104 [2024-10-14 14:42:45.748574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.104 qpair failed and we were unable to recover it. 00:29:05.104 [2024-10-14 14:42:45.748876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.104 [2024-10-14 14:42:45.748885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.104 qpair failed and we were unable to recover it. 00:29:05.104 [2024-10-14 14:42:45.749168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.104 [2024-10-14 14:42:45.749178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.104 qpair failed and we were unable to recover it. 
00:29:05.104 [2024-10-14 14:42:45.749494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.104 [2024-10-14 14:42:45.749504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.104 qpair failed and we were unable to recover it. 00:29:05.104 [2024-10-14 14:42:45.749705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.104 [2024-10-14 14:42:45.749715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.104 qpair failed and we were unable to recover it. 00:29:05.104 [2024-10-14 14:42:45.749932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.104 [2024-10-14 14:42:45.749942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.104 qpair failed and we were unable to recover it. 00:29:05.104 [2024-10-14 14:42:45.750222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.104 [2024-10-14 14:42:45.750232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.104 qpair failed and we were unable to recover it. 00:29:05.104 [2024-10-14 14:42:45.750546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.104 [2024-10-14 14:42:45.750563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.104 qpair failed and we were unable to recover it. 
00:29:05.104 [2024-10-14 14:42:45.750884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.104 [2024-10-14 14:42:45.750894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.104 qpair failed and we were unable to recover it. 00:29:05.104 [2024-10-14 14:42:45.751180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.104 [2024-10-14 14:42:45.751190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.104 qpair failed and we were unable to recover it. 00:29:05.104 [2024-10-14 14:42:45.751404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.104 [2024-10-14 14:42:45.751414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.104 qpair failed and we were unable to recover it. 00:29:05.104 [2024-10-14 14:42:45.751714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.104 [2024-10-14 14:42:45.751725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.104 qpair failed and we were unable to recover it. 00:29:05.104 [2024-10-14 14:42:45.752026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.104 [2024-10-14 14:42:45.752037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.104 qpair failed and we were unable to recover it. 
00:29:05.104 [2024-10-14 14:42:45.752320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.104 [2024-10-14 14:42:45.752331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.104 qpair failed and we were unable to recover it. 00:29:05.104 [2024-10-14 14:42:45.752665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.104 [2024-10-14 14:42:45.752675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.104 qpair failed and we were unable to recover it. 00:29:05.104 [2024-10-14 14:42:45.752942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.105 [2024-10-14 14:42:45.752952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.105 qpair failed and we were unable to recover it. 00:29:05.105 [2024-10-14 14:42:45.753274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.105 [2024-10-14 14:42:45.753285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.105 qpair failed and we were unable to recover it. 00:29:05.105 [2024-10-14 14:42:45.753506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.105 [2024-10-14 14:42:45.753516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.105 qpair failed and we were unable to recover it. 
00:29:05.105 [2024-10-14 14:42:45.753798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.105 [2024-10-14 14:42:45.753809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.105 qpair failed and we were unable to recover it. 00:29:05.105 [2024-10-14 14:42:45.754060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.105 [2024-10-14 14:42:45.754075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.105 qpair failed and we were unable to recover it. 00:29:05.105 [2024-10-14 14:42:45.754389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.105 [2024-10-14 14:42:45.754399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.105 qpair failed and we were unable to recover it. 00:29:05.105 [2024-10-14 14:42:45.754679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.105 [2024-10-14 14:42:45.754689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.105 qpair failed and we were unable to recover it. 00:29:05.105 [2024-10-14 14:42:45.754896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.105 [2024-10-14 14:42:45.754906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.105 qpair failed and we were unable to recover it. 
00:29:05.105 [2024-10-14 14:42:45.755020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.105 [2024-10-14 14:42:45.755029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.105 qpair failed and we were unable to recover it. 00:29:05.105 [2024-10-14 14:42:45.755348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.105 [2024-10-14 14:42:45.755359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.105 qpair failed and we were unable to recover it. 00:29:05.105 [2024-10-14 14:42:45.755604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.105 [2024-10-14 14:42:45.755615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.105 qpair failed and we were unable to recover it. 00:29:05.105 [2024-10-14 14:42:45.755951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.105 [2024-10-14 14:42:45.755961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.105 qpair failed and we were unable to recover it. 00:29:05.105 [2024-10-14 14:42:45.756289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.105 [2024-10-14 14:42:45.756300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.105 qpair failed and we were unable to recover it. 
00:29:05.105 [2024-10-14 14:42:45.756600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.105 [2024-10-14 14:42:45.756611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.105 qpair failed and we were unable to recover it. 00:29:05.105 [2024-10-14 14:42:45.756964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.105 [2024-10-14 14:42:45.756974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.105 qpair failed and we were unable to recover it. 00:29:05.105 [2024-10-14 14:42:45.757281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.105 [2024-10-14 14:42:45.757292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.105 qpair failed and we were unable to recover it. 00:29:05.105 [2024-10-14 14:42:45.757615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.105 [2024-10-14 14:42:45.757625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.105 qpair failed and we were unable to recover it. 00:29:05.105 [2024-10-14 14:42:45.757931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.105 [2024-10-14 14:42:45.757940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.105 qpair failed and we were unable to recover it. 
00:29:05.105 [2024-10-14 14:42:45.758268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.105 [2024-10-14 14:42:45.758279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.105 qpair failed and we were unable to recover it.
[... the same three-line error sequence (posix.c:1055:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for roughly 115 consecutive reconnect attempts between 14:42:45.758 and 14:42:45.793 ...]
00:29:05.108 [2024-10-14 14:42:45.793125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.108 [2024-10-14 14:42:45.793136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.108 qpair failed and we were unable to recover it.
00:29:05.108 [2024-10-14 14:42:45.793471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.108 [2024-10-14 14:42:45.793482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.108 qpair failed and we were unable to recover it. 00:29:05.108 [2024-10-14 14:42:45.793791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.108 [2024-10-14 14:42:45.793802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.108 qpair failed and we were unable to recover it. 00:29:05.108 [2024-10-14 14:42:45.794087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.108 [2024-10-14 14:42:45.794098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.108 qpair failed and we were unable to recover it. 00:29:05.108 [2024-10-14 14:42:45.794465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.108 [2024-10-14 14:42:45.794476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.108 qpair failed and we were unable to recover it. 00:29:05.108 [2024-10-14 14:42:45.794807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.108 [2024-10-14 14:42:45.794818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.108 qpair failed and we were unable to recover it. 
00:29:05.108 [2024-10-14 14:42:45.795016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.108 [2024-10-14 14:42:45.795027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.108 qpair failed and we were unable to recover it. 00:29:05.108 [2024-10-14 14:42:45.795347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.108 [2024-10-14 14:42:45.795358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.108 qpair failed and we were unable to recover it. 00:29:05.108 [2024-10-14 14:42:45.795664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.108 [2024-10-14 14:42:45.795675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.108 qpair failed and we were unable to recover it. 00:29:05.108 [2024-10-14 14:42:45.795994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.108 [2024-10-14 14:42:45.796005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.108 qpair failed and we were unable to recover it. 00:29:05.108 [2024-10-14 14:42:45.796302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.108 [2024-10-14 14:42:45.796314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.108 qpair failed and we were unable to recover it. 
00:29:05.108 [2024-10-14 14:42:45.796621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.108 [2024-10-14 14:42:45.796631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.108 qpair failed and we were unable to recover it. 00:29:05.108 [2024-10-14 14:42:45.796934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.108 [2024-10-14 14:42:45.796946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.108 qpair failed and we were unable to recover it. 00:29:05.108 [2024-10-14 14:42:45.797131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.108 [2024-10-14 14:42:45.797142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.108 qpair failed and we were unable to recover it. 00:29:05.108 [2024-10-14 14:42:45.797461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.108 [2024-10-14 14:42:45.797472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.108 qpair failed and we were unable to recover it. 00:29:05.108 [2024-10-14 14:42:45.797783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.108 [2024-10-14 14:42:45.797794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.108 qpair failed and we were unable to recover it. 
00:29:05.108 [2024-10-14 14:42:45.798139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.108 [2024-10-14 14:42:45.798149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.108 qpair failed and we were unable to recover it. 00:29:05.108 [2024-10-14 14:42:45.798486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.108 [2024-10-14 14:42:45.798495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.108 qpair failed and we were unable to recover it. 00:29:05.108 [2024-10-14 14:42:45.798796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.108 [2024-10-14 14:42:45.798806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.108 qpair failed and we were unable to recover it. 00:29:05.108 [2024-10-14 14:42:45.799136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.108 [2024-10-14 14:42:45.799147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.108 qpair failed and we were unable to recover it. 00:29:05.108 [2024-10-14 14:42:45.799453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.108 [2024-10-14 14:42:45.799464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.108 qpair failed and we were unable to recover it. 
00:29:05.108 [2024-10-14 14:42:45.799778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.108 [2024-10-14 14:42:45.799788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.108 qpair failed and we were unable to recover it. 00:29:05.108 [2024-10-14 14:42:45.800041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.108 [2024-10-14 14:42:45.800051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.108 qpair failed and we were unable to recover it. 00:29:05.108 [2024-10-14 14:42:45.800256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.108 [2024-10-14 14:42:45.800268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.108 qpair failed and we were unable to recover it. 00:29:05.108 [2024-10-14 14:42:45.800566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.108 [2024-10-14 14:42:45.800576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.108 qpair failed and we were unable to recover it. 00:29:05.108 [2024-10-14 14:42:45.800837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.108 [2024-10-14 14:42:45.800847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.108 qpair failed and we were unable to recover it. 
00:29:05.108 [2024-10-14 14:42:45.801135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.108 [2024-10-14 14:42:45.801146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.108 qpair failed and we were unable to recover it. 00:29:05.108 [2024-10-14 14:42:45.801495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.109 [2024-10-14 14:42:45.801506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.109 qpair failed and we were unable to recover it. 00:29:05.109 [2024-10-14 14:42:45.801795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.109 [2024-10-14 14:42:45.801805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.109 qpair failed and we were unable to recover it. 00:29:05.109 [2024-10-14 14:42:45.802113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.109 [2024-10-14 14:42:45.802122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.109 qpair failed and we were unable to recover it. 00:29:05.109 [2024-10-14 14:42:45.802420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.109 [2024-10-14 14:42:45.802430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.109 qpair failed and we were unable to recover it. 
00:29:05.109 [2024-10-14 14:42:45.802727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.109 [2024-10-14 14:42:45.802737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.109 qpair failed and we were unable to recover it. 00:29:05.109 [2024-10-14 14:42:45.803015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.109 [2024-10-14 14:42:45.803026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.109 qpair failed and we were unable to recover it. 00:29:05.109 [2024-10-14 14:42:45.803343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.109 [2024-10-14 14:42:45.803355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.109 qpair failed and we were unable to recover it. 00:29:05.109 [2024-10-14 14:42:45.803659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.109 [2024-10-14 14:42:45.803670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.109 qpair failed and we were unable to recover it. 00:29:05.109 [2024-10-14 14:42:45.803985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.109 [2024-10-14 14:42:45.803996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.109 qpair failed and we were unable to recover it. 
00:29:05.109 [2024-10-14 14:42:45.804358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.109 [2024-10-14 14:42:45.804369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.109 qpair failed and we were unable to recover it. 00:29:05.109 [2024-10-14 14:42:45.804698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.109 [2024-10-14 14:42:45.804710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.109 qpair failed and we were unable to recover it. 00:29:05.384 [2024-10-14 14:42:45.805018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.385 [2024-10-14 14:42:45.805029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.385 qpair failed and we were unable to recover it. 00:29:05.385 [2024-10-14 14:42:45.805407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.385 [2024-10-14 14:42:45.805420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.385 qpair failed and we were unable to recover it. 00:29:05.385 [2024-10-14 14:42:45.805791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.385 [2024-10-14 14:42:45.805802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.385 qpair failed and we were unable to recover it. 
00:29:05.385 [2024-10-14 14:42:45.806563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.385 [2024-10-14 14:42:45.806584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.385 qpair failed and we were unable to recover it. 00:29:05.385 [2024-10-14 14:42:45.806856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.385 [2024-10-14 14:42:45.806867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.385 qpair failed and we were unable to recover it. 00:29:05.385 [2024-10-14 14:42:45.807177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.385 [2024-10-14 14:42:45.807187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.385 qpair failed and we were unable to recover it. 00:29:05.385 [2024-10-14 14:42:45.807476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.385 [2024-10-14 14:42:45.807486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.385 qpair failed and we were unable to recover it. 00:29:05.385 [2024-10-14 14:42:45.807809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.385 [2024-10-14 14:42:45.807820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.385 qpair failed and we were unable to recover it. 
00:29:05.385 [2024-10-14 14:42:45.808098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.385 [2024-10-14 14:42:45.808108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.385 qpair failed and we were unable to recover it. 00:29:05.385 [2024-10-14 14:42:45.808413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.385 [2024-10-14 14:42:45.808423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.385 qpair failed and we were unable to recover it. 00:29:05.385 [2024-10-14 14:42:45.808721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.385 [2024-10-14 14:42:45.808731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.385 qpair failed and we were unable to recover it. 00:29:05.385 [2024-10-14 14:42:45.809034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.385 [2024-10-14 14:42:45.809044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.385 qpair failed and we were unable to recover it. 00:29:05.385 [2024-10-14 14:42:45.809362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.385 [2024-10-14 14:42:45.809373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.385 qpair failed and we were unable to recover it. 
00:29:05.385 [2024-10-14 14:42:45.809682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.385 [2024-10-14 14:42:45.809693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.385 qpair failed and we were unable to recover it. 00:29:05.385 [2024-10-14 14:42:45.809983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.385 [2024-10-14 14:42:45.809993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.385 qpair failed and we were unable to recover it. 00:29:05.385 [2024-10-14 14:42:45.810290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.385 [2024-10-14 14:42:45.810301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.385 qpair failed and we were unable to recover it. 00:29:05.385 [2024-10-14 14:42:45.810606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.385 [2024-10-14 14:42:45.810616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.385 qpair failed and we were unable to recover it. 00:29:05.385 [2024-10-14 14:42:45.810849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.385 [2024-10-14 14:42:45.810859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.385 qpair failed and we were unable to recover it. 
00:29:05.385 [2024-10-14 14:42:45.810997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.385 [2024-10-14 14:42:45.811007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.385 qpair failed and we were unable to recover it. 00:29:05.385 [2024-10-14 14:42:45.811215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.385 [2024-10-14 14:42:45.811226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.385 qpair failed and we were unable to recover it. 00:29:05.385 [2024-10-14 14:42:45.811541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.385 [2024-10-14 14:42:45.811552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.385 qpair failed and we were unable to recover it. 00:29:05.385 [2024-10-14 14:42:45.811841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.385 [2024-10-14 14:42:45.811851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.385 qpair failed and we were unable to recover it. 00:29:05.385 [2024-10-14 14:42:45.812185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.385 [2024-10-14 14:42:45.812195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.385 qpair failed and we were unable to recover it. 
00:29:05.385 [2024-10-14 14:42:45.812493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.385 [2024-10-14 14:42:45.812503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.385 qpair failed and we were unable to recover it. 00:29:05.385 [2024-10-14 14:42:45.812815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.385 [2024-10-14 14:42:45.812826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.385 qpair failed and we were unable to recover it. 00:29:05.385 [2024-10-14 14:42:45.813029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.385 [2024-10-14 14:42:45.813039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.385 qpair failed and we were unable to recover it. 00:29:05.385 [2024-10-14 14:42:45.813347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.385 [2024-10-14 14:42:45.813358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.385 qpair failed and we were unable to recover it. 00:29:05.385 [2024-10-14 14:42:45.813585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.385 [2024-10-14 14:42:45.813596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.385 qpair failed and we were unable to recover it. 
00:29:05.385 [2024-10-14 14:42:45.813900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.385 [2024-10-14 14:42:45.813910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.385 qpair failed and we were unable to recover it. 00:29:05.385 [2024-10-14 14:42:45.814220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.385 [2024-10-14 14:42:45.814230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.385 qpair failed and we were unable to recover it. 00:29:05.385 [2024-10-14 14:42:45.814506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.385 [2024-10-14 14:42:45.814516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.385 qpair failed and we were unable to recover it. 00:29:05.385 [2024-10-14 14:42:45.814820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.385 [2024-10-14 14:42:45.814830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.385 qpair failed and we were unable to recover it. 00:29:05.385 [2024-10-14 14:42:45.815133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.385 [2024-10-14 14:42:45.815143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.385 qpair failed and we were unable to recover it. 
00:29:05.385 [2024-10-14 14:42:45.815455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.385 [2024-10-14 14:42:45.815466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.385 qpair failed and we were unable to recover it. 00:29:05.385 [2024-10-14 14:42:45.815750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.385 [2024-10-14 14:42:45.815760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.385 qpair failed and we were unable to recover it. 00:29:05.385 [2024-10-14 14:42:45.816148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.385 [2024-10-14 14:42:45.816161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.385 qpair failed and we were unable to recover it. 00:29:05.385 [2024-10-14 14:42:45.816483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.385 [2024-10-14 14:42:45.816494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.385 qpair failed and we were unable to recover it. 00:29:05.385 [2024-10-14 14:42:45.816811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.385 [2024-10-14 14:42:45.816821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.385 qpair failed and we were unable to recover it. 
00:29:05.385 [2024-10-14 14:42:45.817157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.385 [2024-10-14 14:42:45.817168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.385 qpair failed and we were unable to recover it.
00:29:05.385 [2024-10-14 14:42:45.817524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.385 [2024-10-14 14:42:45.817534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.385 qpair failed and we were unable to recover it.
00:29:05.386 [2024-10-14 14:42:45.817816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.386 [2024-10-14 14:42:45.817829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.386 qpair failed and we were unable to recover it.
00:29:05.386 [2024-10-14 14:42:45.818099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.386 [2024-10-14 14:42:45.818110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.386 qpair failed and we were unable to recover it.
00:29:05.386 [2024-10-14 14:42:45.818494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.386 [2024-10-14 14:42:45.818504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.386 qpair failed and we were unable to recover it.
00:29:05.386 [2024-10-14 14:42:45.818780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.386 [2024-10-14 14:42:45.818790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.386 qpair failed and we were unable to recover it.
00:29:05.386 [2024-10-14 14:42:45.819070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.386 [2024-10-14 14:42:45.819080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.386 qpair failed and we were unable to recover it.
00:29:05.386 [2024-10-14 14:42:45.819414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.386 [2024-10-14 14:42:45.819424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.386 qpair failed and we were unable to recover it.
00:29:05.386 [2024-10-14 14:42:45.819555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.386 [2024-10-14 14:42:45.819565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.386 qpair failed and we were unable to recover it.
00:29:05.386 [2024-10-14 14:42:45.819853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.386 [2024-10-14 14:42:45.819863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.386 qpair failed and we were unable to recover it.
00:29:05.386 [2024-10-14 14:42:45.820166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.386 [2024-10-14 14:42:45.820177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.386 qpair failed and we were unable to recover it.
00:29:05.386 [2024-10-14 14:42:45.820469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.386 [2024-10-14 14:42:45.820479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.386 qpair failed and we were unable to recover it.
00:29:05.386 [2024-10-14 14:42:45.820763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.386 [2024-10-14 14:42:45.820773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.386 qpair failed and we were unable to recover it.
00:29:05.386 [2024-10-14 14:42:45.821020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.386 [2024-10-14 14:42:45.821030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.386 qpair failed and we were unable to recover it.
00:29:05.386 [2024-10-14 14:42:45.821325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.386 [2024-10-14 14:42:45.821336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.386 qpair failed and we were unable to recover it.
00:29:05.386 [2024-10-14 14:42:45.821625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.386 [2024-10-14 14:42:45.821634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.386 qpair failed and we were unable to recover it.
00:29:05.386 [2024-10-14 14:42:45.821828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.386 [2024-10-14 14:42:45.821839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.386 qpair failed and we were unable to recover it.
00:29:05.386 [2024-10-14 14:42:45.822185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.386 [2024-10-14 14:42:45.822197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.386 qpair failed and we were unable to recover it.
00:29:05.386 [2024-10-14 14:42:45.822566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.386 [2024-10-14 14:42:45.822577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.386 qpair failed and we were unable to recover it.
00:29:05.386 [2024-10-14 14:42:45.822861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.386 [2024-10-14 14:42:45.822872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.386 qpair failed and we were unable to recover it.
00:29:05.386 [2024-10-14 14:42:45.823173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.386 [2024-10-14 14:42:45.823184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.386 qpair failed and we were unable to recover it.
00:29:05.386 [2024-10-14 14:42:45.823475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.386 [2024-10-14 14:42:45.823485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.386 qpair failed and we were unable to recover it.
00:29:05.386 [2024-10-14 14:42:45.823749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.386 [2024-10-14 14:42:45.823759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.386 qpair failed and we were unable to recover it.
00:29:05.386 [2024-10-14 14:42:45.823954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.386 [2024-10-14 14:42:45.823964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.386 qpair failed and we were unable to recover it.
00:29:05.386 [2024-10-14 14:42:45.824361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.386 [2024-10-14 14:42:45.824373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.386 qpair failed and we were unable to recover it.
00:29:05.386 [2024-10-14 14:42:45.824681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.386 [2024-10-14 14:42:45.824691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.386 qpair failed and we were unable to recover it.
00:29:05.386 [2024-10-14 14:42:45.824915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.386 [2024-10-14 14:42:45.824925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.386 qpair failed and we were unable to recover it.
00:29:05.386 [2024-10-14 14:42:45.825209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.386 [2024-10-14 14:42:45.825220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.386 qpair failed and we were unable to recover it.
00:29:05.386 [2024-10-14 14:42:45.825310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.386 [2024-10-14 14:42:45.825320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.386 qpair failed and we were unable to recover it.
00:29:05.386 [2024-10-14 14:42:45.825621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.386 [2024-10-14 14:42:45.825633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.386 qpair failed and we were unable to recover it.
00:29:05.386 [2024-10-14 14:42:45.825805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.386 [2024-10-14 14:42:45.825816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.386 qpair failed and we were unable to recover it.
00:29:05.386 [2024-10-14 14:42:45.826130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.386 [2024-10-14 14:42:45.826142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.386 qpair failed and we were unable to recover it.
00:29:05.386 [2024-10-14 14:42:45.826468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.386 [2024-10-14 14:42:45.826479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.386 qpair failed and we were unable to recover it.
00:29:05.386 [2024-10-14 14:42:45.826776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.386 [2024-10-14 14:42:45.826787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.386 qpair failed and we were unable to recover it.
00:29:05.386 [2024-10-14 14:42:45.827095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.386 [2024-10-14 14:42:45.827105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.386 qpair failed and we were unable to recover it.
00:29:05.386 [2024-10-14 14:42:45.827306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.386 [2024-10-14 14:42:45.827316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.386 qpair failed and we were unable to recover it.
00:29:05.386 [2024-10-14 14:42:45.827528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.386 [2024-10-14 14:42:45.827538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.386 qpair failed and we were unable to recover it.
00:29:05.386 [2024-10-14 14:42:45.827930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.386 [2024-10-14 14:42:45.827941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.386 qpair failed and we were unable to recover it.
00:29:05.386 [2024-10-14 14:42:45.828235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.386 [2024-10-14 14:42:45.828248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.386 qpair failed and we were unable to recover it.
00:29:05.386 [2024-10-14 14:42:45.828605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.386 [2024-10-14 14:42:45.828616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.386 qpair failed and we were unable to recover it.
00:29:05.386 [2024-10-14 14:42:45.828801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.386 [2024-10-14 14:42:45.828811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.386 qpair failed and we were unable to recover it.
00:29:05.386 [2024-10-14 14:42:45.829176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.386 [2024-10-14 14:42:45.829187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.386 qpair failed and we were unable to recover it.
00:29:05.386 [2024-10-14 14:42:45.829474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.386 [2024-10-14 14:42:45.829486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.387 qpair failed and we were unable to recover it.
00:29:05.387 [2024-10-14 14:42:45.829817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.387 [2024-10-14 14:42:45.829827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.387 qpair failed and we were unable to recover it.
00:29:05.387 [2024-10-14 14:42:45.830122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.387 [2024-10-14 14:42:45.830133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.387 qpair failed and we were unable to recover it.
00:29:05.387 [2024-10-14 14:42:45.830391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.387 [2024-10-14 14:42:45.830401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.387 qpair failed and we were unable to recover it.
00:29:05.387 [2024-10-14 14:42:45.830687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.387 [2024-10-14 14:42:45.830697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.387 qpair failed and we were unable to recover it.
00:29:05.387 [2024-10-14 14:42:45.831013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.387 [2024-10-14 14:42:45.831023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.387 qpair failed and we were unable to recover it.
00:29:05.387 [2024-10-14 14:42:45.831354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.387 [2024-10-14 14:42:45.831365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.387 qpair failed and we were unable to recover it.
00:29:05.387 [2024-10-14 14:42:45.831587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.387 [2024-10-14 14:42:45.831597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.387 qpair failed and we were unable to recover it.
00:29:05.387 [2024-10-14 14:42:45.831764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.387 [2024-10-14 14:42:45.831775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.387 qpair failed and we were unable to recover it.
00:29:05.387 [2024-10-14 14:42:45.831986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.387 [2024-10-14 14:42:45.831997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.387 qpair failed and we were unable to recover it.
00:29:05.387 [2024-10-14 14:42:45.832350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.387 [2024-10-14 14:42:45.832362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.387 qpair failed and we were unable to recover it.
00:29:05.387 [2024-10-14 14:42:45.832641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.387 [2024-10-14 14:42:45.832652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.387 qpair failed and we were unable to recover it.
00:29:05.387 [2024-10-14 14:42:45.832956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.387 [2024-10-14 14:42:45.832968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.387 qpair failed and we were unable to recover it.
00:29:05.387 [2024-10-14 14:42:45.833144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.387 [2024-10-14 14:42:45.833155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.387 qpair failed and we were unable to recover it.
00:29:05.387 [2024-10-14 14:42:45.833457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.387 [2024-10-14 14:42:45.833469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.387 qpair failed and we were unable to recover it.
00:29:05.387 [2024-10-14 14:42:45.833777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.387 [2024-10-14 14:42:45.833788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.387 qpair failed and we were unable to recover it.
00:29:05.387 [2024-10-14 14:42:45.834094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.387 [2024-10-14 14:42:45.834105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.387 qpair failed and we were unable to recover it.
00:29:05.387 [2024-10-14 14:42:45.834395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.387 [2024-10-14 14:42:45.834405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.387 qpair failed and we were unable to recover it.
00:29:05.387 [2024-10-14 14:42:45.834602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.387 [2024-10-14 14:42:45.834612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.387 qpair failed and we were unable to recover it.
00:29:05.387 [2024-10-14 14:42:45.834842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.387 [2024-10-14 14:42:45.834853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.387 qpair failed and we were unable to recover it.
00:29:05.387 [2024-10-14 14:42:45.835038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.387 [2024-10-14 14:42:45.835048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.387 qpair failed and we were unable to recover it.
00:29:05.387 [2024-10-14 14:42:45.835376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.387 [2024-10-14 14:42:45.835386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.387 qpair failed and we were unable to recover it.
00:29:05.387 [2024-10-14 14:42:45.835741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.387 [2024-10-14 14:42:45.835751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.387 qpair failed and we were unable to recover it.
00:29:05.387 [2024-10-14 14:42:45.836058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.387 [2024-10-14 14:42:45.836080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.387 qpair failed and we were unable to recover it.
00:29:05.387 [2024-10-14 14:42:45.836275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.387 [2024-10-14 14:42:45.836286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.387 qpair failed and we were unable to recover it.
00:29:05.387 [2024-10-14 14:42:45.836395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.387 [2024-10-14 14:42:45.836405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.387 qpair failed and we were unable to recover it.
00:29:05.387 [2024-10-14 14:42:45.836626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.387 [2024-10-14 14:42:45.836636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.387 qpair failed and we were unable to recover it.
00:29:05.387 [2024-10-14 14:42:45.836967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.387 [2024-10-14 14:42:45.836978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.387 qpair failed and we were unable to recover it.
00:29:05.387 [2024-10-14 14:42:45.837184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.387 [2024-10-14 14:42:45.837198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.387 qpair failed and we were unable to recover it.
00:29:05.387 [2024-10-14 14:42:45.837424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.387 [2024-10-14 14:42:45.837435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.387 qpair failed and we were unable to recover it.
00:29:05.387 [2024-10-14 14:42:45.837711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.387 [2024-10-14 14:42:45.837722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.387 qpair failed and we were unable to recover it.
00:29:05.387 [2024-10-14 14:42:45.838017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.387 [2024-10-14 14:42:45.838027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.387 qpair failed and we were unable to recover it.
00:29:05.387 [2024-10-14 14:42:45.838324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.387 [2024-10-14 14:42:45.838335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.387 qpair failed and we were unable to recover it.
00:29:05.387 [2024-10-14 14:42:45.838664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.387 [2024-10-14 14:42:45.838674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.387 qpair failed and we were unable to recover it.
00:29:05.387 [2024-10-14 14:42:45.838997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.387 [2024-10-14 14:42:45.839006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.387 qpair failed and we were unable to recover it.
00:29:05.387 [2024-10-14 14:42:45.839317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.387 [2024-10-14 14:42:45.839327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.387 qpair failed and we were unable to recover it.
00:29:05.387 [2024-10-14 14:42:45.839695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.387 [2024-10-14 14:42:45.839705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.387 qpair failed and we were unable to recover it.
00:29:05.387 [2024-10-14 14:42:45.840022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.387 [2024-10-14 14:42:45.840032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.387 qpair failed and we were unable to recover it.
00:29:05.387 [2024-10-14 14:42:45.840379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.387 [2024-10-14 14:42:45.840389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.387 qpair failed and we were unable to recover it.
00:29:05.387 [2024-10-14 14:42:45.840679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.387 [2024-10-14 14:42:45.840690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.387 qpair failed and we were unable to recover it.
00:29:05.387 [2024-10-14 14:42:45.841001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.387 [2024-10-14 14:42:45.841011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.387 qpair failed and we were unable to recover it.
00:29:05.387 [2024-10-14 14:42:45.841272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.388 [2024-10-14 14:42:45.841283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.388 qpair failed and we were unable to recover it.
00:29:05.388 [2024-10-14 14:42:45.841605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.388 [2024-10-14 14:42:45.841615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.388 qpair failed and we were unable to recover it.
00:29:05.388 [2024-10-14 14:42:45.841893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.388 [2024-10-14 14:42:45.841903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.388 qpair failed and we were unable to recover it.
00:29:05.388 [2024-10-14 14:42:45.842201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.388 [2024-10-14 14:42:45.842212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.388 qpair failed and we were unable to recover it.
00:29:05.388 [2024-10-14 14:42:45.842558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.388 [2024-10-14 14:42:45.842569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.388 qpair failed and we were unable to recover it.
00:29:05.388 [2024-10-14 14:42:45.842855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.388 [2024-10-14 14:42:45.842865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.388 qpair failed and we were unable to recover it.
00:29:05.388 [2024-10-14 14:42:45.843170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.388 [2024-10-14 14:42:45.843180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.388 qpair failed and we were unable to recover it.
00:29:05.388 [2024-10-14 14:42:45.843503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.388 [2024-10-14 14:42:45.843514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.388 qpair failed and we were unable to recover it.
00:29:05.388 [2024-10-14 14:42:45.843855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.388 [2024-10-14 14:42:45.843866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.388 qpair failed and we were unable to recover it.
00:29:05.388 [2024-10-14 14:42:45.844250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.388 [2024-10-14 14:42:45.844261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.388 qpair failed and we were unable to recover it.
00:29:05.388 [2024-10-14 14:42:45.844601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.388 [2024-10-14 14:42:45.844611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.388 qpair failed and we were unable to recover it.
00:29:05.388 [2024-10-14 14:42:45.844927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.388 [2024-10-14 14:42:45.844937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.388 qpair failed and we were unable to recover it.
00:29:05.388 [2024-10-14 14:42:45.845249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.388 [2024-10-14 14:42:45.845261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.388 qpair failed and we were unable to recover it.
00:29:05.388 [2024-10-14 14:42:45.845587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.388 [2024-10-14 14:42:45.845598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.388 qpair failed and we were unable to recover it.
00:29:05.388 [2024-10-14 14:42:45.845906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.388 [2024-10-14 14:42:45.845919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.388 qpair failed and we were unable to recover it.
00:29:05.388 [2024-10-14 14:42:45.846118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.388 [2024-10-14 14:42:45.846128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.388 qpair failed and we were unable to recover it.
00:29:05.388 [2024-10-14 14:42:45.846460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.388 [2024-10-14 14:42:45.846470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.388 qpair failed and we were unable to recover it.
00:29:05.388 [2024-10-14 14:42:45.846829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.388 [2024-10-14 14:42:45.846839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.388 qpair failed and we were unable to recover it.
00:29:05.388 [2024-10-14 14:42:45.847151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.388 [2024-10-14 14:42:45.847162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.388 qpair failed and we were unable to recover it.
00:29:05.388 [2024-10-14 14:42:45.847466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.388 [2024-10-14 14:42:45.847477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.388 qpair failed and we were unable to recover it.
00:29:05.388 [2024-10-14 14:42:45.847857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.388 [2024-10-14 14:42:45.847868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.388 qpair failed and we were unable to recover it.
00:29:05.388 [2024-10-14 14:42:45.848189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.388 [2024-10-14 14:42:45.848199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.388 qpair failed and we were unable to recover it.
00:29:05.388 [2024-10-14 14:42:45.848428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.388 [2024-10-14 14:42:45.848439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.388 qpair failed and we were unable to recover it.
00:29:05.388 [2024-10-14 14:42:45.848717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.388 [2024-10-14 14:42:45.848727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.388 qpair failed and we were unable to recover it.
00:29:05.388 [2024-10-14 14:42:45.848950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.388 [2024-10-14 14:42:45.848960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.388 qpair failed and we were unable to recover it.
00:29:05.388 [2024-10-14 14:42:45.849223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.388 [2024-10-14 14:42:45.849233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.388 qpair failed and we were unable to recover it.
00:29:05.388 [2024-10-14 14:42:45.849561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.388 [2024-10-14 14:42:45.849571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.388 qpair failed and we were unable to recover it.
00:29:05.388 [2024-10-14 14:42:45.849867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.388 [2024-10-14 14:42:45.849876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.388 qpair failed and we were unable to recover it.
00:29:05.388 [2024-10-14 14:42:45.850068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.388 [2024-10-14 14:42:45.850078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.388 qpair failed and we were unable to recover it.
00:29:05.388 [2024-10-14 14:42:45.850404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.388 [2024-10-14 14:42:45.850414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.388 qpair failed and we were unable to recover it.
00:29:05.388 [2024-10-14 14:42:45.850751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.388 [2024-10-14 14:42:45.850761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.388 qpair failed and we were unable to recover it.
00:29:05.388 [2024-10-14 14:42:45.851049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.388 [2024-10-14 14:42:45.851059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.388 qpair failed and we were unable to recover it.
00:29:05.388 [2024-10-14 14:42:45.851429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.388 [2024-10-14 14:42:45.851440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.388 qpair failed and we were unable to recover it. 00:29:05.388 [2024-10-14 14:42:45.851720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.388 [2024-10-14 14:42:45.851730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.388 qpair failed and we were unable to recover it. 00:29:05.388 [2024-10-14 14:42:45.852067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.388 [2024-10-14 14:42:45.852079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.388 qpair failed and we were unable to recover it. 00:29:05.388 [2024-10-14 14:42:45.852393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.388 [2024-10-14 14:42:45.852403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.388 qpair failed and we were unable to recover it. 00:29:05.388 [2024-10-14 14:42:45.852671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.388 [2024-10-14 14:42:45.852681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.388 qpair failed and we were unable to recover it. 
00:29:05.388 [2024-10-14 14:42:45.852889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.388 [2024-10-14 14:42:45.852900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.388 qpair failed and we were unable to recover it. 00:29:05.388 [2024-10-14 14:42:45.853067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.388 [2024-10-14 14:42:45.853078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.388 qpair failed and we were unable to recover it. 00:29:05.388 [2024-10-14 14:42:45.853392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.388 [2024-10-14 14:42:45.853403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.388 qpair failed and we were unable to recover it. 00:29:05.388 [2024-10-14 14:42:45.853714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.388 [2024-10-14 14:42:45.853723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.388 qpair failed and we were unable to recover it. 00:29:05.388 [2024-10-14 14:42:45.854035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.389 [2024-10-14 14:42:45.854044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.389 qpair failed and we were unable to recover it. 
00:29:05.389 [2024-10-14 14:42:45.854275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.389 [2024-10-14 14:42:45.854285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.389 qpair failed and we were unable to recover it. 00:29:05.389 [2024-10-14 14:42:45.854613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.389 [2024-10-14 14:42:45.854622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.389 qpair failed and we were unable to recover it. 00:29:05.389 [2024-10-14 14:42:45.854809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.389 [2024-10-14 14:42:45.854819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.389 qpair failed and we were unable to recover it. 00:29:05.389 [2024-10-14 14:42:45.855138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.389 [2024-10-14 14:42:45.855148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.389 qpair failed and we were unable to recover it. 00:29:05.389 [2024-10-14 14:42:45.855351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.389 [2024-10-14 14:42:45.855360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.389 qpair failed and we were unable to recover it. 
00:29:05.389 [2024-10-14 14:42:45.855689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.389 [2024-10-14 14:42:45.855699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.389 qpair failed and we were unable to recover it. 00:29:05.389 [2024-10-14 14:42:45.855799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.389 [2024-10-14 14:42:45.855808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.389 qpair failed and we were unable to recover it. 00:29:05.389 [2024-10-14 14:42:45.856111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.389 [2024-10-14 14:42:45.856160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.389 qpair failed and we were unable to recover it. 00:29:05.389 [2024-10-14 14:42:45.856468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.389 [2024-10-14 14:42:45.856478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.389 qpair failed and we were unable to recover it. 00:29:05.389 [2024-10-14 14:42:45.856768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.389 [2024-10-14 14:42:45.856778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.389 qpair failed and we were unable to recover it. 
00:29:05.389 [2024-10-14 14:42:45.857107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.389 [2024-10-14 14:42:45.857117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.389 qpair failed and we were unable to recover it. 00:29:05.389 [2024-10-14 14:42:45.857419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.389 [2024-10-14 14:42:45.857429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.389 qpair failed and we were unable to recover it. 00:29:05.389 [2024-10-14 14:42:45.857742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.389 [2024-10-14 14:42:45.857751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.389 qpair failed and we were unable to recover it. 00:29:05.389 [2024-10-14 14:42:45.858108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.389 [2024-10-14 14:42:45.858120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.389 qpair failed and we were unable to recover it. 00:29:05.389 [2024-10-14 14:42:45.858425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.389 [2024-10-14 14:42:45.858435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.389 qpair failed and we were unable to recover it. 
00:29:05.389 [2024-10-14 14:42:45.858714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.389 [2024-10-14 14:42:45.858724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.389 qpair failed and we were unable to recover it. 00:29:05.389 [2024-10-14 14:42:45.859017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.389 [2024-10-14 14:42:45.859027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.389 qpair failed and we were unable to recover it. 00:29:05.389 [2024-10-14 14:42:45.859338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.389 [2024-10-14 14:42:45.859349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.389 qpair failed and we were unable to recover it. 00:29:05.389 [2024-10-14 14:42:45.859668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.389 [2024-10-14 14:42:45.859678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.389 qpair failed and we were unable to recover it. 00:29:05.389 [2024-10-14 14:42:45.859903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.389 [2024-10-14 14:42:45.859914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.389 qpair failed and we were unable to recover it. 
00:29:05.389 [2024-10-14 14:42:45.860252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.389 [2024-10-14 14:42:45.860262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.389 qpair failed and we were unable to recover it. 00:29:05.389 [2024-10-14 14:42:45.860566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.389 [2024-10-14 14:42:45.860584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.389 qpair failed and we were unable to recover it. 00:29:05.389 [2024-10-14 14:42:45.860931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.389 [2024-10-14 14:42:45.860941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.389 qpair failed and we were unable to recover it. 00:29:05.389 [2024-10-14 14:42:45.861295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.389 [2024-10-14 14:42:45.861305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.389 qpair failed and we were unable to recover it. 00:29:05.389 [2024-10-14 14:42:45.861634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.389 [2024-10-14 14:42:45.861644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.389 qpair failed and we were unable to recover it. 
00:29:05.389 [2024-10-14 14:42:45.861831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.389 [2024-10-14 14:42:45.861841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.389 qpair failed and we were unable to recover it. 00:29:05.389 [2024-10-14 14:42:45.862178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.389 [2024-10-14 14:42:45.862188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.389 qpair failed and we were unable to recover it. 00:29:05.389 [2024-10-14 14:42:45.862488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.389 [2024-10-14 14:42:45.862499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.389 qpair failed and we were unable to recover it. 00:29:05.389 [2024-10-14 14:42:45.862809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.389 [2024-10-14 14:42:45.862819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.389 qpair failed and we were unable to recover it. 00:29:05.389 [2024-10-14 14:42:45.863097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.389 [2024-10-14 14:42:45.863107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.389 qpair failed and we were unable to recover it. 
00:29:05.389 [2024-10-14 14:42:45.863323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.389 [2024-10-14 14:42:45.863333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.389 qpair failed and we were unable to recover it. 00:29:05.389 [2024-10-14 14:42:45.863624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.389 [2024-10-14 14:42:45.863633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.389 qpair failed and we were unable to recover it. 00:29:05.389 [2024-10-14 14:42:45.863953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.389 [2024-10-14 14:42:45.863962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.389 qpair failed and we were unable to recover it. 00:29:05.390 [2024-10-14 14:42:45.864264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.390 [2024-10-14 14:42:45.864274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.390 qpair failed and we were unable to recover it. 00:29:05.390 [2024-10-14 14:42:45.864596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.390 [2024-10-14 14:42:45.864606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.390 qpair failed and we were unable to recover it. 
00:29:05.390 [2024-10-14 14:42:45.864842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.390 [2024-10-14 14:42:45.864852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.390 qpair failed and we were unable to recover it. 00:29:05.390 [2024-10-14 14:42:45.865016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.390 [2024-10-14 14:42:45.865027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.390 qpair failed and we were unable to recover it. 00:29:05.390 [2024-10-14 14:42:45.865322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.390 [2024-10-14 14:42:45.865332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.390 qpair failed and we were unable to recover it. 00:29:05.390 [2024-10-14 14:42:45.865535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.390 [2024-10-14 14:42:45.865545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.390 qpair failed and we were unable to recover it. 00:29:05.390 [2024-10-14 14:42:45.865934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.390 [2024-10-14 14:42:45.865944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.390 qpair failed and we were unable to recover it. 
00:29:05.390 [2024-10-14 14:42:45.866132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.390 [2024-10-14 14:42:45.866142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.390 qpair failed and we were unable to recover it. 00:29:05.390 [2024-10-14 14:42:45.866459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.390 [2024-10-14 14:42:45.866470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.390 qpair failed and we were unable to recover it. 00:29:05.390 [2024-10-14 14:42:45.866730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.390 [2024-10-14 14:42:45.866740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.390 qpair failed and we were unable to recover it. 00:29:05.390 [2024-10-14 14:42:45.867020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.390 [2024-10-14 14:42:45.867030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.390 qpair failed and we were unable to recover it. 00:29:05.390 [2024-10-14 14:42:45.867439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.390 [2024-10-14 14:42:45.867450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.390 qpair failed and we were unable to recover it. 
00:29:05.390 [2024-10-14 14:42:45.867785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.390 [2024-10-14 14:42:45.867796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.390 qpair failed and we were unable to recover it. 00:29:05.390 [2024-10-14 14:42:45.868105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.390 [2024-10-14 14:42:45.868115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.390 qpair failed and we were unable to recover it. 00:29:05.390 [2024-10-14 14:42:45.868377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.390 [2024-10-14 14:42:45.868387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.390 qpair failed and we were unable to recover it. 00:29:05.390 [2024-10-14 14:42:45.868736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.390 [2024-10-14 14:42:45.868748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.390 qpair failed and we were unable to recover it. 00:29:05.390 [2024-10-14 14:42:45.869084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.390 [2024-10-14 14:42:45.869095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.390 qpair failed and we were unable to recover it. 
00:29:05.390 [2024-10-14 14:42:45.869308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.390 [2024-10-14 14:42:45.869318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.390 qpair failed and we were unable to recover it. 00:29:05.390 [2024-10-14 14:42:45.869512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.390 [2024-10-14 14:42:45.869523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.390 qpair failed and we were unable to recover it. 00:29:05.390 [2024-10-14 14:42:45.869852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.390 [2024-10-14 14:42:45.869863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.390 qpair failed and we were unable to recover it. 00:29:05.390 [2024-10-14 14:42:45.870171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.390 [2024-10-14 14:42:45.870182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.390 qpair failed and we were unable to recover it. 00:29:05.390 [2024-10-14 14:42:45.870508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.390 [2024-10-14 14:42:45.870520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.390 qpair failed and we were unable to recover it. 
00:29:05.390 [2024-10-14 14:42:45.870828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.390 [2024-10-14 14:42:45.870838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.390 qpair failed and we were unable to recover it. 00:29:05.390 [2024-10-14 14:42:45.871097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.390 [2024-10-14 14:42:45.871108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.390 qpair failed and we were unable to recover it. 00:29:05.390 [2024-10-14 14:42:45.871428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.390 [2024-10-14 14:42:45.871439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.390 qpair failed and we were unable to recover it. 00:29:05.390 [2024-10-14 14:42:45.871641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.390 [2024-10-14 14:42:45.871651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.390 qpair failed and we were unable to recover it. 00:29:05.390 [2024-10-14 14:42:45.871912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.390 [2024-10-14 14:42:45.871922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.390 qpair failed and we were unable to recover it. 
00:29:05.390 [2024-10-14 14:42:45.872287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.390 [2024-10-14 14:42:45.872297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.390 qpair failed and we were unable to recover it. 00:29:05.390 [2024-10-14 14:42:45.872588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.390 [2024-10-14 14:42:45.872599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.390 qpair failed and we were unable to recover it. 00:29:05.390 [2024-10-14 14:42:45.872902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.390 [2024-10-14 14:42:45.872912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.390 qpair failed and we were unable to recover it. 00:29:05.390 [2024-10-14 14:42:45.873293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.390 [2024-10-14 14:42:45.873303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.390 qpair failed and we were unable to recover it. 00:29:05.390 [2024-10-14 14:42:45.873598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.390 [2024-10-14 14:42:45.873609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.390 qpair failed and we were unable to recover it. 
00:29:05.390 [2024-10-14 14:42:45.873903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.390 [2024-10-14 14:42:45.873913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.390 qpair failed and we were unable to recover it.
[the identical connect()/qpair error pair above (errno = 111, ECONNREFUSED) repeats continuously for tqpair=0x8de550 at 10.0.0.2:4420 through 2024-10-14 14:42:45.908860; repeats elided]
00:29:05.393 [2024-10-14 14:42:45.909152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.393 [2024-10-14 14:42:45.909162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.393 qpair failed and we were unable to recover it. 00:29:05.393 [2024-10-14 14:42:45.909359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.393 [2024-10-14 14:42:45.909369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.393 qpair failed and we were unable to recover it. 00:29:05.393 [2024-10-14 14:42:45.909737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.393 [2024-10-14 14:42:45.909747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.393 qpair failed and we were unable to recover it. 00:29:05.393 [2024-10-14 14:42:45.910070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.393 [2024-10-14 14:42:45.910080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.393 qpair failed and we were unable to recover it. 00:29:05.393 [2024-10-14 14:42:45.910155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.393 [2024-10-14 14:42:45.910165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.393 qpair failed and we were unable to recover it. 
00:29:05.393 [2024-10-14 14:42:45.910387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.393 [2024-10-14 14:42:45.910398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.393 qpair failed and we were unable to recover it. 00:29:05.393 [2024-10-14 14:42:45.910700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.393 [2024-10-14 14:42:45.910710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.393 qpair failed and we were unable to recover it. 00:29:05.393 [2024-10-14 14:42:45.910988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.393 [2024-10-14 14:42:45.910999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.393 qpair failed and we were unable to recover it. 00:29:05.393 [2024-10-14 14:42:45.911346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.393 [2024-10-14 14:42:45.911356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.393 qpair failed and we were unable to recover it. 00:29:05.393 [2024-10-14 14:42:45.911640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.393 [2024-10-14 14:42:45.911651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.393 qpair failed and we were unable to recover it. 
00:29:05.393 [2024-10-14 14:42:45.911987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.393 [2024-10-14 14:42:45.912000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.393 qpair failed and we were unable to recover it. 00:29:05.393 [2024-10-14 14:42:45.912193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.394 [2024-10-14 14:42:45.912204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.394 qpair failed and we were unable to recover it. 00:29:05.394 [2024-10-14 14:42:45.912479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.394 [2024-10-14 14:42:45.912489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.394 qpair failed and we were unable to recover it. 00:29:05.394 [2024-10-14 14:42:45.912826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.394 [2024-10-14 14:42:45.912836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.394 qpair failed and we were unable to recover it. 00:29:05.394 [2024-10-14 14:42:45.913219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.394 [2024-10-14 14:42:45.913229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.394 qpair failed and we were unable to recover it. 
00:29:05.394 [2024-10-14 14:42:45.913511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.394 [2024-10-14 14:42:45.913520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.394 qpair failed and we were unable to recover it. 00:29:05.394 [2024-10-14 14:42:45.913861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.394 [2024-10-14 14:42:45.913871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.394 qpair failed and we were unable to recover it. 00:29:05.394 [2024-10-14 14:42:45.914096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.394 [2024-10-14 14:42:45.914107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.394 qpair failed and we were unable to recover it. 00:29:05.394 [2024-10-14 14:42:45.914379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.394 [2024-10-14 14:42:45.914388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.394 qpair failed and we were unable to recover it. 00:29:05.394 [2024-10-14 14:42:45.914709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.394 [2024-10-14 14:42:45.914719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.394 qpair failed and we were unable to recover it. 
00:29:05.394 [2024-10-14 14:42:45.915026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.394 [2024-10-14 14:42:45.915036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.394 qpair failed and we were unable to recover it. 00:29:05.394 [2024-10-14 14:42:45.915336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.394 [2024-10-14 14:42:45.915346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.394 qpair failed and we were unable to recover it. 00:29:05.394 [2024-10-14 14:42:45.915657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.394 [2024-10-14 14:42:45.915667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.394 qpair failed and we were unable to recover it. 00:29:05.394 [2024-10-14 14:42:45.915993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.394 [2024-10-14 14:42:45.916002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.394 qpair failed and we were unable to recover it. 00:29:05.394 [2024-10-14 14:42:45.916357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.394 [2024-10-14 14:42:45.916368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.394 qpair failed and we were unable to recover it. 
00:29:05.394 [2024-10-14 14:42:45.916645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.394 [2024-10-14 14:42:45.916656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.394 qpair failed and we were unable to recover it. 00:29:05.394 [2024-10-14 14:42:45.916960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.394 [2024-10-14 14:42:45.916971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.394 qpair failed and we were unable to recover it. 00:29:05.394 [2024-10-14 14:42:45.917275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.394 [2024-10-14 14:42:45.917285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.394 qpair failed and we were unable to recover it. 00:29:05.394 [2024-10-14 14:42:45.917592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.394 [2024-10-14 14:42:45.917602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.394 qpair failed and we were unable to recover it. 00:29:05.394 [2024-10-14 14:42:45.917930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.394 [2024-10-14 14:42:45.917941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.394 qpair failed and we were unable to recover it. 
00:29:05.394 [2024-10-14 14:42:45.918271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.394 [2024-10-14 14:42:45.918281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.394 qpair failed and we were unable to recover it. 00:29:05.394 [2024-10-14 14:42:45.918570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.394 [2024-10-14 14:42:45.918581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.394 qpair failed and we were unable to recover it. 00:29:05.394 [2024-10-14 14:42:45.918868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.394 [2024-10-14 14:42:45.918878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.394 qpair failed and we were unable to recover it. 00:29:05.394 [2024-10-14 14:42:45.919164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.394 [2024-10-14 14:42:45.919174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.394 qpair failed and we were unable to recover it. 00:29:05.394 [2024-10-14 14:42:45.919498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.394 [2024-10-14 14:42:45.919508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.394 qpair failed and we were unable to recover it. 
00:29:05.394 [2024-10-14 14:42:45.919813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.394 [2024-10-14 14:42:45.919823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.394 qpair failed and we were unable to recover it. 00:29:05.394 [2024-10-14 14:42:45.920127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.394 [2024-10-14 14:42:45.920137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.394 qpair failed and we were unable to recover it. 00:29:05.394 [2024-10-14 14:42:45.920434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.394 [2024-10-14 14:42:45.920446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.394 qpair failed and we were unable to recover it. 00:29:05.394 [2024-10-14 14:42:45.920828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.394 [2024-10-14 14:42:45.920839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.394 qpair failed and we were unable to recover it. 00:29:05.394 [2024-10-14 14:42:45.921029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.394 [2024-10-14 14:42:45.921039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.394 qpair failed and we were unable to recover it. 
00:29:05.394 [2024-10-14 14:42:45.921319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.394 [2024-10-14 14:42:45.921330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.394 qpair failed and we were unable to recover it. 00:29:05.394 [2024-10-14 14:42:45.921655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.394 [2024-10-14 14:42:45.921666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.394 qpair failed and we were unable to recover it. 00:29:05.394 [2024-10-14 14:42:45.921974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.394 [2024-10-14 14:42:45.921984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.394 qpair failed and we were unable to recover it. 00:29:05.394 [2024-10-14 14:42:45.922271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.394 [2024-10-14 14:42:45.922282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.394 qpair failed and we were unable to recover it. 00:29:05.394 [2024-10-14 14:42:45.922585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.394 [2024-10-14 14:42:45.922596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.394 qpair failed and we were unable to recover it. 
00:29:05.394 [2024-10-14 14:42:45.922953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.394 [2024-10-14 14:42:45.922963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.394 qpair failed and we were unable to recover it. 00:29:05.394 [2024-10-14 14:42:45.923258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.394 [2024-10-14 14:42:45.923269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.394 qpair failed and we were unable to recover it. 00:29:05.394 [2024-10-14 14:42:45.923576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.394 [2024-10-14 14:42:45.923586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.394 qpair failed and we were unable to recover it. 00:29:05.394 [2024-10-14 14:42:45.923907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.394 [2024-10-14 14:42:45.923917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.394 qpair failed and we were unable to recover it. 00:29:05.394 [2024-10-14 14:42:45.924258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.394 [2024-10-14 14:42:45.924268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.394 qpair failed and we were unable to recover it. 
00:29:05.394 [2024-10-14 14:42:45.924555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.394 [2024-10-14 14:42:45.924565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.394 qpair failed and we were unable to recover it. 00:29:05.394 [2024-10-14 14:42:45.924893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.394 [2024-10-14 14:42:45.924902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.394 qpair failed and we were unable to recover it. 00:29:05.394 [2024-10-14 14:42:45.925208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.395 [2024-10-14 14:42:45.925219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.395 qpair failed and we were unable to recover it. 00:29:05.395 [2024-10-14 14:42:45.925509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.395 [2024-10-14 14:42:45.925519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.395 qpair failed and we were unable to recover it. 00:29:05.395 [2024-10-14 14:42:45.925823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.395 [2024-10-14 14:42:45.925832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.395 qpair failed and we were unable to recover it. 
00:29:05.395 [2024-10-14 14:42:45.926123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.395 [2024-10-14 14:42:45.926134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.395 qpair failed and we were unable to recover it. 00:29:05.395 [2024-10-14 14:42:45.926248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.395 [2024-10-14 14:42:45.926257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.395 qpair failed and we were unable to recover it. 00:29:05.395 [2024-10-14 14:42:45.926566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.395 [2024-10-14 14:42:45.926576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.395 qpair failed and we were unable to recover it. 00:29:05.395 [2024-10-14 14:42:45.926881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.395 [2024-10-14 14:42:45.926891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.395 qpair failed and we were unable to recover it. 00:29:05.395 [2024-10-14 14:42:45.927197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.395 [2024-10-14 14:42:45.927208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.395 qpair failed and we were unable to recover it. 
00:29:05.395 [2024-10-14 14:42:45.927504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.395 [2024-10-14 14:42:45.927513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.395 qpair failed and we were unable to recover it. 00:29:05.395 [2024-10-14 14:42:45.927812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.395 [2024-10-14 14:42:45.927822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.395 qpair failed and we were unable to recover it. 00:29:05.395 [2024-10-14 14:42:45.928116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.395 [2024-10-14 14:42:45.928126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.395 qpair failed and we were unable to recover it. 00:29:05.395 [2024-10-14 14:42:45.928437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.395 [2024-10-14 14:42:45.928446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.395 qpair failed and we were unable to recover it. 00:29:05.395 [2024-10-14 14:42:45.928753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.395 [2024-10-14 14:42:45.928763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.395 qpair failed and we were unable to recover it. 
00:29:05.395 [2024-10-14 14:42:45.929052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.395 [2024-10-14 14:42:45.929065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.395 qpair failed and we were unable to recover it. 00:29:05.395 [2024-10-14 14:42:45.929329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.395 [2024-10-14 14:42:45.929339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.395 qpair failed and we were unable to recover it. 00:29:05.395 [2024-10-14 14:42:45.929549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.395 [2024-10-14 14:42:45.929559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.395 qpair failed and we were unable to recover it. 00:29:05.395 [2024-10-14 14:42:45.929885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.395 [2024-10-14 14:42:45.929895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.395 qpair failed and we were unable to recover it. 00:29:05.395 [2024-10-14 14:42:45.930222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.395 [2024-10-14 14:42:45.930232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.395 qpair failed and we were unable to recover it. 
00:29:05.395 [2024-10-14 14:42:45.930538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.395 [2024-10-14 14:42:45.930548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.395 qpair failed and we were unable to recover it. 00:29:05.395 [2024-10-14 14:42:45.930873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.395 [2024-10-14 14:42:45.930882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.395 qpair failed and we were unable to recover it. 00:29:05.395 [2024-10-14 14:42:45.931199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.395 [2024-10-14 14:42:45.931209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.395 qpair failed and we were unable to recover it. 00:29:05.395 [2024-10-14 14:42:45.931536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.395 [2024-10-14 14:42:45.931546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.395 qpair failed and we were unable to recover it. 00:29:05.395 [2024-10-14 14:42:45.931725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.395 [2024-10-14 14:42:45.931736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.395 qpair failed and we were unable to recover it. 
00:29:05.395 [2024-10-14 14:42:45.931925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.395 [2024-10-14 14:42:45.931935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.395 qpair failed and we were unable to recover it. 00:29:05.395 [2024-10-14 14:42:45.932264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.395 [2024-10-14 14:42:45.932274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.395 qpair failed and we were unable to recover it. 00:29:05.395 [2024-10-14 14:42:45.932583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.395 [2024-10-14 14:42:45.932593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.395 qpair failed and we were unable to recover it. 00:29:05.395 [2024-10-14 14:42:45.932899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.395 [2024-10-14 14:42:45.932911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.395 qpair failed and we were unable to recover it. 00:29:05.395 [2024-10-14 14:42:45.933222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.395 [2024-10-14 14:42:45.933232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.395 qpair failed and we were unable to recover it. 
00:29:05.395 [2024-10-14 14:42:45.933621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.395 [2024-10-14 14:42:45.933632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.395 qpair failed and we were unable to recover it. 00:29:05.395 [2024-10-14 14:42:45.933936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.395 [2024-10-14 14:42:45.933948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.395 qpair failed and we were unable to recover it. 00:29:05.395 [2024-10-14 14:42:45.934279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.395 [2024-10-14 14:42:45.934289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.395 qpair failed and we were unable to recover it. 00:29:05.395 [2024-10-14 14:42:45.934676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.395 [2024-10-14 14:42:45.934686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.395 qpair failed and we were unable to recover it. 00:29:05.395 [2024-10-14 14:42:45.934986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.395 [2024-10-14 14:42:45.934996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.395 qpair failed and we were unable to recover it. 
00:29:05.395 [2024-10-14 14:42:45.935291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.395 [2024-10-14 14:42:45.935302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.395 qpair failed and we were unable to recover it. 00:29:05.395 [2024-10-14 14:42:45.935626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.395 [2024-10-14 14:42:45.935637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.395 qpair failed and we were unable to recover it. 00:29:05.395 [2024-10-14 14:42:45.935929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.395 [2024-10-14 14:42:45.935940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.395 qpair failed and we were unable to recover it. 00:29:05.395 [2024-10-14 14:42:45.936230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.395 [2024-10-14 14:42:45.936240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.395 qpair failed and we were unable to recover it. 00:29:05.395 [2024-10-14 14:42:45.936531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.395 [2024-10-14 14:42:45.936542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.395 qpair failed and we were unable to recover it. 
00:29:05.395 [2024-10-14 14:42:45.936845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.395 [2024-10-14 14:42:45.936855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.395 qpair failed and we were unable to recover it. 00:29:05.395 [2024-10-14 14:42:45.937159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.395 [2024-10-14 14:42:45.937170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.395 qpair failed and we were unable to recover it. 00:29:05.395 [2024-10-14 14:42:45.937484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.395 [2024-10-14 14:42:45.937494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.395 qpair failed and we were unable to recover it. 00:29:05.396 [2024-10-14 14:42:45.937783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.396 [2024-10-14 14:42:45.937795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.396 qpair failed and we were unable to recover it. 00:29:05.396 [2024-10-14 14:42:45.938106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.396 [2024-10-14 14:42:45.938116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.396 qpair failed and we were unable to recover it. 
00:29:05.396 [2024-10-14 14:42:45.938413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.396 [2024-10-14 14:42:45.938423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.396 qpair failed and we were unable to recover it. 00:29:05.396 [2024-10-14 14:42:45.938696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.396 [2024-10-14 14:42:45.938706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.396 qpair failed and we were unable to recover it. 00:29:05.396 [2024-10-14 14:42:45.938877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.396 [2024-10-14 14:42:45.938888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.396 qpair failed and we were unable to recover it. 00:29:05.396 [2024-10-14 14:42:45.939209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.396 [2024-10-14 14:42:45.939219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.396 qpair failed and we were unable to recover it. 00:29:05.396 [2024-10-14 14:42:45.939544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.396 [2024-10-14 14:42:45.939554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.396 qpair failed and we were unable to recover it. 
00:29:05.396 [2024-10-14 14:42:45.939816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.396 [2024-10-14 14:42:45.939827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.396 qpair failed and we were unable to recover it. 00:29:05.396 [2024-10-14 14:42:45.940146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.396 [2024-10-14 14:42:45.940157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.396 qpair failed and we were unable to recover it. 00:29:05.396 [2024-10-14 14:42:45.940466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.396 [2024-10-14 14:42:45.940476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.396 qpair failed and we were unable to recover it. 00:29:05.396 [2024-10-14 14:42:45.940648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.396 [2024-10-14 14:42:45.940658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.396 qpair failed and we were unable to recover it. 00:29:05.396 [2024-10-14 14:42:45.940951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.396 [2024-10-14 14:42:45.940961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.396 qpair failed and we were unable to recover it. 
00:29:05.396 [2024-10-14 14:42:45.941274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.396 [2024-10-14 14:42:45.941286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.396 qpair failed and we were unable to recover it. 00:29:05.396 [2024-10-14 14:42:45.941606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.396 [2024-10-14 14:42:45.941616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.396 qpair failed and we were unable to recover it. 00:29:05.396 [2024-10-14 14:42:45.941957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.396 [2024-10-14 14:42:45.941967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.396 qpair failed and we were unable to recover it. 00:29:05.396 [2024-10-14 14:42:45.942298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.396 [2024-10-14 14:42:45.942308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.396 qpair failed and we were unable to recover it. 00:29:05.396 [2024-10-14 14:42:45.942593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.396 [2024-10-14 14:42:45.942604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.396 qpair failed and we were unable to recover it. 
00:29:05.396 [2024-10-14 14:42:45.942917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.396 [2024-10-14 14:42:45.942927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.396 qpair failed and we were unable to recover it. 00:29:05.396 [2024-10-14 14:42:45.943219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.396 [2024-10-14 14:42:45.943229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.396 qpair failed and we were unable to recover it. 00:29:05.396 [2024-10-14 14:42:45.943550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.396 [2024-10-14 14:42:45.943560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.396 qpair failed and we were unable to recover it. 00:29:05.396 [2024-10-14 14:42:45.943843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.396 [2024-10-14 14:42:45.943861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.396 qpair failed and we were unable to recover it. 00:29:05.396 [2024-10-14 14:42:45.944188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.396 [2024-10-14 14:42:45.944198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.396 qpair failed and we were unable to recover it. 
00:29:05.396 [2024-10-14 14:42:45.944502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.396 [2024-10-14 14:42:45.944513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.396 qpair failed and we were unable to recover it. 00:29:05.396 [2024-10-14 14:42:45.944819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.396 [2024-10-14 14:42:45.944830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.396 qpair failed and we were unable to recover it. 00:29:05.396 [2024-10-14 14:42:45.945149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.396 [2024-10-14 14:42:45.945159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.396 qpair failed and we were unable to recover it. 00:29:05.396 [2024-10-14 14:42:45.945514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.396 [2024-10-14 14:42:45.945524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.396 qpair failed and we were unable to recover it. 00:29:05.396 [2024-10-14 14:42:45.945818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.396 [2024-10-14 14:42:45.945827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.396 qpair failed and we were unable to recover it. 
00:29:05.396 [2024-10-14 14:42:45.946029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.396 [2024-10-14 14:42:45.946039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.396 qpair failed and we were unable to recover it. 00:29:05.396 [2024-10-14 14:42:45.946350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.396 [2024-10-14 14:42:45.946360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.396 qpair failed and we were unable to recover it. 00:29:05.396 [2024-10-14 14:42:45.946552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.396 [2024-10-14 14:42:45.946563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.396 qpair failed and we were unable to recover it. 00:29:05.396 [2024-10-14 14:42:45.946876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.396 [2024-10-14 14:42:45.946886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.396 qpair failed and we were unable to recover it. 00:29:05.396 [2024-10-14 14:42:45.947935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.396 [2024-10-14 14:42:45.947959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.396 qpair failed and we were unable to recover it. 
00:29:05.396 [2024-10-14 14:42:45.948331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.396 [2024-10-14 14:42:45.948342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.396 qpair failed and we were unable to recover it. 00:29:05.396 [2024-10-14 14:42:45.948637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.396 [2024-10-14 14:42:45.948647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.396 qpair failed and we were unable to recover it. 00:29:05.396 [2024-10-14 14:42:45.948957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.396 [2024-10-14 14:42:45.948967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.396 qpair failed and we were unable to recover it. 00:29:05.396 [2024-10-14 14:42:45.949257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.396 [2024-10-14 14:42:45.949268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.396 qpair failed and we were unable to recover it. 00:29:05.396 [2024-10-14 14:42:45.949558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.396 [2024-10-14 14:42:45.949568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.396 qpair failed and we were unable to recover it. 
00:29:05.396 [2024-10-14 14:42:45.949848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.396 [2024-10-14 14:42:45.949858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.396 qpair failed and we were unable to recover it. 00:29:05.396 [2024-10-14 14:42:45.950161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.396 [2024-10-14 14:42:45.950171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.396 qpair failed and we were unable to recover it. 00:29:05.396 [2024-10-14 14:42:45.950484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.396 [2024-10-14 14:42:45.950494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.396 qpair failed and we were unable to recover it. 00:29:05.396 [2024-10-14 14:42:45.950789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.396 [2024-10-14 14:42:45.950798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.397 qpair failed and we were unable to recover it. 00:29:05.397 [2024-10-14 14:42:45.950993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.397 [2024-10-14 14:42:45.951003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.397 qpair failed and we were unable to recover it. 
00:29:05.397 [2024-10-14 14:42:45.951191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.397 [2024-10-14 14:42:45.951201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.397 qpair failed and we were unable to recover it. 00:29:05.397 [2024-10-14 14:42:45.951470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.397 [2024-10-14 14:42:45.951480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.397 qpair failed and we were unable to recover it. 00:29:05.397 [2024-10-14 14:42:45.951766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.397 [2024-10-14 14:42:45.951775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.397 qpair failed and we were unable to recover it. 00:29:05.397 [2024-10-14 14:42:45.952099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.397 [2024-10-14 14:42:45.952111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.397 qpair failed and we were unable to recover it. 00:29:05.397 [2024-10-14 14:42:45.952438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.397 [2024-10-14 14:42:45.952449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.397 qpair failed and we were unable to recover it. 
00:29:05.397 [2024-10-14 14:42:45.952730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.397 [2024-10-14 14:42:45.952740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.397 qpair failed and we were unable to recover it. 00:29:05.397 [2024-10-14 14:42:45.953022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.397 [2024-10-14 14:42:45.953032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.397 qpair failed and we were unable to recover it. 00:29:05.397 [2024-10-14 14:42:45.953337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.397 [2024-10-14 14:42:45.953347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.397 qpair failed and we were unable to recover it. 00:29:05.397 [2024-10-14 14:42:45.953664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.397 [2024-10-14 14:42:45.953674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.397 qpair failed and we were unable to recover it. 00:29:05.397 [2024-10-14 14:42:45.953981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.397 [2024-10-14 14:42:45.953991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.397 qpair failed and we were unable to recover it. 
00:29:05.397 [2024-10-14 14:42:45.954343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.397 [2024-10-14 14:42:45.954354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.397 qpair failed and we were unable to recover it. 00:29:05.397 [2024-10-14 14:42:45.954638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.397 [2024-10-14 14:42:45.954650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.397 qpair failed and we were unable to recover it. 00:29:05.397 [2024-10-14 14:42:45.954954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.397 [2024-10-14 14:42:45.954964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.397 qpair failed and we were unable to recover it. 00:29:05.397 [2024-10-14 14:42:45.955282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.397 [2024-10-14 14:42:45.955292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.397 qpair failed and we were unable to recover it. 00:29:05.397 [2024-10-14 14:42:45.955604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.397 [2024-10-14 14:42:45.955613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.397 qpair failed and we were unable to recover it. 
00:29:05.397 [2024-10-14 14:42:45.955918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.397 [2024-10-14 14:42:45.955928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.397 qpair failed and we were unable to recover it. 00:29:05.397 [2024-10-14 14:42:45.956227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.397 [2024-10-14 14:42:45.956237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.397 qpair failed and we were unable to recover it. 00:29:05.397 [2024-10-14 14:42:45.956436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.397 [2024-10-14 14:42:45.956447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.397 qpair failed and we were unable to recover it. 00:29:05.397 [2024-10-14 14:42:45.956645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.397 [2024-10-14 14:42:45.956655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.397 qpair failed and we were unable to recover it. 00:29:05.397 [2024-10-14 14:42:45.956920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.397 [2024-10-14 14:42:45.956930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.397 qpair failed and we were unable to recover it. 
00:29:05.397 [2024-10-14 14:42:45.957254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.397 [2024-10-14 14:42:45.957265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.397 qpair failed and we were unable to recover it. 00:29:05.397 [2024-10-14 14:42:45.957452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.397 [2024-10-14 14:42:45.957462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.397 qpair failed and we were unable to recover it. 00:29:05.397 [2024-10-14 14:42:45.957749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.397 [2024-10-14 14:42:45.957759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.397 qpair failed and we were unable to recover it. 00:29:05.397 [2024-10-14 14:42:45.958067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.397 [2024-10-14 14:42:45.958077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.397 qpair failed and we were unable to recover it. 00:29:05.397 [2024-10-14 14:42:45.958376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.397 [2024-10-14 14:42:45.958385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.397 qpair failed and we were unable to recover it. 
00:29:05.397 [2024-10-14 14:42:45.958769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.397 [2024-10-14 14:42:45.958780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.397 qpair failed and we were unable to recover it. 00:29:05.397 [2024-10-14 14:42:45.959125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.397 [2024-10-14 14:42:45.959136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.397 qpair failed and we were unable to recover it. 00:29:05.397 [2024-10-14 14:42:45.959463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.397 [2024-10-14 14:42:45.959473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.397 qpair failed and we were unable to recover it. 00:29:05.397 [2024-10-14 14:42:45.959785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.397 [2024-10-14 14:42:45.959794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.397 qpair failed and we were unable to recover it. 00:29:05.397 [2024-10-14 14:42:45.960101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.397 [2024-10-14 14:42:45.960111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.397 qpair failed and we were unable to recover it. 
00:29:05.397 [2024-10-14 14:42:45.960404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.397 [2024-10-14 14:42:45.960422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.397 qpair failed and we were unable to recover it. 00:29:05.397 [2024-10-14 14:42:45.960746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.397 [2024-10-14 14:42:45.960756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.397 qpair failed and we were unable to recover it. 00:29:05.397 [2024-10-14 14:42:45.961060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.397 [2024-10-14 14:42:45.961074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.397 qpair failed and we were unable to recover it. 00:29:05.397 [2024-10-14 14:42:45.961361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.397 [2024-10-14 14:42:45.961371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.397 qpair failed and we were unable to recover it. 00:29:05.397 [2024-10-14 14:42:45.961651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.397 [2024-10-14 14:42:45.961661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.397 qpair failed and we were unable to recover it. 
00:29:05.397 [2024-10-14 14:42:45.961979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.397 [2024-10-14 14:42:45.961989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.397 qpair failed and we were unable to recover it. 00:29:05.397 [2024-10-14 14:42:45.962293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.397 [2024-10-14 14:42:45.962304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.398 qpair failed and we were unable to recover it. 00:29:05.398 [2024-10-14 14:42:45.962602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.398 [2024-10-14 14:42:45.962614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.398 qpair failed and we were unable to recover it. 00:29:05.398 [2024-10-14 14:42:45.962950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.398 [2024-10-14 14:42:45.962961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.398 qpair failed and we were unable to recover it. 00:29:05.398 [2024-10-14 14:42:45.963268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.398 [2024-10-14 14:42:45.963279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.398 qpair failed and we were unable to recover it. 
00:29:05.398 [2024-10-14 14:42:45.963461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.398 [2024-10-14 14:42:45.963472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.398 qpair failed and we were unable to recover it. 00:29:05.398 [2024-10-14 14:42:45.963783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.398 [2024-10-14 14:42:45.963793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.398 qpair failed and we were unable to recover it. 00:29:05.398 [2024-10-14 14:42:45.964073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.398 [2024-10-14 14:42:45.964084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.398 qpair failed and we were unable to recover it. 00:29:05.398 [2024-10-14 14:42:45.964364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.398 [2024-10-14 14:42:45.964374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.398 qpair failed and we were unable to recover it. 00:29:05.398 [2024-10-14 14:42:45.964655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.398 [2024-10-14 14:42:45.964674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.398 qpair failed and we were unable to recover it. 
00:29:05.398 [2024-10-14 14:42:45.965042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.398 [2024-10-14 14:42:45.965052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.398 qpair failed and we were unable to recover it.
00:29:05.398 [2024-10-14 14:42:45.965332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.398 [2024-10-14 14:42:45.965343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.398 qpair failed and we were unable to recover it.
00:29:05.398 [2024-10-14 14:42:45.965624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.398 [2024-10-14 14:42:45.965633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.398 qpair failed and we were unable to recover it.
00:29:05.398 [2024-10-14 14:42:45.965937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.398 [2024-10-14 14:42:45.965947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.398 qpair failed and we were unable to recover it.
00:29:05.398 [2024-10-14 14:42:45.966276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.398 [2024-10-14 14:42:45.966286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.398 qpair failed and we were unable to recover it.
00:29:05.398 [2024-10-14 14:42:45.966566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.398 [2024-10-14 14:42:45.966576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.398 qpair failed and we were unable to recover it.
00:29:05.398 [2024-10-14 14:42:45.966877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.398 [2024-10-14 14:42:45.966887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.398 qpair failed and we were unable to recover it.
00:29:05.398 [2024-10-14 14:42:45.967190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.398 [2024-10-14 14:42:45.967200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.398 qpair failed and we were unable to recover it.
00:29:05.398 [2024-10-14 14:42:45.967499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.398 [2024-10-14 14:42:45.967510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.398 qpair failed and we were unable to recover it.
00:29:05.398 [2024-10-14 14:42:45.967795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.398 [2024-10-14 14:42:45.967805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.398 qpair failed and we were unable to recover it.
00:29:05.398 [2024-10-14 14:42:45.968073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.398 [2024-10-14 14:42:45.968084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.398 qpair failed and we were unable to recover it.
00:29:05.398 [2024-10-14 14:42:45.968403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.398 [2024-10-14 14:42:45.968419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.398 qpair failed and we were unable to recover it.
00:29:05.398 [2024-10-14 14:42:45.968751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.398 [2024-10-14 14:42:45.968761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.398 qpair failed and we were unable to recover it.
00:29:05.398 [2024-10-14 14:42:45.969051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.398 [2024-10-14 14:42:45.969061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.398 qpair failed and we were unable to recover it.
00:29:05.398 [2024-10-14 14:42:45.969334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.398 [2024-10-14 14:42:45.969345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.398 qpair failed and we were unable to recover it.
00:29:05.398 [2024-10-14 14:42:45.969556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.398 [2024-10-14 14:42:45.969566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.398 qpair failed and we were unable to recover it.
00:29:05.398 [2024-10-14 14:42:45.969890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.398 [2024-10-14 14:42:45.969900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.398 qpair failed and we were unable to recover it.
00:29:05.398 [2024-10-14 14:42:45.970185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.398 [2024-10-14 14:42:45.970195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.398 qpair failed and we were unable to recover it.
00:29:05.398 [2024-10-14 14:42:45.970510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.398 [2024-10-14 14:42:45.970520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.398 qpair failed and we were unable to recover it.
00:29:05.398 [2024-10-14 14:42:45.970700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.398 [2024-10-14 14:42:45.970709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.398 qpair failed and we were unable to recover it.
00:29:05.398 [2024-10-14 14:42:45.970890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.398 [2024-10-14 14:42:45.970900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.398 qpair failed and we were unable to recover it.
00:29:05.398 [2024-10-14 14:42:45.971229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.398 [2024-10-14 14:42:45.971240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.398 qpair failed and we were unable to recover it.
00:29:05.398 [2024-10-14 14:42:45.971527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.398 [2024-10-14 14:42:45.971538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.398 qpair failed and we were unable to recover it.
00:29:05.398 [2024-10-14 14:42:45.971721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.398 [2024-10-14 14:42:45.971731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.398 qpair failed and we were unable to recover it.
00:29:05.398 [2024-10-14 14:42:45.972053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.398 [2024-10-14 14:42:45.972066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.398 qpair failed and we were unable to recover it.
00:29:05.398 [2024-10-14 14:42:45.972366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.398 [2024-10-14 14:42:45.972376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.398 qpair failed and we were unable to recover it.
00:29:05.398 [2024-10-14 14:42:45.972677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.398 [2024-10-14 14:42:45.972686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.398 qpair failed and we were unable to recover it.
00:29:05.398 [2024-10-14 14:42:45.972989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.398 [2024-10-14 14:42:45.972998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.398 qpair failed and we were unable to recover it.
00:29:05.398 [2024-10-14 14:42:45.973289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.398 [2024-10-14 14:42:45.973300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.398 qpair failed and we were unable to recover it.
00:29:05.398 [2024-10-14 14:42:45.973584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.398 [2024-10-14 14:42:45.973593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.398 qpair failed and we were unable to recover it.
00:29:05.398 [2024-10-14 14:42:45.973906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.399 [2024-10-14 14:42:45.973916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.399 qpair failed and we were unable to recover it.
00:29:05.399 [2024-10-14 14:42:45.974227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.399 [2024-10-14 14:42:45.974237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.399 qpair failed and we were unable to recover it.
00:29:05.399 [2024-10-14 14:42:45.974545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.399 [2024-10-14 14:42:45.974555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.399 qpair failed and we were unable to recover it.
00:29:05.399 [2024-10-14 14:42:45.974837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.399 [2024-10-14 14:42:45.974846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.399 qpair failed and we were unable to recover it.
00:29:05.399 [2024-10-14 14:42:45.975035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.399 [2024-10-14 14:42:45.975048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.399 qpair failed and we were unable to recover it.
00:29:05.399 [2024-10-14 14:42:45.975299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.399 [2024-10-14 14:42:45.975310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.399 qpair failed and we were unable to recover it.
00:29:05.399 [2024-10-14 14:42:45.975644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.399 [2024-10-14 14:42:45.975654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.399 qpair failed and we were unable to recover it.
00:29:05.399 [2024-10-14 14:42:45.975859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.399 [2024-10-14 14:42:45.975869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.399 qpair failed and we were unable to recover it.
00:29:05.399 [2024-10-14 14:42:45.976138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.399 [2024-10-14 14:42:45.976149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.399 qpair failed and we were unable to recover it.
00:29:05.399 [2024-10-14 14:42:45.976359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.399 [2024-10-14 14:42:45.976369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.399 qpair failed and we were unable to recover it.
00:29:05.399 [2024-10-14 14:42:45.976715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.399 [2024-10-14 14:42:45.976725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.399 qpair failed and we were unable to recover it.
00:29:05.399 [2024-10-14 14:42:45.977050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.399 [2024-10-14 14:42:45.977060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.399 qpair failed and we were unable to recover it.
00:29:05.399 [2024-10-14 14:42:45.977383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.399 [2024-10-14 14:42:45.977393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.399 qpair failed and we were unable to recover it.
00:29:05.399 [2024-10-14 14:42:45.977699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.399 [2024-10-14 14:42:45.977708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.399 qpair failed and we were unable to recover it.
00:29:05.399 [2024-10-14 14:42:45.977884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.399 [2024-10-14 14:42:45.977893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.399 qpair failed and we were unable to recover it.
00:29:05.399 [2024-10-14 14:42:45.978066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.399 [2024-10-14 14:42:45.978077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.399 qpair failed and we were unable to recover it.
00:29:05.399 [2024-10-14 14:42:45.978439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.399 [2024-10-14 14:42:45.978449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.399 qpair failed and we were unable to recover it.
00:29:05.399 [2024-10-14 14:42:45.978765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.399 [2024-10-14 14:42:45.978774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.399 qpair failed and we were unable to recover it.
00:29:05.399 [2024-10-14 14:42:45.979106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.399 [2024-10-14 14:42:45.979116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.399 qpair failed and we were unable to recover it.
00:29:05.399 [2024-10-14 14:42:45.979322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.399 [2024-10-14 14:42:45.979332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.399 qpair failed and we were unable to recover it.
00:29:05.399 [2024-10-14 14:42:45.979664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.399 [2024-10-14 14:42:45.979675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.399 qpair failed and we were unable to recover it.
00:29:05.399 [2024-10-14 14:42:45.979888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.399 [2024-10-14 14:42:45.979898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.399 qpair failed and we were unable to recover it.
00:29:05.399 [2024-10-14 14:42:45.980218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.399 [2024-10-14 14:42:45.980229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.399 qpair failed and we were unable to recover it.
00:29:05.399 [2024-10-14 14:42:45.980538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.399 [2024-10-14 14:42:45.980547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.399 qpair failed and we were unable to recover it.
00:29:05.399 [2024-10-14 14:42:45.980747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.399 [2024-10-14 14:42:45.980757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.399 qpair failed and we were unable to recover it.
00:29:05.399 [2024-10-14 14:42:45.981032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.399 [2024-10-14 14:42:45.981042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.399 qpair failed and we were unable to recover it.
00:29:05.399 [2024-10-14 14:42:45.981383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.399 [2024-10-14 14:42:45.981394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.399 qpair failed and we were unable to recover it.
00:29:05.399 [2024-10-14 14:42:45.981727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.399 [2024-10-14 14:42:45.981738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.399 qpair failed and we were unable to recover it.
00:29:05.399 [2024-10-14 14:42:45.982042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.399 [2024-10-14 14:42:45.982052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.399 qpair failed and we were unable to recover it.
00:29:05.399 [2024-10-14 14:42:45.982266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.399 [2024-10-14 14:42:45.982275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.399 qpair failed and we were unable to recover it.
00:29:05.399 [2024-10-14 14:42:45.982583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.399 [2024-10-14 14:42:45.982594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.399 qpair failed and we were unable to recover it.
00:29:05.399 [2024-10-14 14:42:45.982784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.399 [2024-10-14 14:42:45.982794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.399 qpair failed and we were unable to recover it.
00:29:05.399 [2024-10-14 14:42:45.983723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.399 [2024-10-14 14:42:45.983744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.399 qpair failed and we were unable to recover it.
00:29:05.399 [2024-10-14 14:42:45.984024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.399 [2024-10-14 14:42:45.984035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.399 qpair failed and we were unable to recover it.
00:29:05.399 [2024-10-14 14:42:45.984336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.399 [2024-10-14 14:42:45.984347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.399 qpair failed and we were unable to recover it.
00:29:05.399 [2024-10-14 14:42:45.984529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.399 [2024-10-14 14:42:45.984540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.399 qpair failed and we were unable to recover it.
00:29:05.399 [2024-10-14 14:42:45.984876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.399 [2024-10-14 14:42:45.984886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.399 qpair failed and we were unable to recover it.
00:29:05.399 [2024-10-14 14:42:45.985294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.399 [2024-10-14 14:42:45.985305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.399 qpair failed and we were unable to recover it.
00:29:05.399 [2024-10-14 14:42:45.985615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.399 [2024-10-14 14:42:45.985625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.399 qpair failed and we were unable to recover it.
00:29:05.399 [2024-10-14 14:42:45.985910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.399 [2024-10-14 14:42:45.985920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.399 qpair failed and we were unable to recover it.
00:29:05.399 [2024-10-14 14:42:45.986227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.399 [2024-10-14 14:42:45.986238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.399 qpair failed and we were unable to recover it.
00:29:05.399 [2024-10-14 14:42:45.986438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.400 [2024-10-14 14:42:45.986448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.400 qpair failed and we were unable to recover it.
00:29:05.400 [2024-10-14 14:42:45.986711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.400 [2024-10-14 14:42:45.986721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.400 qpair failed and we were unable to recover it.
00:29:05.400 [2024-10-14 14:42:45.987038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.400 [2024-10-14 14:42:45.987047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.400 qpair failed and we were unable to recover it.
00:29:05.400 [2024-10-14 14:42:45.987367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.400 [2024-10-14 14:42:45.987378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.400 qpair failed and we were unable to recover it.
00:29:05.400 [2024-10-14 14:42:45.987715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.400 [2024-10-14 14:42:45.987727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.400 qpair failed and we were unable to recover it.
00:29:05.400 [2024-10-14 14:42:45.988025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.400 [2024-10-14 14:42:45.988035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.400 qpair failed and we were unable to recover it.
00:29:05.400 [2024-10-14 14:42:45.988349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.400 [2024-10-14 14:42:45.988359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.400 qpair failed and we were unable to recover it.
00:29:05.400 [2024-10-14 14:42:45.988663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.400 [2024-10-14 14:42:45.988673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.400 qpair failed and we were unable to recover it.
00:29:05.400 [2024-10-14 14:42:45.988980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.400 [2024-10-14 14:42:45.988990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.400 qpair failed and we were unable to recover it.
00:29:05.400 [2024-10-14 14:42:45.989388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.400 [2024-10-14 14:42:45.989399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.400 qpair failed and we were unable to recover it.
00:29:05.400 [2024-10-14 14:42:45.989716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.400 [2024-10-14 14:42:45.989727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.400 qpair failed and we were unable to recover it.
00:29:05.400 [2024-10-14 14:42:45.989989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.400 [2024-10-14 14:42:45.990001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.400 qpair failed and we were unable to recover it.
00:29:05.400 [2024-10-14 14:42:45.990277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.400 [2024-10-14 14:42:45.990288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.400 qpair failed and we were unable to recover it.
00:29:05.400 [2024-10-14 14:42:45.990589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.400 [2024-10-14 14:42:45.990600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.400 qpair failed and we were unable to recover it.
00:29:05.400 [2024-10-14 14:42:45.990902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.400 [2024-10-14 14:42:45.990913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.400 qpair failed and we were unable to recover it.
00:29:05.400 [2024-10-14 14:42:45.991259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.400 [2024-10-14 14:42:45.991271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.400 qpair failed and we were unable to recover it.
00:29:05.400 [2024-10-14 14:42:45.991548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.400 [2024-10-14 14:42:45.991558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.400 qpair failed and we were unable to recover it.
00:29:05.400 [2024-10-14 14:42:45.991833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.400 [2024-10-14 14:42:45.991843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.400 qpair failed and we were unable to recover it.
00:29:05.400 [2024-10-14 14:42:45.992129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.400 [2024-10-14 14:42:45.992140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.400 qpair failed and we were unable to recover it.
00:29:05.400 [2024-10-14 14:42:45.992447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.400 [2024-10-14 14:42:45.992457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.400 qpair failed and we were unable to recover it.
00:29:05.400 [2024-10-14 14:42:45.992778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.400 [2024-10-14 14:42:45.992788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.400 qpair failed and we were unable to recover it.
00:29:05.400 [2024-10-14 14:42:45.993106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.400 [2024-10-14 14:42:45.993116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.400 qpair failed and we were unable to recover it.
00:29:05.400 [2024-10-14 14:42:45.993425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.400 [2024-10-14 14:42:45.993434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.400 qpair failed and we were unable to recover it.
00:29:05.400 [2024-10-14 14:42:45.993715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.400 [2024-10-14 14:42:45.993726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.400 qpair failed and we were unable to recover it.
00:29:05.400 [2024-10-14 14:42:45.994024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.400 [2024-10-14 14:42:45.994035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.400 qpair failed and we were unable to recover it.
00:29:05.400 [2024-10-14 14:42:45.994346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.400 [2024-10-14 14:42:45.994357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.400 qpair failed and we were unable to recover it.
00:29:05.400 [2024-10-14 14:42:45.994647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.400 [2024-10-14 14:42:45.994657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.400 qpair failed and we were unable to recover it.
00:29:05.400 [2024-10-14 14:42:45.995044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.400 [2024-10-14 14:42:45.995054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.400 qpair failed and we were unable to recover it.
00:29:05.400 [2024-10-14 14:42:45.995361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.400 [2024-10-14 14:42:45.995372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.400 qpair failed and we were unable to recover it.
00:29:05.400 [2024-10-14 14:42:45.995698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.400 [2024-10-14 14:42:45.995707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.400 qpair failed and we were unable to recover it.
00:29:05.400 [2024-10-14 14:42:45.995908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.400 [2024-10-14 14:42:45.995918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.400 qpair failed and we were unable to recover it.
00:29:05.400 [2024-10-14 14:42:45.996204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.400 [2024-10-14 14:42:45.996216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.400 qpair failed and we were unable to recover it.
00:29:05.400 [2024-10-14 14:42:45.996508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.400 [2024-10-14 14:42:45.996518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.400 qpair failed and we were unable to recover it.
00:29:05.400 [2024-10-14 14:42:45.996862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.400 [2024-10-14 14:42:45.996872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.400 qpair failed and we were unable to recover it.
00:29:05.400 [2024-10-14 14:42:45.997163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.400 [2024-10-14 14:42:45.997173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.400 qpair failed and we were unable to recover it.
00:29:05.400 [2024-10-14 14:42:45.997393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.400 [2024-10-14 14:42:45.997403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.400 qpair failed and we were unable to recover it.
00:29:05.400 [2024-10-14 14:42:45.997701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.400 [2024-10-14 14:42:45.997711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.400 qpair failed and we were unable to recover it.
00:29:05.400 [2024-10-14 14:42:45.998020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.400 [2024-10-14 14:42:45.998030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.400 qpair failed and we were unable to recover it.
00:29:05.400 [2024-10-14 14:42:45.998332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.400 [2024-10-14 14:42:45.998343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.400 qpair failed and we were unable to recover it.
00:29:05.400 [2024-10-14 14:42:45.998651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.400 [2024-10-14 14:42:45.998662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.400 qpair failed and we were unable to recover it.
00:29:05.400 [2024-10-14 14:42:45.998965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.400 [2024-10-14 14:42:45.998976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.400 qpair failed and we were unable to recover it.
00:29:05.400 [2024-10-14 14:42:45.999293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.401 [2024-10-14 14:42:45.999303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.401 qpair failed and we were unable to recover it.
00:29:05.401 [2024-10-14 14:42:45.999590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.401 [2024-10-14 14:42:45.999600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.401 qpair failed and we were unable to recover it.
00:29:05.401 [2024-10-14 14:42:45.999766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.401 [2024-10-14 14:42:45.999775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.401 qpair failed and we were unable to recover it.
00:29:05.401 [2024-10-14 14:42:46.000079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.401 [2024-10-14 14:42:46.000091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.401 qpair failed and we were unable to recover it.
00:29:05.401 [2024-10-14 14:42:46.000451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.401 [2024-10-14 14:42:46.000462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.401 qpair failed and we were unable to recover it.
00:29:05.401 [2024-10-14 14:42:46.000754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.401 [2024-10-14 14:42:46.000764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.401 qpair failed and we were unable to recover it.
00:29:05.401 [2024-10-14 14:42:46.001070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.401 [2024-10-14 14:42:46.001081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.401 qpair failed and we were unable to recover it.
00:29:05.401 [2024-10-14 14:42:46.001453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.401 [2024-10-14 14:42:46.001464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.401 qpair failed and we were unable to recover it.
00:29:05.401 [2024-10-14 14:42:46.001762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.401 [2024-10-14 14:42:46.001772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.401 qpair failed and we were unable to recover it.
00:29:05.401 [2024-10-14 14:42:46.002142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.401 [2024-10-14 14:42:46.002152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.401 qpair failed and we were unable to recover it.
00:29:05.401 [2024-10-14 14:42:46.002490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.401 [2024-10-14 14:42:46.002500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.401 qpair failed and we were unable to recover it.
00:29:05.401 [2024-10-14 14:42:46.002807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.401 [2024-10-14 14:42:46.002817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.401 qpair failed and we were unable to recover it.
00:29:05.401 [2024-10-14 14:42:46.003129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.401 [2024-10-14 14:42:46.003140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.401 qpair failed and we were unable to recover it.
00:29:05.401 [2024-10-14 14:42:46.003456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.401 [2024-10-14 14:42:46.003466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.401 qpair failed and we were unable to recover it.
00:29:05.401 [2024-10-14 14:42:46.003641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.401 [2024-10-14 14:42:46.003652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.401 qpair failed and we were unable to recover it.
00:29:05.401 [2024-10-14 14:42:46.003967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.401 [2024-10-14 14:42:46.003977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.401 qpair failed and we were unable to recover it.
00:29:05.401 [2024-10-14 14:42:46.004184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.401 [2024-10-14 14:42:46.004194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.401 qpair failed and we were unable to recover it.
00:29:05.401 [2024-10-14 14:42:46.004512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.401 [2024-10-14 14:42:46.004523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.401 qpair failed and we were unable to recover it.
00:29:05.401 [2024-10-14 14:42:46.004813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.401 [2024-10-14 14:42:46.004823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.401 qpair failed and we were unable to recover it.
00:29:05.401 [2024-10-14 14:42:46.005146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.401 [2024-10-14 14:42:46.005157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.401 qpair failed and we were unable to recover it.
00:29:05.401 [2024-10-14 14:42:46.005490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.401 [2024-10-14 14:42:46.005500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.401 qpair failed and we were unable to recover it.
00:29:05.401 [2024-10-14 14:42:46.005780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.401 [2024-10-14 14:42:46.005790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.401 qpair failed and we were unable to recover it.
00:29:05.401 [2024-10-14 14:42:46.005986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.401 [2024-10-14 14:42:46.005997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.401 qpair failed and we were unable to recover it.
00:29:05.401 [2024-10-14 14:42:46.006200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.401 [2024-10-14 14:42:46.006211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.401 qpair failed and we were unable to recover it.
00:29:05.401 [2024-10-14 14:42:46.006447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.401 [2024-10-14 14:42:46.006457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.401 qpair failed and we were unable to recover it.
00:29:05.401 [2024-10-14 14:42:46.006738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.401 [2024-10-14 14:42:46.006748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.401 qpair failed and we were unable to recover it.
00:29:05.401 [2024-10-14 14:42:46.006939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.401 [2024-10-14 14:42:46.006949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.401 qpair failed and we were unable to recover it.
00:29:05.401 [2024-10-14 14:42:46.007221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.401 [2024-10-14 14:42:46.007232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.401 qpair failed and we were unable to recover it.
00:29:05.401 [2024-10-14 14:42:46.007528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.401 [2024-10-14 14:42:46.007537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.401 qpair failed and we were unable to recover it.
00:29:05.401 [2024-10-14 14:42:46.007818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.401 [2024-10-14 14:42:46.007828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.401 qpair failed and we were unable to recover it.
00:29:05.401 [2024-10-14 14:42:46.008027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.401 [2024-10-14 14:42:46.008037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.401 qpair failed and we were unable to recover it.
00:29:05.401 [2024-10-14 14:42:46.008355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.401 [2024-10-14 14:42:46.008368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.401 qpair failed and we were unable to recover it.
00:29:05.401 [2024-10-14 14:42:46.008646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.401 [2024-10-14 14:42:46.008656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.401 qpair failed and we were unable to recover it.
00:29:05.401 [2024-10-14 14:42:46.008944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.401 [2024-10-14 14:42:46.008954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.401 qpair failed and we were unable to recover it.
00:29:05.401 [2024-10-14 14:42:46.009240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.401 [2024-10-14 14:42:46.009250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.401 qpair failed and we were unable to recover it.
00:29:05.401 [2024-10-14 14:42:46.009563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.401 [2024-10-14 14:42:46.009574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.401 qpair failed and we were unable to recover it.
00:29:05.401 [2024-10-14 14:42:46.009752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.401 [2024-10-14 14:42:46.009762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.401 qpair failed and we were unable to recover it.
00:29:05.401 [2024-10-14 14:42:46.010131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.401 [2024-10-14 14:42:46.010141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.401 qpair failed and we were unable to recover it.
00:29:05.401 [2024-10-14 14:42:46.010471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.401 [2024-10-14 14:42:46.010481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.401 qpair failed and we were unable to recover it.
00:29:05.401 [2024-10-14 14:42:46.010785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.401 [2024-10-14 14:42:46.010796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.401 qpair failed and we were unable to recover it.
00:29:05.401 [2024-10-14 14:42:46.011089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.401 [2024-10-14 14:42:46.011099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.401 qpair failed and we were unable to recover it.
00:29:05.401 [2024-10-14 14:42:46.011398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.402 [2024-10-14 14:42:46.011408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.402 qpair failed and we were unable to recover it.
00:29:05.402 [2024-10-14 14:42:46.011701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.402 [2024-10-14 14:42:46.011711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.402 qpair failed and we were unable to recover it.
00:29:05.402 [2024-10-14 14:42:46.012605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.402 [2024-10-14 14:42:46.012626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.402 qpair failed and we were unable to recover it.
00:29:05.402 [2024-10-14 14:42:46.012958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.402 [2024-10-14 14:42:46.012971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.402 qpair failed and we were unable to recover it.
00:29:05.402 [2024-10-14 14:42:46.013962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.402 [2024-10-14 14:42:46.013985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.402 qpair failed and we were unable to recover it.
00:29:05.402 [2024-10-14 14:42:46.014277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.402 [2024-10-14 14:42:46.014289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.402 qpair failed and we were unable to recover it.
00:29:05.402 [2024-10-14 14:42:46.014597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.402 [2024-10-14 14:42:46.014609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.402 qpair failed and we were unable to recover it.
00:29:05.402 [2024-10-14 14:42:46.014923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.402 [2024-10-14 14:42:46.014946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.402 qpair failed and we were unable to recover it.
00:29:05.402 [2024-10-14 14:42:46.015287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.402 [2024-10-14 14:42:46.015301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.402 qpair failed and we were unable to recover it.
00:29:05.402 [2024-10-14 14:42:46.015601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.402 [2024-10-14 14:42:46.015612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.402 qpair failed and we were unable to recover it.
00:29:05.402 [2024-10-14 14:42:46.015931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.402 [2024-10-14 14:42:46.015941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.402 qpair failed and we were unable to recover it.
00:29:05.402 [2024-10-14 14:42:46.016231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.402 [2024-10-14 14:42:46.016242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.402 qpair failed and we were unable to recover it.
00:29:05.402 [2024-10-14 14:42:46.016498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.402 [2024-10-14 14:42:46.016508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.402 qpair failed and we were unable to recover it.
00:29:05.402 [2024-10-14 14:42:46.016618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.402 [2024-10-14 14:42:46.016628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.402 qpair failed and we were unable to recover it.
00:29:05.402 [2024-10-14 14:42:46.016815] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8dc0f0 is same with the state(6) to be set
00:29:05.402 Read completed with error (sct=0, sc=8)
00:29:05.402 starting I/O failed
00:29:05.402 Read completed with error (sct=0, sc=8)
00:29:05.402 starting I/O failed
00:29:05.402 Read completed with error (sct=0, sc=8)
00:29:05.402 starting I/O failed
00:29:05.402 Read completed with error (sct=0, sc=8)
00:29:05.402 starting I/O failed
00:29:05.402 Read completed with error (sct=0, sc=8)
00:29:05.402 starting I/O failed
00:29:05.402 Read completed with error (sct=0, sc=8)
00:29:05.402 starting I/O failed
00:29:05.402 Read completed with error (sct=0, sc=8)
00:29:05.402 starting I/O failed
00:29:05.402 Read completed with error (sct=0, sc=8)
00:29:05.402 starting I/O failed
00:29:05.402 Read completed with error (sct=0, sc=8)
00:29:05.402 starting I/O failed
00:29:05.402 Read completed with error (sct=0, sc=8)
00:29:05.402 starting I/O failed
00:29:05.402 Write completed with error (sct=0, sc=8)
00:29:05.402 starting I/O failed
00:29:05.402 Write completed with error (sct=0, sc=8)
00:29:05.402 starting I/O failed
00:29:05.402 Read completed with error (sct=0, sc=8)
00:29:05.402 starting I/O failed
00:29:05.402 Write completed with error (sct=0, sc=8)
00:29:05.402 starting I/O failed
00:29:05.402 Read completed with error (sct=0, sc=8)
00:29:05.402 starting I/O failed
00:29:05.402 Write completed with error (sct=0, sc=8)
00:29:05.402 starting I/O failed
00:29:05.402 Write completed with error (sct=0, sc=8)
00:29:05.402 starting I/O failed
00:29:05.402 Read completed with error (sct=0, sc=8)
00:29:05.402 starting I/O failed
00:29:05.402 Read completed with error (sct=0, sc=8)
00:29:05.402 starting I/O failed
00:29:05.402 Write completed with error (sct=0, sc=8)
00:29:05.402 starting I/O failed
00:29:05.402 Read completed with error (sct=0, sc=8)
00:29:05.402 starting I/O failed
00:29:05.402 Write completed with error (sct=0, sc=8)
00:29:05.402 starting I/O failed
00:29:05.402 Read completed with error (sct=0, sc=8)
00:29:05.402 starting I/O failed
00:29:05.402 Read completed with error (sct=0, sc=8)
00:29:05.402 starting I/O failed
00:29:05.402 Read completed with error (sct=0, sc=8)
00:29:05.402 starting I/O failed
00:29:05.402 Read completed with error (sct=0, sc=8)
00:29:05.402 starting I/O failed
00:29:05.402 Read completed with error (sct=0, sc=8)
00:29:05.402 starting I/O failed
00:29:05.402 Write completed with error (sct=0, sc=8)
00:29:05.402 starting I/O failed
00:29:05.402 Read completed with error (sct=0, sc=8)
00:29:05.402 starting I/O failed
00:29:05.402 Read completed with error (sct=0, sc=8)
00:29:05.402 starting I/O failed
00:29:05.402 Read completed with error (sct=0, sc=8)
00:29:05.402 starting I/O failed
00:29:05.402 Write completed with error (sct=0, sc=8)
00:29:05.402 starting I/O failed
00:29:05.402 [2024-10-14 14:42:46.017614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:05.402 [2024-10-14 14:42:46.017988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.402 [2024-10-14 14:42:46.018039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe644000b90 with addr=10.0.0.2, port=4420
00:29:05.402 qpair failed and we were unable to recover it.
00:29:05.402 [2024-10-14 14:42:46.018430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.402 [2024-10-14 14:42:46.018441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.402 qpair failed and we were unable to recover it.
00:29:05.402 [2024-10-14 14:42:46.018754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.402 [2024-10-14 14:42:46.018764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.402 qpair failed and we were unable to recover it.
00:29:05.402 [2024-10-14 14:42:46.019072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.402 [2024-10-14 14:42:46.019082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.402 qpair failed and we were unable to recover it.
00:29:05.402 [2024-10-14 14:42:46.019415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.402 [2024-10-14 14:42:46.019426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.402 qpair failed and we were unable to recover it.
00:29:05.402 [2024-10-14 14:42:46.019640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.402 [2024-10-14 14:42:46.019650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.402 qpair failed and we were unable to recover it.
00:29:05.402 [2024-10-14 14:42:46.019863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.402 [2024-10-14 14:42:46.019874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.402 qpair failed and we were unable to recover it.
00:29:05.402 [2024-10-14 14:42:46.020153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.402 [2024-10-14 14:42:46.020163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.402 qpair failed and we were unable to recover it.
00:29:05.402 [2024-10-14 14:42:46.020489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.402 [2024-10-14 14:42:46.020499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.402 qpair failed and we were unable to recover it.
00:29:05.402 [2024-10-14 14:42:46.020733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.402 [2024-10-14 14:42:46.020743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.402 qpair failed and we were unable to recover it.
00:29:05.402 [2024-10-14 14:42:46.021127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.402 [2024-10-14 14:42:46.021138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.402 qpair failed and we were unable to recover it.
00:29:05.402 [2024-10-14 14:42:46.021455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.403 [2024-10-14 14:42:46.021465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.403 qpair failed and we were unable to recover it.
00:29:05.403 [2024-10-14 14:42:46.021776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.403 [2024-10-14 14:42:46.021794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.403 qpair failed and we were unable to recover it.
00:29:05.403 [2024-10-14 14:42:46.022112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.403 [2024-10-14 14:42:46.022123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.403 qpair failed and we were unable to recover it.
00:29:05.403 [2024-10-14 14:42:46.022443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.403 [2024-10-14 14:42:46.022453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.403 qpair failed and we were unable to recover it.
00:29:05.403 [2024-10-14 14:42:46.022747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.403 [2024-10-14 14:42:46.022757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.403 qpair failed and we were unable to recover it.
00:29:05.403 [2024-10-14 14:42:46.023055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.403 [2024-10-14 14:42:46.023072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.403 qpair failed and we were unable to recover it. 00:29:05.403 [2024-10-14 14:42:46.023374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.403 [2024-10-14 14:42:46.023383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.403 qpair failed and we were unable to recover it. 00:29:05.403 [2024-10-14 14:42:46.023710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.403 [2024-10-14 14:42:46.023720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.403 qpair failed and we were unable to recover it. 00:29:05.403 [2024-10-14 14:42:46.023892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.403 [2024-10-14 14:42:46.023902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.403 qpair failed and we were unable to recover it. 00:29:05.403 [2024-10-14 14:42:46.024205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.403 [2024-10-14 14:42:46.024215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.403 qpair failed and we were unable to recover it. 
00:29:05.403 [2024-10-14 14:42:46.024550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.403 [2024-10-14 14:42:46.024560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.403 qpair failed and we were unable to recover it. 00:29:05.403 [2024-10-14 14:42:46.024863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.403 [2024-10-14 14:42:46.024874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.403 qpair failed and we were unable to recover it. 00:29:05.403 [2024-10-14 14:42:46.025205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.403 [2024-10-14 14:42:46.025217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.403 qpair failed and we were unable to recover it. 00:29:05.403 [2024-10-14 14:42:46.025598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.403 [2024-10-14 14:42:46.025608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.403 qpair failed and we were unable to recover it. 00:29:05.403 [2024-10-14 14:42:46.025899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.403 [2024-10-14 14:42:46.025909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.403 qpair failed and we were unable to recover it. 
00:29:05.403 [2024-10-14 14:42:46.026306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.403 [2024-10-14 14:42:46.026316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.403 qpair failed and we were unable to recover it. 00:29:05.403 [2024-10-14 14:42:46.026656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.403 [2024-10-14 14:42:46.026666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.403 qpair failed and we were unable to recover it. 00:29:05.403 [2024-10-14 14:42:46.026966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.403 [2024-10-14 14:42:46.026976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.403 qpair failed and we were unable to recover it. 00:29:05.403 [2024-10-14 14:42:46.027298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.403 [2024-10-14 14:42:46.027308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.403 qpair failed and we were unable to recover it. 00:29:05.403 [2024-10-14 14:42:46.027610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.403 [2024-10-14 14:42:46.027620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.403 qpair failed and we were unable to recover it. 
00:29:05.403 [2024-10-14 14:42:46.027936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.403 [2024-10-14 14:42:46.027947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.403 qpair failed and we were unable to recover it. 00:29:05.403 [2024-10-14 14:42:46.028316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.403 [2024-10-14 14:42:46.028327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.403 qpair failed and we were unable to recover it. 00:29:05.403 [2024-10-14 14:42:46.028653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.403 [2024-10-14 14:42:46.028664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.403 qpair failed and we were unable to recover it. 00:29:05.403 [2024-10-14 14:42:46.028963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.403 [2024-10-14 14:42:46.028975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.403 qpair failed and we were unable to recover it. 00:29:05.403 [2024-10-14 14:42:46.029318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.403 [2024-10-14 14:42:46.029329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.403 qpair failed and we were unable to recover it. 
00:29:05.403 [2024-10-14 14:42:46.029661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.403 [2024-10-14 14:42:46.029672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.403 qpair failed and we were unable to recover it. 00:29:05.403 [2024-10-14 14:42:46.029971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.403 [2024-10-14 14:42:46.029982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.403 qpair failed and we were unable to recover it. 00:29:05.403 [2024-10-14 14:42:46.030288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.403 [2024-10-14 14:42:46.030300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.403 qpair failed and we were unable to recover it. 00:29:05.403 [2024-10-14 14:42:46.030655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.403 [2024-10-14 14:42:46.030665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.403 qpair failed and we were unable to recover it. 00:29:05.403 [2024-10-14 14:42:46.031004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.403 [2024-10-14 14:42:46.031014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.403 qpair failed and we were unable to recover it. 
00:29:05.403 [2024-10-14 14:42:46.031325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.403 [2024-10-14 14:42:46.031337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.403 qpair failed and we were unable to recover it. 00:29:05.403 [2024-10-14 14:42:46.031633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.403 [2024-10-14 14:42:46.031644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.403 qpair failed and we were unable to recover it. 00:29:05.403 [2024-10-14 14:42:46.031836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.403 [2024-10-14 14:42:46.031846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.403 qpair failed and we were unable to recover it. 00:29:05.403 [2024-10-14 14:42:46.032133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.403 [2024-10-14 14:42:46.032143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.403 qpair failed and we were unable to recover it. 00:29:05.403 [2024-10-14 14:42:46.032457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.403 [2024-10-14 14:42:46.032467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.403 qpair failed and we were unable to recover it. 
00:29:05.403 [2024-10-14 14:42:46.032773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.403 [2024-10-14 14:42:46.032783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.403 qpair failed and we were unable to recover it. 00:29:05.403 [2024-10-14 14:42:46.032958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.403 [2024-10-14 14:42:46.032968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.403 qpair failed and we were unable to recover it. 00:29:05.403 [2024-10-14 14:42:46.033375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.403 [2024-10-14 14:42:46.033385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.403 qpair failed and we were unable to recover it. 00:29:05.403 [2024-10-14 14:42:46.033579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.403 [2024-10-14 14:42:46.033590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.403 qpair failed and we were unable to recover it. 00:29:05.403 [2024-10-14 14:42:46.033782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.403 [2024-10-14 14:42:46.033795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.403 qpair failed and we were unable to recover it. 
00:29:05.403 [2024-10-14 14:42:46.034190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.404 [2024-10-14 14:42:46.034200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.404 qpair failed and we were unable to recover it. 00:29:05.404 [2024-10-14 14:42:46.034512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.404 [2024-10-14 14:42:46.034523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.404 qpair failed and we were unable to recover it. 00:29:05.404 [2024-10-14 14:42:46.034711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.404 [2024-10-14 14:42:46.034722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.404 qpair failed and we were unable to recover it. 00:29:05.404 [2024-10-14 14:42:46.035024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.404 [2024-10-14 14:42:46.035034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.404 qpair failed and we were unable to recover it. 00:29:05.404 [2024-10-14 14:42:46.035340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.404 [2024-10-14 14:42:46.035350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.404 qpair failed and we were unable to recover it. 
00:29:05.404 [2024-10-14 14:42:46.035676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.404 [2024-10-14 14:42:46.035687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.404 qpair failed and we were unable to recover it. 00:29:05.404 [2024-10-14 14:42:46.035983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.404 [2024-10-14 14:42:46.035993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.404 qpair failed and we were unable to recover it. 00:29:05.404 [2024-10-14 14:42:46.036361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.404 [2024-10-14 14:42:46.036371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.404 qpair failed and we were unable to recover it. 00:29:05.404 [2024-10-14 14:42:46.036536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.404 [2024-10-14 14:42:46.036547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.404 qpair failed and we were unable to recover it. 00:29:05.404 [2024-10-14 14:42:46.036854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.404 [2024-10-14 14:42:46.036864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.404 qpair failed and we were unable to recover it. 
00:29:05.404 [2024-10-14 14:42:46.037222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.404 [2024-10-14 14:42:46.037233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.404 qpair failed and we were unable to recover it. 00:29:05.404 [2024-10-14 14:42:46.037564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.404 [2024-10-14 14:42:46.037574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.404 qpair failed and we were unable to recover it. 00:29:05.404 [2024-10-14 14:42:46.037743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.404 [2024-10-14 14:42:46.037755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.404 qpair failed and we were unable to recover it. 00:29:05.404 [2024-10-14 14:42:46.038111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.404 [2024-10-14 14:42:46.038122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.404 qpair failed and we were unable to recover it. 00:29:05.404 [2024-10-14 14:42:46.038452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.404 [2024-10-14 14:42:46.038462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.404 qpair failed and we were unable to recover it. 
00:29:05.404 [2024-10-14 14:42:46.038785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.404 [2024-10-14 14:42:46.038796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.404 qpair failed and we were unable to recover it. 00:29:05.404 [2024-10-14 14:42:46.039029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.404 [2024-10-14 14:42:46.039040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.404 qpair failed and we were unable to recover it. 00:29:05.404 [2024-10-14 14:42:46.039345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.404 [2024-10-14 14:42:46.039355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.404 qpair failed and we were unable to recover it. 00:29:05.404 [2024-10-14 14:42:46.039646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.404 [2024-10-14 14:42:46.039657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.404 qpair failed and we were unable to recover it. 00:29:05.404 [2024-10-14 14:42:46.040005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.404 [2024-10-14 14:42:46.040016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.404 qpair failed and we were unable to recover it. 
00:29:05.404 [2024-10-14 14:42:46.040313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.404 [2024-10-14 14:42:46.040324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.404 qpair failed and we were unable to recover it. 00:29:05.404 [2024-10-14 14:42:46.040616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.404 [2024-10-14 14:42:46.040627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.404 qpair failed and we were unable to recover it. 00:29:05.404 [2024-10-14 14:42:46.040946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.404 [2024-10-14 14:42:46.040957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.404 qpair failed and we were unable to recover it. 00:29:05.404 [2024-10-14 14:42:46.041256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.404 [2024-10-14 14:42:46.041267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.404 qpair failed and we were unable to recover it. 00:29:05.404 [2024-10-14 14:42:46.041569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.404 [2024-10-14 14:42:46.041581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.404 qpair failed and we were unable to recover it. 
00:29:05.404 [2024-10-14 14:42:46.041943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.404 [2024-10-14 14:42:46.041954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.404 qpair failed and we were unable to recover it. 00:29:05.404 [2024-10-14 14:42:46.042280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.404 [2024-10-14 14:42:46.042292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.404 qpair failed and we were unable to recover it. 00:29:05.404 [2024-10-14 14:42:46.042583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.404 [2024-10-14 14:42:46.042594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.404 qpair failed and we were unable to recover it. 00:29:05.404 [2024-10-14 14:42:46.042782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.404 [2024-10-14 14:42:46.042793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.404 qpair failed and we were unable to recover it. 00:29:05.404 [2024-10-14 14:42:46.043095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.404 [2024-10-14 14:42:46.043106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.404 qpair failed and we were unable to recover it. 
00:29:05.404 [2024-10-14 14:42:46.043531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.404 [2024-10-14 14:42:46.043541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.404 qpair failed and we were unable to recover it. 00:29:05.404 [2024-10-14 14:42:46.043818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.404 [2024-10-14 14:42:46.043828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.404 qpair failed and we were unable to recover it. 00:29:05.404 [2024-10-14 14:42:46.044127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.404 [2024-10-14 14:42:46.044137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.404 qpair failed and we were unable to recover it. 00:29:05.404 [2024-10-14 14:42:46.044347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.404 [2024-10-14 14:42:46.044357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.404 qpair failed and we were unable to recover it. 00:29:05.404 [2024-10-14 14:42:46.044680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.404 [2024-10-14 14:42:46.044690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.404 qpair failed and we were unable to recover it. 
00:29:05.404 [2024-10-14 14:42:46.044978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.404 [2024-10-14 14:42:46.044988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.404 qpair failed and we were unable to recover it. 00:29:05.404 [2024-10-14 14:42:46.045322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.404 [2024-10-14 14:42:46.045332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.404 qpair failed and we were unable to recover it. 00:29:05.404 [2024-10-14 14:42:46.045636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.404 [2024-10-14 14:42:46.045655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.404 qpair failed and we were unable to recover it. 00:29:05.404 [2024-10-14 14:42:46.045990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.404 [2024-10-14 14:42:46.046000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.404 qpair failed and we were unable to recover it. 00:29:05.404 [2024-10-14 14:42:46.046242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.404 [2024-10-14 14:42:46.046252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.404 qpair failed and we were unable to recover it. 
00:29:05.404 [2024-10-14 14:42:46.046586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.404 [2024-10-14 14:42:46.046598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.404 qpair failed and we were unable to recover it. 00:29:05.405 [2024-10-14 14:42:46.046873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.405 [2024-10-14 14:42:46.046884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.405 qpair failed and we were unable to recover it. 00:29:05.405 [2024-10-14 14:42:46.047180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.405 [2024-10-14 14:42:46.047190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.405 qpair failed and we were unable to recover it. 00:29:05.405 [2024-10-14 14:42:46.047522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.405 [2024-10-14 14:42:46.047533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.405 qpair failed and we were unable to recover it. 00:29:05.405 [2024-10-14 14:42:46.047849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.405 [2024-10-14 14:42:46.047859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.405 qpair failed and we were unable to recover it. 
00:29:05.405 [2024-10-14 14:42:46.048156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.405 [2024-10-14 14:42:46.048166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.405 qpair failed and we were unable to recover it. 00:29:05.405 [2024-10-14 14:42:46.048490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.405 [2024-10-14 14:42:46.048500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.405 qpair failed and we were unable to recover it. 00:29:05.405 [2024-10-14 14:42:46.048763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.405 [2024-10-14 14:42:46.048773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.405 qpair failed and we were unable to recover it. 00:29:05.405 [2024-10-14 14:42:46.049059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.405 [2024-10-14 14:42:46.049073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.405 qpair failed and we were unable to recover it. 00:29:05.405 [2024-10-14 14:42:46.049382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.405 [2024-10-14 14:42:46.049393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.405 qpair failed and we were unable to recover it. 
00:29:05.405 [2024-10-14 14:42:46.049725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.405 [2024-10-14 14:42:46.049736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.405 qpair failed and we were unable to recover it.
00:29:05.405 [2024-10-14 14:42:46.050013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.405 [2024-10-14 14:42:46.050023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.405 qpair failed and we were unable to recover it.
00:29:05.405 [2024-10-14 14:42:46.050249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.405 [2024-10-14 14:42:46.050261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.405 qpair failed and we were unable to recover it.
00:29:05.405 [2024-10-14 14:42:46.050580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.405 [2024-10-14 14:42:46.050590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.405 qpair failed and we were unable to recover it.
00:29:05.405 [2024-10-14 14:42:46.050778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.405 [2024-10-14 14:42:46.050788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.405 qpair failed and we were unable to recover it.
00:29:05.405 [2024-10-14 14:42:46.051005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.405 [2024-10-14 14:42:46.051016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.405 qpair failed and we were unable to recover it.
00:29:05.405 [2024-10-14 14:42:46.051318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.405 [2024-10-14 14:42:46.051328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.405 qpair failed and we were unable to recover it.
00:29:05.405 [2024-10-14 14:42:46.051493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.405 [2024-10-14 14:42:46.051504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.405 qpair failed and we were unable to recover it.
00:29:05.405 [2024-10-14 14:42:46.051826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.405 [2024-10-14 14:42:46.051837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.405 qpair failed and we were unable to recover it.
00:29:05.405 [2024-10-14 14:42:46.052049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.405 [2024-10-14 14:42:46.052060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.405 qpair failed and we were unable to recover it.
00:29:05.405 [2024-10-14 14:42:46.052271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.405 [2024-10-14 14:42:46.052283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.405 qpair failed and we were unable to recover it.
00:29:05.405 [2024-10-14 14:42:46.052569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.405 [2024-10-14 14:42:46.052580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.405 qpair failed and we were unable to recover it.
00:29:05.405 [2024-10-14 14:42:46.052800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.405 [2024-10-14 14:42:46.052811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.405 qpair failed and we were unable to recover it.
00:29:05.405 [2024-10-14 14:42:46.053098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.405 [2024-10-14 14:42:46.053108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.405 qpair failed and we were unable to recover it.
00:29:05.405 [2024-10-14 14:42:46.053409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.405 [2024-10-14 14:42:46.053420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.405 qpair failed and we were unable to recover it.
00:29:05.405 [2024-10-14 14:42:46.053623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.405 [2024-10-14 14:42:46.053633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.405 qpair failed and we were unable to recover it.
00:29:05.405 [2024-10-14 14:42:46.053910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.405 [2024-10-14 14:42:46.053920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.405 qpair failed and we were unable to recover it.
00:29:05.405 [2024-10-14 14:42:46.054131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.405 [2024-10-14 14:42:46.054143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.405 qpair failed and we were unable to recover it.
00:29:05.405 [2024-10-14 14:42:46.054448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.405 [2024-10-14 14:42:46.054459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.405 qpair failed and we were unable to recover it.
00:29:05.405 [2024-10-14 14:42:46.054778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.405 [2024-10-14 14:42:46.054788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.405 qpair failed and we were unable to recover it.
00:29:05.405 [2024-10-14 14:42:46.055068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.405 [2024-10-14 14:42:46.055079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.405 qpair failed and we were unable to recover it.
00:29:05.405 [2024-10-14 14:42:46.055289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.405 [2024-10-14 14:42:46.055300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.405 qpair failed and we were unable to recover it.
00:29:05.405 [2024-10-14 14:42:46.055624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.405 [2024-10-14 14:42:46.055634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.405 qpair failed and we were unable to recover it.
00:29:05.405 [2024-10-14 14:42:46.055918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.405 [2024-10-14 14:42:46.055929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.405 qpair failed and we were unable to recover it.
00:29:05.405 [2024-10-14 14:42:46.056214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.405 [2024-10-14 14:42:46.056225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.405 qpair failed and we were unable to recover it.
00:29:05.405 [2024-10-14 14:42:46.056496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.405 [2024-10-14 14:42:46.056506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.405 qpair failed and we were unable to recover it.
00:29:05.405 [2024-10-14 14:42:46.056780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.405 [2024-10-14 14:42:46.056790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.405 qpair failed and we were unable to recover it.
00:29:05.405 [2024-10-14 14:42:46.057133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.405 [2024-10-14 14:42:46.057143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.405 qpair failed and we were unable to recover it.
00:29:05.405 [2024-10-14 14:42:46.057446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.405 [2024-10-14 14:42:46.057455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.405 qpair failed and we were unable to recover it.
00:29:05.405 [2024-10-14 14:42:46.057787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.405 [2024-10-14 14:42:46.057797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.405 qpair failed and we were unable to recover it.
00:29:05.405 [2024-10-14 14:42:46.058113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.405 [2024-10-14 14:42:46.058125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.405 qpair failed and we were unable to recover it.
00:29:05.405 [2024-10-14 14:42:46.058394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.405 [2024-10-14 14:42:46.058404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.405 qpair failed and we were unable to recover it.
00:29:05.406 [2024-10-14 14:42:46.058715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.406 [2024-10-14 14:42:46.058726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.406 qpair failed and we were unable to recover it.
00:29:05.406 [2024-10-14 14:42:46.059037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.406 [2024-10-14 14:42:46.059046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.406 qpair failed and we were unable to recover it.
00:29:05.406 [2024-10-14 14:42:46.059438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.406 [2024-10-14 14:42:46.059448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.406 qpair failed and we were unable to recover it.
00:29:05.406 [2024-10-14 14:42:46.059738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.406 [2024-10-14 14:42:46.059749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.406 qpair failed and we were unable to recover it.
00:29:05.406 [2024-10-14 14:42:46.060060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.406 [2024-10-14 14:42:46.060073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.406 qpair failed and we were unable to recover it.
00:29:05.406 [2024-10-14 14:42:46.060380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.406 [2024-10-14 14:42:46.060390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.406 qpair failed and we were unable to recover it.
00:29:05.406 [2024-10-14 14:42:46.060706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.406 [2024-10-14 14:42:46.060717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.406 qpair failed and we were unable to recover it.
00:29:05.406 [2024-10-14 14:42:46.061017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.406 [2024-10-14 14:42:46.061028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.406 qpair failed and we were unable to recover it.
00:29:05.406 [2024-10-14 14:42:46.061332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.406 [2024-10-14 14:42:46.061342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.406 qpair failed and we were unable to recover it.
00:29:05.406 [2024-10-14 14:42:46.061743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.406 [2024-10-14 14:42:46.061754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.406 qpair failed and we were unable to recover it.
00:29:05.406 [2024-10-14 14:42:46.061979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.406 [2024-10-14 14:42:46.061989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.406 qpair failed and we were unable to recover it.
00:29:05.406 [2024-10-14 14:42:46.062331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.406 [2024-10-14 14:42:46.062343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.406 qpair failed and we were unable to recover it.
00:29:05.406 [2024-10-14 14:42:46.062648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.406 [2024-10-14 14:42:46.062659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.406 qpair failed and we were unable to recover it.
00:29:05.406 [2024-10-14 14:42:46.062973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.406 [2024-10-14 14:42:46.062984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.406 qpair failed and we were unable to recover it.
00:29:05.406 [2024-10-14 14:42:46.063331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.406 [2024-10-14 14:42:46.063342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.406 qpair failed and we were unable to recover it.
00:29:05.406 [2024-10-14 14:42:46.063633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.406 [2024-10-14 14:42:46.063643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.406 qpair failed and we were unable to recover it.
00:29:05.406 [2024-10-14 14:42:46.063953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.406 [2024-10-14 14:42:46.063963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.406 qpair failed and we were unable to recover it.
00:29:05.406 [2024-10-14 14:42:46.064308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.406 [2024-10-14 14:42:46.064320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.406 qpair failed and we were unable to recover it.
00:29:05.406 [2024-10-14 14:42:46.064626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.406 [2024-10-14 14:42:46.064637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.406 qpair failed and we were unable to recover it.
00:29:05.406 [2024-10-14 14:42:46.064851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.406 [2024-10-14 14:42:46.064862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.406 qpair failed and we were unable to recover it.
00:29:05.406 [2024-10-14 14:42:46.065022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.406 [2024-10-14 14:42:46.065033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.406 qpair failed and we were unable to recover it.
00:29:05.406 [2024-10-14 14:42:46.065210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.406 [2024-10-14 14:42:46.065220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.406 qpair failed and we were unable to recover it.
00:29:05.406 [2024-10-14 14:42:46.065603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.406 [2024-10-14 14:42:46.065613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.406 qpair failed and we were unable to recover it.
00:29:05.406 [2024-10-14 14:42:46.065928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.406 [2024-10-14 14:42:46.065939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.406 qpair failed and we were unable to recover it.
00:29:05.406 [2024-10-14 14:42:46.066178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.406 [2024-10-14 14:42:46.066190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.406 qpair failed and we were unable to recover it.
00:29:05.406 [2024-10-14 14:42:46.066541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.406 [2024-10-14 14:42:46.066552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.406 qpair failed and we were unable to recover it.
00:29:05.406 [2024-10-14 14:42:46.066859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.406 [2024-10-14 14:42:46.066871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.406 qpair failed and we were unable to recover it.
00:29:05.406 [2024-10-14 14:42:46.067176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.406 [2024-10-14 14:42:46.067186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.406 qpair failed and we were unable to recover it.
00:29:05.406 [2024-10-14 14:42:46.067477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.406 [2024-10-14 14:42:46.067487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.406 qpair failed and we were unable to recover it.
00:29:05.406 [2024-10-14 14:42:46.067849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.406 [2024-10-14 14:42:46.067860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.406 qpair failed and we were unable to recover it.
00:29:05.406 [2024-10-14 14:42:46.068192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.406 [2024-10-14 14:42:46.068203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.406 qpair failed and we were unable to recover it.
00:29:05.406 [2024-10-14 14:42:46.068495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.406 [2024-10-14 14:42:46.068505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.406 qpair failed and we were unable to recover it.
00:29:05.406 [2024-10-14 14:42:46.068784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.406 [2024-10-14 14:42:46.068793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.406 qpair failed and we were unable to recover it.
00:29:05.406 [2024-10-14 14:42:46.069082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.406 [2024-10-14 14:42:46.069092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.406 qpair failed and we were unable to recover it.
00:29:05.406 [2024-10-14 14:42:46.069408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.406 [2024-10-14 14:42:46.069418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.406 qpair failed and we were unable to recover it.
00:29:05.406 [2024-10-14 14:42:46.069581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.406 [2024-10-14 14:42:46.069592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.406 qpair failed and we were unable to recover it.
00:29:05.407 [2024-10-14 14:42:46.069854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.407 [2024-10-14 14:42:46.069872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.407 qpair failed and we were unable to recover it.
00:29:05.407 [2024-10-14 14:42:46.070184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.407 [2024-10-14 14:42:46.070197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.407 qpair failed and we were unable to recover it.
00:29:05.407 [2024-10-14 14:42:46.070406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.407 [2024-10-14 14:42:46.070425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.407 qpair failed and we were unable to recover it.
00:29:05.407 [2024-10-14 14:42:46.070594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.407 [2024-10-14 14:42:46.070610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.407 qpair failed and we were unable to recover it.
00:29:05.407 [2024-10-14 14:42:46.070955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.407 [2024-10-14 14:42:46.070968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.407 qpair failed and we were unable to recover it.
00:29:05.407 [2024-10-14 14:42:46.071293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.407 [2024-10-14 14:42:46.071304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.407 qpair failed and we were unable to recover it.
00:29:05.407 [2024-10-14 14:42:46.071542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.407 [2024-10-14 14:42:46.071551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.407 qpair failed and we were unable to recover it.
00:29:05.407 [2024-10-14 14:42:46.071861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.407 [2024-10-14 14:42:46.071871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.407 qpair failed and we were unable to recover it.
00:29:05.407 [2024-10-14 14:42:46.072167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.407 [2024-10-14 14:42:46.072177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.407 qpair failed and we were unable to recover it.
00:29:05.407 [2024-10-14 14:42:46.072545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.407 [2024-10-14 14:42:46.072554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.407 qpair failed and we were unable to recover it.
00:29:05.407 [2024-10-14 14:42:46.072785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.407 [2024-10-14 14:42:46.072795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.407 qpair failed and we were unable to recover it.
00:29:05.407 [2024-10-14 14:42:46.073086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.407 [2024-10-14 14:42:46.073096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.407 qpair failed and we were unable to recover it.
00:29:05.407 [2024-10-14 14:42:46.073282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.407 [2024-10-14 14:42:46.073292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.407 qpair failed and we were unable to recover it.
00:29:05.407 [2024-10-14 14:42:46.073586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.407 [2024-10-14 14:42:46.073595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.407 qpair failed and we were unable to recover it.
00:29:05.407 [2024-10-14 14:42:46.073894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.407 [2024-10-14 14:42:46.073904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.407 qpair failed and we were unable to recover it.
00:29:05.407 [2024-10-14 14:42:46.074179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.407 [2024-10-14 14:42:46.074190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.407 qpair failed and we were unable to recover it.
00:29:05.407 [2024-10-14 14:42:46.074438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.407 [2024-10-14 14:42:46.074447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.407 qpair failed and we were unable to recover it.
00:29:05.407 [2024-10-14 14:42:46.074883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.407 [2024-10-14 14:42:46.074896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.407 qpair failed and we were unable to recover it.
00:29:05.407 [2024-10-14 14:42:46.075135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.407 [2024-10-14 14:42:46.075145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.407 qpair failed and we were unable to recover it.
00:29:05.407 [2024-10-14 14:42:46.075483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.407 [2024-10-14 14:42:46.075493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.407 qpair failed and we were unable to recover it.
00:29:05.407 [2024-10-14 14:42:46.075861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.407 [2024-10-14 14:42:46.075872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.407 qpair failed and we were unable to recover it.
00:29:05.407 [2024-10-14 14:42:46.076058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.407 [2024-10-14 14:42:46.076077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.407 qpair failed and we were unable to recover it.
00:29:05.407 [2024-10-14 14:42:46.076389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.407 [2024-10-14 14:42:46.076398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.407 qpair failed and we were unable to recover it.
00:29:05.407 [2024-10-14 14:42:46.076680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.407 [2024-10-14 14:42:46.076690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.407 qpair failed and we were unable to recover it.
00:29:05.407 [2024-10-14 14:42:46.077001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.407 [2024-10-14 14:42:46.077011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.407 qpair failed and we were unable to recover it.
00:29:05.407 [2024-10-14 14:42:46.077215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.407 [2024-10-14 14:42:46.077225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.407 qpair failed and we were unable to recover it.
00:29:05.407 [2024-10-14 14:42:46.077528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.407 [2024-10-14 14:42:46.077537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.407 qpair failed and we were unable to recover it.
00:29:05.407 [2024-10-14 14:42:46.077842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.407 [2024-10-14 14:42:46.077859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.407 qpair failed and we were unable to recover it.
00:29:05.407 [2024-10-14 14:42:46.078179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.407 [2024-10-14 14:42:46.078189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.407 qpair failed and we were unable to recover it.
00:29:05.407 [2024-10-14 14:42:46.078382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.407 [2024-10-14 14:42:46.078391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.407 qpair failed and we were unable to recover it.
00:29:05.407 [2024-10-14 14:42:46.078735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.407 [2024-10-14 14:42:46.078745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.407 qpair failed and we were unable to recover it.
00:29:05.407 [2024-10-14 14:42:46.079139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.407 [2024-10-14 14:42:46.079150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.407 qpair failed and we were unable to recover it.
00:29:05.407 [2024-10-14 14:42:46.079473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.407 [2024-10-14 14:42:46.079483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.407 qpair failed and we were unable to recover it.
00:29:05.407 [2024-10-14 14:42:46.079865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.407 [2024-10-14 14:42:46.079875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.407 qpair failed and we were unable to recover it.
00:29:05.407 [2024-10-14 14:42:46.080158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.407 [2024-10-14 14:42:46.080168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.407 qpair failed and we were unable to recover it.
00:29:05.407 [2024-10-14 14:42:46.080474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.407 [2024-10-14 14:42:46.080484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.407 qpair failed and we were unable to recover it.
00:29:05.407 [2024-10-14 14:42:46.080688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.407 [2024-10-14 14:42:46.080697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.407 qpair failed and we were unable to recover it.
00:29:05.407 [2024-10-14 14:42:46.081023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.407 [2024-10-14 14:42:46.081033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.407 qpair failed and we were unable to recover it.
00:29:05.407 [2024-10-14 14:42:46.081342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.407 [2024-10-14 14:42:46.081353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.407 qpair failed and we were unable to recover it.
00:29:05.407 [2024-10-14 14:42:46.081622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.407 [2024-10-14 14:42:46.081631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.407 qpair failed and we were unable to recover it.
00:29:05.407 [2024-10-14 14:42:46.081947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.407 [2024-10-14 14:42:46.081956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.407 qpair failed and we were unable to recover it.
00:29:05.408 [2024-10-14 14:42:46.082320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.408 [2024-10-14 14:42:46.082331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.408 qpair failed and we were unable to recover it.
00:29:05.408 [2024-10-14 14:42:46.082645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.408 [2024-10-14 14:42:46.082655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.408 qpair failed and we were unable to recover it.
00:29:05.408 [2024-10-14 14:42:46.082848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.408 [2024-10-14 14:42:46.082858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.408 qpair failed and we were unable to recover it.
00:29:05.408 [2024-10-14 14:42:46.083050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.408 [2024-10-14 14:42:46.083060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.408 qpair failed and we were unable to recover it.
00:29:05.408 [2024-10-14 14:42:46.083382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.408 [2024-10-14 14:42:46.083392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.408 qpair failed and we were unable to recover it.
00:29:05.408 [2024-10-14 14:42:46.083590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.408 [2024-10-14 14:42:46.083600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.408 qpair failed and we were unable to recover it.
00:29:05.408 [2024-10-14 14:42:46.083851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.408 [2024-10-14 14:42:46.083861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.408 qpair failed and we were unable to recover it. 00:29:05.408 [2024-10-14 14:42:46.084059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.408 [2024-10-14 14:42:46.084076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.408 qpair failed and we were unable to recover it. 00:29:05.408 [2024-10-14 14:42:46.084418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.408 [2024-10-14 14:42:46.084427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.408 qpair failed and we were unable to recover it. 00:29:05.408 [2024-10-14 14:42:46.084638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.408 [2024-10-14 14:42:46.084648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.408 qpair failed and we were unable to recover it. 00:29:05.408 [2024-10-14 14:42:46.084944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.408 [2024-10-14 14:42:46.084954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.408 qpair failed and we were unable to recover it. 
00:29:05.408 [2024-10-14 14:42:46.085253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.408 [2024-10-14 14:42:46.085263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.408 qpair failed and we were unable to recover it. 00:29:05.408 [2024-10-14 14:42:46.085557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.408 [2024-10-14 14:42:46.085567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.408 qpair failed and we were unable to recover it. 00:29:05.408 [2024-10-14 14:42:46.085880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.408 [2024-10-14 14:42:46.085890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.408 qpair failed and we were unable to recover it. 00:29:05.408 [2024-10-14 14:42:46.086218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.408 [2024-10-14 14:42:46.086228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.408 qpair failed and we were unable to recover it. 00:29:05.408 [2024-10-14 14:42:46.086503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.408 [2024-10-14 14:42:46.086513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.408 qpair failed and we were unable to recover it. 
00:29:05.408 [2024-10-14 14:42:46.086842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.408 [2024-10-14 14:42:46.086851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.408 qpair failed and we were unable to recover it. 00:29:05.408 [2024-10-14 14:42:46.087171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.408 [2024-10-14 14:42:46.087184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.408 qpair failed and we were unable to recover it. 00:29:05.408 [2024-10-14 14:42:46.087482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.408 [2024-10-14 14:42:46.087492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.408 qpair failed and we were unable to recover it. 00:29:05.408 [2024-10-14 14:42:46.087812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.408 [2024-10-14 14:42:46.087822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.408 qpair failed and we were unable to recover it. 00:29:05.408 [2024-10-14 14:42:46.088127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.408 [2024-10-14 14:42:46.088138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.408 qpair failed and we were unable to recover it. 
00:29:05.408 [2024-10-14 14:42:46.088432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.408 [2024-10-14 14:42:46.088449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.408 qpair failed and we were unable to recover it. 00:29:05.408 [2024-10-14 14:42:46.088754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.408 [2024-10-14 14:42:46.088763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.408 qpair failed and we were unable to recover it. 00:29:05.408 [2024-10-14 14:42:46.089004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.408 [2024-10-14 14:42:46.089013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.408 qpair failed and we were unable to recover it. 00:29:05.408 [2024-10-14 14:42:46.089330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.408 [2024-10-14 14:42:46.089340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.408 qpair failed and we were unable to recover it. 00:29:05.408 [2024-10-14 14:42:46.089648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.408 [2024-10-14 14:42:46.089658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.408 qpair failed and we were unable to recover it. 
00:29:05.408 [2024-10-14 14:42:46.089905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.408 [2024-10-14 14:42:46.089915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.408 qpair failed and we were unable to recover it. 00:29:05.408 [2024-10-14 14:42:46.090227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.408 [2024-10-14 14:42:46.090238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.408 qpair failed and we were unable to recover it. 00:29:05.408 [2024-10-14 14:42:46.090438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.408 [2024-10-14 14:42:46.090448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.408 qpair failed and we were unable to recover it. 00:29:05.408 [2024-10-14 14:42:46.090771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.408 [2024-10-14 14:42:46.090781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.408 qpair failed and we were unable to recover it. 00:29:05.408 [2024-10-14 14:42:46.090855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.408 [2024-10-14 14:42:46.090865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.408 qpair failed and we were unable to recover it. 
00:29:05.408 [2024-10-14 14:42:46.091152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.408 [2024-10-14 14:42:46.091163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.408 qpair failed and we were unable to recover it. 00:29:05.408 [2024-10-14 14:42:46.091481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.408 [2024-10-14 14:42:46.091492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.408 qpair failed and we were unable to recover it. 00:29:05.408 [2024-10-14 14:42:46.091793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.408 [2024-10-14 14:42:46.091803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.408 qpair failed and we were unable to recover it. 00:29:05.408 [2024-10-14 14:42:46.092123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.408 [2024-10-14 14:42:46.092140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.408 qpair failed and we were unable to recover it. 00:29:05.408 [2024-10-14 14:42:46.092311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.408 [2024-10-14 14:42:46.092322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.408 qpair failed and we were unable to recover it. 
00:29:05.408 [2024-10-14 14:42:46.092631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.408 [2024-10-14 14:42:46.092641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.408 qpair failed and we were unable to recover it. 00:29:05.408 [2024-10-14 14:42:46.092921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.408 [2024-10-14 14:42:46.092930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.408 qpair failed and we were unable to recover it. 00:29:05.408 [2024-10-14 14:42:46.093097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.408 [2024-10-14 14:42:46.093108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.408 qpair failed and we were unable to recover it. 00:29:05.408 [2024-10-14 14:42:46.093426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.408 [2024-10-14 14:42:46.093435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.408 qpair failed and we were unable to recover it. 00:29:05.408 [2024-10-14 14:42:46.093732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.408 [2024-10-14 14:42:46.093743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.409 qpair failed and we were unable to recover it. 
00:29:05.409 [2024-10-14 14:42:46.094047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.409 [2024-10-14 14:42:46.094057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.409 qpair failed and we were unable to recover it. 00:29:05.409 [2024-10-14 14:42:46.094343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.409 [2024-10-14 14:42:46.094353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.409 qpair failed and we were unable to recover it. 00:29:05.409 [2024-10-14 14:42:46.094638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.409 [2024-10-14 14:42:46.094647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.409 qpair failed and we were unable to recover it. 00:29:05.409 [2024-10-14 14:42:46.094938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.409 [2024-10-14 14:42:46.094948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.409 qpair failed and we were unable to recover it. 00:29:05.409 [2024-10-14 14:42:46.095268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.409 [2024-10-14 14:42:46.095281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.409 qpair failed and we were unable to recover it. 
00:29:05.409 [2024-10-14 14:42:46.095609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.409 [2024-10-14 14:42:46.095621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.409 qpair failed and we were unable to recover it. 00:29:05.409 [2024-10-14 14:42:46.095956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.409 [2024-10-14 14:42:46.095967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.409 qpair failed and we were unable to recover it. 00:29:05.409 [2024-10-14 14:42:46.096248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.409 [2024-10-14 14:42:46.096259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.409 qpair failed and we were unable to recover it. 00:29:05.409 [2024-10-14 14:42:46.096448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.409 [2024-10-14 14:42:46.096459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.409 qpair failed and we were unable to recover it. 00:29:05.409 [2024-10-14 14:42:46.096821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.409 [2024-10-14 14:42:46.096832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.409 qpair failed and we were unable to recover it. 
00:29:05.409 [2024-10-14 14:42:46.097131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.409 [2024-10-14 14:42:46.097141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.409 qpair failed and we were unable to recover it. 00:29:05.409 [2024-10-14 14:42:46.097454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.409 [2024-10-14 14:42:46.097464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.409 qpair failed and we were unable to recover it. 00:29:05.409 [2024-10-14 14:42:46.097766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.409 [2024-10-14 14:42:46.097776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.409 qpair failed and we were unable to recover it. 00:29:05.409 [2024-10-14 14:42:46.098061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.409 [2024-10-14 14:42:46.098074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.409 qpair failed and we were unable to recover it. 00:29:05.409 [2024-10-14 14:42:46.098385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.409 [2024-10-14 14:42:46.098395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.409 qpair failed and we were unable to recover it. 
00:29:05.409 [2024-10-14 14:42:46.098700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.409 [2024-10-14 14:42:46.098710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.409 qpair failed and we were unable to recover it. 00:29:05.409 [2024-10-14 14:42:46.099014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.409 [2024-10-14 14:42:46.099024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.409 qpair failed and we were unable to recover it. 00:29:05.409 [2024-10-14 14:42:46.099309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.409 [2024-10-14 14:42:46.099319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.409 qpair failed and we were unable to recover it. 00:29:05.409 [2024-10-14 14:42:46.099631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.409 [2024-10-14 14:42:46.099640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.409 qpair failed and we were unable to recover it. 00:29:05.409 [2024-10-14 14:42:46.099971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.409 [2024-10-14 14:42:46.099980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.409 qpair failed and we were unable to recover it. 
00:29:05.409 [2024-10-14 14:42:46.100268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.409 [2024-10-14 14:42:46.100278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.409 qpair failed and we were unable to recover it. 00:29:05.409 [2024-10-14 14:42:46.100574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.409 [2024-10-14 14:42:46.100583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.409 qpair failed and we were unable to recover it. 00:29:05.684 [2024-10-14 14:42:46.100987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.684 [2024-10-14 14:42:46.100999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.684 qpair failed and we were unable to recover it. 00:29:05.684 [2024-10-14 14:42:46.101291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.684 [2024-10-14 14:42:46.101303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.684 qpair failed and we were unable to recover it. 00:29:05.684 [2024-10-14 14:42:46.101613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.684 [2024-10-14 14:42:46.101623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.684 qpair failed and we were unable to recover it. 
00:29:05.684 [2024-10-14 14:42:46.101933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.684 [2024-10-14 14:42:46.101943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.684 qpair failed and we were unable to recover it. 00:29:05.684 [2024-10-14 14:42:46.102311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.684 [2024-10-14 14:42:46.102321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.684 qpair failed and we were unable to recover it. 00:29:05.684 [2024-10-14 14:42:46.102540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.684 [2024-10-14 14:42:46.102549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.684 qpair failed and we were unable to recover it. 00:29:05.684 [2024-10-14 14:42:46.102868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.684 [2024-10-14 14:42:46.102878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.684 qpair failed and we were unable to recover it. 00:29:05.684 [2024-10-14 14:42:46.103174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.684 [2024-10-14 14:42:46.103184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.684 qpair failed and we were unable to recover it. 
00:29:05.684 [2024-10-14 14:42:46.103497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.684 [2024-10-14 14:42:46.103506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.684 qpair failed and we were unable to recover it. 00:29:05.684 [2024-10-14 14:42:46.103842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.684 [2024-10-14 14:42:46.103853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.684 qpair failed and we were unable to recover it. 00:29:05.684 [2024-10-14 14:42:46.104184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.684 [2024-10-14 14:42:46.104194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.684 qpair failed and we were unable to recover it. 00:29:05.684 [2024-10-14 14:42:46.104556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.684 [2024-10-14 14:42:46.104566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.684 qpair failed and we were unable to recover it. 00:29:05.684 [2024-10-14 14:42:46.104755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.684 [2024-10-14 14:42:46.104766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.684 qpair failed and we were unable to recover it. 
00:29:05.684 [2024-10-14 14:42:46.104931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.684 [2024-10-14 14:42:46.104941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.684 qpair failed and we were unable to recover it. 00:29:05.684 [2024-10-14 14:42:46.105254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.684 [2024-10-14 14:42:46.105264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.684 qpair failed and we were unable to recover it. 00:29:05.684 [2024-10-14 14:42:46.105575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.684 [2024-10-14 14:42:46.105585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.684 qpair failed and we were unable to recover it. 00:29:05.684 [2024-10-14 14:42:46.105849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.684 [2024-10-14 14:42:46.105858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.684 qpair failed and we were unable to recover it. 00:29:05.684 [2024-10-14 14:42:46.106182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.684 [2024-10-14 14:42:46.106193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.684 qpair failed and we were unable to recover it. 
00:29:05.684 [2024-10-14 14:42:46.106482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.684 [2024-10-14 14:42:46.106492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.684 qpair failed and we were unable to recover it. 00:29:05.684 [2024-10-14 14:42:46.106808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.684 [2024-10-14 14:42:46.106818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.684 qpair failed and we were unable to recover it. 00:29:05.684 [2024-10-14 14:42:46.107127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.685 [2024-10-14 14:42:46.107137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.685 qpair failed and we were unable to recover it. 00:29:05.685 [2024-10-14 14:42:46.107451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.685 [2024-10-14 14:42:46.107461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.685 qpair failed and we were unable to recover it. 00:29:05.685 [2024-10-14 14:42:46.107682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.685 [2024-10-14 14:42:46.107694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.685 qpair failed and we were unable to recover it. 
00:29:05.687 [2024-10-14 14:42:46.142204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.687 [2024-10-14 14:42:46.142214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.687 qpair failed and we were unable to recover it. 00:29:05.687 [2024-10-14 14:42:46.142524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.687 [2024-10-14 14:42:46.142534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.687 qpair failed and we were unable to recover it. 00:29:05.687 [2024-10-14 14:42:46.142840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.687 [2024-10-14 14:42:46.142850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.687 qpair failed and we were unable to recover it. 00:29:05.687 [2024-10-14 14:42:46.143155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.687 [2024-10-14 14:42:46.143165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.687 qpair failed and we were unable to recover it. 00:29:05.687 [2024-10-14 14:42:46.143445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.687 [2024-10-14 14:42:46.143454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.687 qpair failed and we were unable to recover it. 
00:29:05.687 [2024-10-14 14:42:46.143761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.687 [2024-10-14 14:42:46.143771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.687 qpair failed and we were unable to recover it. 00:29:05.687 [2024-10-14 14:42:46.143957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.687 [2024-10-14 14:42:46.143968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.687 qpair failed and we were unable to recover it. 00:29:05.687 [2024-10-14 14:42:46.144306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.687 [2024-10-14 14:42:46.144316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.687 qpair failed and we were unable to recover it. 00:29:05.687 [2024-10-14 14:42:46.144716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.687 [2024-10-14 14:42:46.144725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.687 qpair failed and we were unable to recover it. 00:29:05.687 [2024-10-14 14:42:46.145018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.687 [2024-10-14 14:42:46.145028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.687 qpair failed and we were unable to recover it. 
00:29:05.688 [2024-10-14 14:42:46.145336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.688 [2024-10-14 14:42:46.145346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.688 qpair failed and we were unable to recover it. 00:29:05.688 [2024-10-14 14:42:46.145507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.688 [2024-10-14 14:42:46.145518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.688 qpair failed and we were unable to recover it. 00:29:05.688 [2024-10-14 14:42:46.145870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.688 [2024-10-14 14:42:46.145880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.688 qpair failed and we were unable to recover it. 00:29:05.688 [2024-10-14 14:42:46.146082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.688 [2024-10-14 14:42:46.146092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.688 qpair failed and we were unable to recover it. 00:29:05.688 [2024-10-14 14:42:46.146393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.688 [2024-10-14 14:42:46.146403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.688 qpair failed and we were unable to recover it. 
00:29:05.688 [2024-10-14 14:42:46.146734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.688 [2024-10-14 14:42:46.146744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.688 qpair failed and we were unable to recover it. 00:29:05.688 [2024-10-14 14:42:46.147048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.688 [2024-10-14 14:42:46.147059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.688 qpair failed and we were unable to recover it. 00:29:05.688 [2024-10-14 14:42:46.147289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.688 [2024-10-14 14:42:46.147300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.688 qpair failed and we were unable to recover it. 00:29:05.688 [2024-10-14 14:42:46.147604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.688 [2024-10-14 14:42:46.147614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.688 qpair failed and we were unable to recover it. 00:29:05.688 [2024-10-14 14:42:46.147919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.688 [2024-10-14 14:42:46.147929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.688 qpair failed and we were unable to recover it. 
00:29:05.688 [2024-10-14 14:42:46.148222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.688 [2024-10-14 14:42:46.148232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.688 qpair failed and we were unable to recover it. 00:29:05.688 [2024-10-14 14:42:46.148544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.688 [2024-10-14 14:42:46.148562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.688 qpair failed and we were unable to recover it. 00:29:05.688 [2024-10-14 14:42:46.148859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.688 [2024-10-14 14:42:46.148869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.688 qpair failed and we were unable to recover it. 00:29:05.688 [2024-10-14 14:42:46.149184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.688 [2024-10-14 14:42:46.149195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.688 qpair failed and we were unable to recover it. 00:29:05.688 [2024-10-14 14:42:46.149512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.688 [2024-10-14 14:42:46.149521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.688 qpair failed and we were unable to recover it. 
00:29:05.688 [2024-10-14 14:42:46.149707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.688 [2024-10-14 14:42:46.149719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.688 qpair failed and we were unable to recover it. 00:29:05.688 [2024-10-14 14:42:46.150038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.688 [2024-10-14 14:42:46.150048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.688 qpair failed and we were unable to recover it. 00:29:05.688 [2024-10-14 14:42:46.150349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.688 [2024-10-14 14:42:46.150359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.688 qpair failed and we were unable to recover it. 00:29:05.688 [2024-10-14 14:42:46.150670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.688 [2024-10-14 14:42:46.150679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.688 qpair failed and we were unable to recover it. 00:29:05.688 [2024-10-14 14:42:46.150981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.688 [2024-10-14 14:42:46.150990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.688 qpair failed and we were unable to recover it. 
00:29:05.688 [2024-10-14 14:42:46.151275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.688 [2024-10-14 14:42:46.151285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.688 qpair failed and we were unable to recover it. 00:29:05.688 [2024-10-14 14:42:46.151597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.688 [2024-10-14 14:42:46.151606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.688 qpair failed and we were unable to recover it. 00:29:05.688 [2024-10-14 14:42:46.151955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.688 [2024-10-14 14:42:46.151965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.688 qpair failed and we were unable to recover it. 00:29:05.688 [2024-10-14 14:42:46.152261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.688 [2024-10-14 14:42:46.152273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.688 qpair failed and we were unable to recover it. 00:29:05.688 [2024-10-14 14:42:46.152598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.688 [2024-10-14 14:42:46.152609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.688 qpair failed and we were unable to recover it. 
00:29:05.688 [2024-10-14 14:42:46.152917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.688 [2024-10-14 14:42:46.152927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.688 qpair failed and we were unable to recover it. 00:29:05.688 [2024-10-14 14:42:46.153305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.688 [2024-10-14 14:42:46.153315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.688 qpair failed and we were unable to recover it. 00:29:05.688 [2024-10-14 14:42:46.153596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.688 [2024-10-14 14:42:46.153606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.688 qpair failed and we were unable to recover it. 00:29:05.688 [2024-10-14 14:42:46.153909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.688 [2024-10-14 14:42:46.153919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.688 qpair failed and we were unable to recover it. 00:29:05.688 [2024-10-14 14:42:46.154254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.688 [2024-10-14 14:42:46.154264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.688 qpair failed and we were unable to recover it. 
00:29:05.688 [2024-10-14 14:42:46.154466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.688 [2024-10-14 14:42:46.154476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.688 qpair failed and we were unable to recover it. 00:29:05.688 [2024-10-14 14:42:46.154749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.688 [2024-10-14 14:42:46.154758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.688 qpair failed and we were unable to recover it. 00:29:05.688 [2024-10-14 14:42:46.155045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.688 [2024-10-14 14:42:46.155055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.688 qpair failed and we were unable to recover it. 00:29:05.688 [2024-10-14 14:42:46.155332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.688 [2024-10-14 14:42:46.155342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.688 qpair failed and we were unable to recover it. 00:29:05.688 [2024-10-14 14:42:46.155507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.688 [2024-10-14 14:42:46.155518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.689 qpair failed and we were unable to recover it. 
00:29:05.689 [2024-10-14 14:42:46.155745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.689 [2024-10-14 14:42:46.155755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.689 qpair failed and we were unable to recover it. 00:29:05.689 [2024-10-14 14:42:46.156084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.689 [2024-10-14 14:42:46.156095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.689 qpair failed and we were unable to recover it. 00:29:05.689 [2024-10-14 14:42:46.156392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.689 [2024-10-14 14:42:46.156401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.689 qpair failed and we were unable to recover it. 00:29:05.689 [2024-10-14 14:42:46.156720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.689 [2024-10-14 14:42:46.156730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.689 qpair failed and we were unable to recover it. 00:29:05.689 [2024-10-14 14:42:46.156885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.689 [2024-10-14 14:42:46.156895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.689 qpair failed and we were unable to recover it. 
00:29:05.689 [2024-10-14 14:42:46.157226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.689 [2024-10-14 14:42:46.157236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.689 qpair failed and we were unable to recover it. 00:29:05.689 [2024-10-14 14:42:46.157539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.689 [2024-10-14 14:42:46.157550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.689 qpair failed and we were unable to recover it. 00:29:05.689 [2024-10-14 14:42:46.157862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.689 [2024-10-14 14:42:46.157873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.689 qpair failed and we were unable to recover it. 00:29:05.689 [2024-10-14 14:42:46.158179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.689 [2024-10-14 14:42:46.158189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.689 qpair failed and we were unable to recover it. 00:29:05.689 [2024-10-14 14:42:46.158472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.689 [2024-10-14 14:42:46.158481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.689 qpair failed and we were unable to recover it. 
00:29:05.689 [2024-10-14 14:42:46.158834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.689 [2024-10-14 14:42:46.158844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.689 qpair failed and we were unable to recover it. 00:29:05.689 [2024-10-14 14:42:46.159132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.689 [2024-10-14 14:42:46.159143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.689 qpair failed and we were unable to recover it. 00:29:05.689 [2024-10-14 14:42:46.159467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.689 [2024-10-14 14:42:46.159477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.689 qpair failed and we were unable to recover it. 00:29:05.689 [2024-10-14 14:42:46.159785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.689 [2024-10-14 14:42:46.159795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.689 qpair failed and we were unable to recover it. 00:29:05.689 [2024-10-14 14:42:46.160104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.689 [2024-10-14 14:42:46.160114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.689 qpair failed and we were unable to recover it. 
00:29:05.689 [2024-10-14 14:42:46.160423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.689 [2024-10-14 14:42:46.160433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.689 qpair failed and we were unable to recover it. 00:29:05.689 [2024-10-14 14:42:46.160734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.689 [2024-10-14 14:42:46.160744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.689 qpair failed and we were unable to recover it. 00:29:05.689 [2024-10-14 14:42:46.161127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.689 [2024-10-14 14:42:46.161137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.689 qpair failed and we were unable to recover it. 00:29:05.689 [2024-10-14 14:42:46.161468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.689 [2024-10-14 14:42:46.161478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.689 qpair failed and we were unable to recover it. 00:29:05.689 [2024-10-14 14:42:46.161769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.689 [2024-10-14 14:42:46.161779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.689 qpair failed and we were unable to recover it. 
00:29:05.689 [2024-10-14 14:42:46.162113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.689 [2024-10-14 14:42:46.162122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.689 qpair failed and we were unable to recover it. 00:29:05.689 [2024-10-14 14:42:46.162324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.689 [2024-10-14 14:42:46.162337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.689 qpair failed and we were unable to recover it. 00:29:05.689 [2024-10-14 14:42:46.162692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.689 [2024-10-14 14:42:46.162702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.689 qpair failed and we were unable to recover it. 00:29:05.689 [2024-10-14 14:42:46.163010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.689 [2024-10-14 14:42:46.163021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.689 qpair failed and we were unable to recover it. 00:29:05.689 [2024-10-14 14:42:46.163361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.689 [2024-10-14 14:42:46.163371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.689 qpair failed and we were unable to recover it. 
00:29:05.689 [2024-10-14 14:42:46.163676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.689 [2024-10-14 14:42:46.163686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.689 qpair failed and we were unable to recover it. 00:29:05.689 [2024-10-14 14:42:46.163992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.689 [2024-10-14 14:42:46.164003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.689 qpair failed and we were unable to recover it. 00:29:05.689 [2024-10-14 14:42:46.164302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.689 [2024-10-14 14:42:46.164313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.689 qpair failed and we were unable to recover it. 00:29:05.689 [2024-10-14 14:42:46.164583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.689 [2024-10-14 14:42:46.164593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.689 qpair failed and we were unable to recover it. 00:29:05.689 [2024-10-14 14:42:46.164899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.689 [2024-10-14 14:42:46.164909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.689 qpair failed and we were unable to recover it. 
00:29:05.689 [2024-10-14 14:42:46.165107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.689 [2024-10-14 14:42:46.165118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.689 qpair failed and we were unable to recover it.
[identical connect()/qpair failure repeats for every reconnect attempt from 14:42:46.165407 through 14:42:46.199751; only the timestamps differ]
00:29:05.692 [2024-10-14 14:42:46.200090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.692 [2024-10-14 14:42:46.200100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.692 qpair failed and we were unable to recover it. 00:29:05.692 [2024-10-14 14:42:46.200294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.692 [2024-10-14 14:42:46.200304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.692 qpair failed and we were unable to recover it. 00:29:05.692 [2024-10-14 14:42:46.200457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.692 [2024-10-14 14:42:46.200468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.692 qpair failed and we were unable to recover it. 00:29:05.692 [2024-10-14 14:42:46.200761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.692 [2024-10-14 14:42:46.200771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.692 qpair failed and we were unable to recover it. 00:29:05.692 [2024-10-14 14:42:46.200981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.692 [2024-10-14 14:42:46.200991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.692 qpair failed and we were unable to recover it. 
00:29:05.692 [2024-10-14 14:42:46.201304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.692 [2024-10-14 14:42:46.201315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.692 qpair failed and we were unable to recover it. 00:29:05.692 [2024-10-14 14:42:46.201530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.692 [2024-10-14 14:42:46.201540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.692 qpair failed and we were unable to recover it. 00:29:05.692 [2024-10-14 14:42:46.201843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.692 [2024-10-14 14:42:46.201853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.692 qpair failed and we were unable to recover it. 00:29:05.692 [2024-10-14 14:42:46.202067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.692 [2024-10-14 14:42:46.202077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.692 qpair failed and we were unable to recover it. 00:29:05.692 [2024-10-14 14:42:46.202260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.692 [2024-10-14 14:42:46.202270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.692 qpair failed and we were unable to recover it. 
00:29:05.692 [2024-10-14 14:42:46.202594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.692 [2024-10-14 14:42:46.202604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.692 qpair failed and we were unable to recover it. 00:29:05.692 [2024-10-14 14:42:46.202788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.692 [2024-10-14 14:42:46.202797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.692 qpair failed and we were unable to recover it. 00:29:05.692 [2024-10-14 14:42:46.203075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.692 [2024-10-14 14:42:46.203089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.692 qpair failed and we were unable to recover it. 00:29:05.692 [2024-10-14 14:42:46.203379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.692 [2024-10-14 14:42:46.203389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.692 qpair failed and we were unable to recover it. 00:29:05.692 [2024-10-14 14:42:46.203577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.692 [2024-10-14 14:42:46.203587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.692 qpair failed and we were unable to recover it. 
00:29:05.692 [2024-10-14 14:42:46.203870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.692 [2024-10-14 14:42:46.203881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.692 qpair failed and we were unable to recover it. 00:29:05.692 [2024-10-14 14:42:46.204187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.692 [2024-10-14 14:42:46.204197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.692 qpair failed and we were unable to recover it. 00:29:05.692 [2024-10-14 14:42:46.204484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.692 [2024-10-14 14:42:46.204495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.693 qpair failed and we were unable to recover it. 00:29:05.693 [2024-10-14 14:42:46.204825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.693 [2024-10-14 14:42:46.204834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.693 qpair failed and we were unable to recover it. 00:29:05.693 [2024-10-14 14:42:46.205147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.693 [2024-10-14 14:42:46.205157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.693 qpair failed and we were unable to recover it. 
00:29:05.693 [2024-10-14 14:42:46.205385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.693 [2024-10-14 14:42:46.205394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.693 qpair failed and we were unable to recover it. 00:29:05.693 [2024-10-14 14:42:46.205703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.693 [2024-10-14 14:42:46.205713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.693 qpair failed and we were unable to recover it. 00:29:05.693 [2024-10-14 14:42:46.205975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.693 [2024-10-14 14:42:46.205985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.693 qpair failed and we were unable to recover it. 00:29:05.693 [2024-10-14 14:42:46.206297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.693 [2024-10-14 14:42:46.206307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.693 qpair failed and we were unable to recover it. 00:29:05.693 [2024-10-14 14:42:46.206586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.693 [2024-10-14 14:42:46.206596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.693 qpair failed and we were unable to recover it. 
00:29:05.693 [2024-10-14 14:42:46.206880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.693 [2024-10-14 14:42:46.206890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.693 qpair failed and we were unable to recover it. 00:29:05.693 [2024-10-14 14:42:46.207085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.693 [2024-10-14 14:42:46.207096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.693 qpair failed and we were unable to recover it. 00:29:05.693 [2024-10-14 14:42:46.207367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.693 [2024-10-14 14:42:46.207376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.693 qpair failed and we were unable to recover it. 00:29:05.693 [2024-10-14 14:42:46.207760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.693 [2024-10-14 14:42:46.207770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.693 qpair failed and we were unable to recover it. 00:29:05.693 [2024-10-14 14:42:46.208074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.693 [2024-10-14 14:42:46.208084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.693 qpair failed and we were unable to recover it. 
00:29:05.693 [2024-10-14 14:42:46.208388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.693 [2024-10-14 14:42:46.208397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.693 qpair failed and we were unable to recover it. 00:29:05.693 [2024-10-14 14:42:46.208703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.693 [2024-10-14 14:42:46.208712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.693 qpair failed and we were unable to recover it. 00:29:05.693 [2024-10-14 14:42:46.209023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.693 [2024-10-14 14:42:46.209033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.693 qpair failed and we were unable to recover it. 00:29:05.693 [2024-10-14 14:42:46.209313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.693 [2024-10-14 14:42:46.209323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.693 qpair failed and we were unable to recover it. 00:29:05.693 [2024-10-14 14:42:46.209637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.693 [2024-10-14 14:42:46.209647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.693 qpair failed and we were unable to recover it. 
00:29:05.693 [2024-10-14 14:42:46.209988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.693 [2024-10-14 14:42:46.209998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.693 qpair failed and we were unable to recover it. 00:29:05.693 [2024-10-14 14:42:46.210355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.693 [2024-10-14 14:42:46.210365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.693 qpair failed and we were unable to recover it. 00:29:05.693 [2024-10-14 14:42:46.210670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.693 [2024-10-14 14:42:46.210679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.693 qpair failed and we were unable to recover it. 00:29:05.693 [2024-10-14 14:42:46.211100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.693 [2024-10-14 14:42:46.211110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.693 qpair failed and we were unable to recover it. 00:29:05.693 [2024-10-14 14:42:46.211425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.693 [2024-10-14 14:42:46.211435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.693 qpair failed and we were unable to recover it. 
00:29:05.693 [2024-10-14 14:42:46.211744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.693 [2024-10-14 14:42:46.211755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.693 qpair failed and we were unable to recover it. 00:29:05.693 [2024-10-14 14:42:46.212080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.693 [2024-10-14 14:42:46.212091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.693 qpair failed and we were unable to recover it. 00:29:05.693 [2024-10-14 14:42:46.212370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.693 [2024-10-14 14:42:46.212380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.693 qpair failed and we were unable to recover it. 00:29:05.693 [2024-10-14 14:42:46.212686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.693 [2024-10-14 14:42:46.212696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.693 qpair failed and we were unable to recover it. 00:29:05.693 [2024-10-14 14:42:46.212866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.693 [2024-10-14 14:42:46.212876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.693 qpair failed and we were unable to recover it. 
00:29:05.693 [2024-10-14 14:42:46.213159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.693 [2024-10-14 14:42:46.213169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.693 qpair failed and we were unable to recover it. 00:29:05.693 [2024-10-14 14:42:46.213501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.693 [2024-10-14 14:42:46.213511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.693 qpair failed and we were unable to recover it. 00:29:05.693 [2024-10-14 14:42:46.213891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.693 [2024-10-14 14:42:46.213900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.693 qpair failed and we were unable to recover it. 00:29:05.693 [2024-10-14 14:42:46.214179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.693 [2024-10-14 14:42:46.214189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.693 qpair failed and we were unable to recover it. 00:29:05.693 [2024-10-14 14:42:46.214517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.693 [2024-10-14 14:42:46.214527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.693 qpair failed and we were unable to recover it. 
00:29:05.693 [2024-10-14 14:42:46.214802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.693 [2024-10-14 14:42:46.214812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.693 qpair failed and we were unable to recover it. 00:29:05.693 [2024-10-14 14:42:46.215151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.693 [2024-10-14 14:42:46.215162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.693 qpair failed and we were unable to recover it. 00:29:05.693 [2024-10-14 14:42:46.215468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.693 [2024-10-14 14:42:46.215478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.693 qpair failed and we were unable to recover it. 00:29:05.693 [2024-10-14 14:42:46.215755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.693 [2024-10-14 14:42:46.215765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.694 qpair failed and we were unable to recover it. 00:29:05.694 [2024-10-14 14:42:46.216081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.694 [2024-10-14 14:42:46.216092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.694 qpair failed and we were unable to recover it. 
00:29:05.694 [2024-10-14 14:42:46.216372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.694 [2024-10-14 14:42:46.216382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.694 qpair failed and we were unable to recover it. 00:29:05.694 [2024-10-14 14:42:46.216673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.694 [2024-10-14 14:42:46.216684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.694 qpair failed and we were unable to recover it. 00:29:05.694 [2024-10-14 14:42:46.216964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.694 [2024-10-14 14:42:46.216974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.694 qpair failed and we were unable to recover it. 00:29:05.694 [2024-10-14 14:42:46.217284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.694 [2024-10-14 14:42:46.217295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.694 qpair failed and we were unable to recover it. 00:29:05.694 [2024-10-14 14:42:46.217608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.694 [2024-10-14 14:42:46.217618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.694 qpair failed and we were unable to recover it. 
00:29:05.694 [2024-10-14 14:42:46.217907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.694 [2024-10-14 14:42:46.217916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.694 qpair failed and we were unable to recover it. 00:29:05.694 [2024-10-14 14:42:46.218231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.694 [2024-10-14 14:42:46.218241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.694 qpair failed and we were unable to recover it. 00:29:05.694 [2024-10-14 14:42:46.218513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.694 [2024-10-14 14:42:46.218523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.694 qpair failed and we were unable to recover it. 00:29:05.694 [2024-10-14 14:42:46.218822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.694 [2024-10-14 14:42:46.218832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.694 qpair failed and we were unable to recover it. 00:29:05.694 [2024-10-14 14:42:46.219178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.694 [2024-10-14 14:42:46.219188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.694 qpair failed and we were unable to recover it. 
00:29:05.694 [2024-10-14 14:42:46.219491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.694 [2024-10-14 14:42:46.219500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.694 qpair failed and we were unable to recover it. 00:29:05.694 [2024-10-14 14:42:46.219780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.694 [2024-10-14 14:42:46.219789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.694 qpair failed and we were unable to recover it. 00:29:05.694 [2024-10-14 14:42:46.220100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.694 [2024-10-14 14:42:46.220110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.694 qpair failed and we were unable to recover it. 00:29:05.694 [2024-10-14 14:42:46.220388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.694 [2024-10-14 14:42:46.220397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.694 qpair failed and we were unable to recover it. 00:29:05.694 [2024-10-14 14:42:46.220691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.694 [2024-10-14 14:42:46.220701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.694 qpair failed and we were unable to recover it. 
00:29:05.694 [2024-10-14 14:42:46.221008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.694 [2024-10-14 14:42:46.221018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.694 qpair failed and we were unable to recover it. 00:29:05.694 [2024-10-14 14:42:46.221327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.694 [2024-10-14 14:42:46.221338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.694 qpair failed and we were unable to recover it. 00:29:05.694 [2024-10-14 14:42:46.221519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.694 [2024-10-14 14:42:46.221530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.694 qpair failed and we were unable to recover it. 00:29:05.694 [2024-10-14 14:42:46.221834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.694 [2024-10-14 14:42:46.221845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.694 qpair failed and we were unable to recover it. 00:29:05.694 [2024-10-14 14:42:46.222153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.694 [2024-10-14 14:42:46.222163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.694 qpair failed and we were unable to recover it. 
00:29:05.694 [2024-10-14 14:42:46.222456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.694 [2024-10-14 14:42:46.222466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.694 qpair failed and we were unable to recover it.
00:29:05.697 [2024-10-14 14:42:46.257246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.697 [2024-10-14 14:42:46.257257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.697 qpair failed and we were unable to recover it. 00:29:05.697 [2024-10-14 14:42:46.257529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.697 [2024-10-14 14:42:46.257539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.697 qpair failed and we were unable to recover it. 00:29:05.697 [2024-10-14 14:42:46.257817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.697 [2024-10-14 14:42:46.257827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.697 qpair failed and we were unable to recover it. 00:29:05.697 [2024-10-14 14:42:46.258137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.697 [2024-10-14 14:42:46.258147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.697 qpair failed and we were unable to recover it. 00:29:05.697 [2024-10-14 14:42:46.258428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.697 [2024-10-14 14:42:46.258438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.697 qpair failed and we were unable to recover it. 
00:29:05.697 [2024-10-14 14:42:46.258722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.697 [2024-10-14 14:42:46.258732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.697 qpair failed and we were unable to recover it. 00:29:05.697 [2024-10-14 14:42:46.259050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.697 [2024-10-14 14:42:46.259061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.697 qpair failed and we were unable to recover it. 00:29:05.697 [2024-10-14 14:42:46.259407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.697 [2024-10-14 14:42:46.259417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.697 qpair failed and we were unable to recover it. 00:29:05.697 [2024-10-14 14:42:46.259651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.697 [2024-10-14 14:42:46.259661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.697 qpair failed and we were unable to recover it. 00:29:05.697 [2024-10-14 14:42:46.259944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.697 [2024-10-14 14:42:46.259954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.697 qpair failed and we were unable to recover it. 
00:29:05.697 [2024-10-14 14:42:46.260261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.697 [2024-10-14 14:42:46.260271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.697 qpair failed and we were unable to recover it. 00:29:05.697 [2024-10-14 14:42:46.260545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.697 [2024-10-14 14:42:46.260555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.697 qpair failed and we were unable to recover it. 00:29:05.697 [2024-10-14 14:42:46.260867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.697 [2024-10-14 14:42:46.260877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.697 qpair failed and we were unable to recover it. 00:29:05.697 [2024-10-14 14:42:46.261190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.697 [2024-10-14 14:42:46.261201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.697 qpair failed and we were unable to recover it. 00:29:05.697 [2024-10-14 14:42:46.261479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.697 [2024-10-14 14:42:46.261490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.697 qpair failed and we were unable to recover it. 
00:29:05.697 [2024-10-14 14:42:46.261776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.697 [2024-10-14 14:42:46.261787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.697 qpair failed and we were unable to recover it. 00:29:05.697 [2024-10-14 14:42:46.262136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.697 [2024-10-14 14:42:46.262146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.697 qpair failed and we were unable to recover it. 00:29:05.697 [2024-10-14 14:42:46.262451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.697 [2024-10-14 14:42:46.262462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.697 qpair failed and we were unable to recover it. 00:29:05.697 [2024-10-14 14:42:46.262623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.697 [2024-10-14 14:42:46.262634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.697 qpair failed and we were unable to recover it. 00:29:05.697 [2024-10-14 14:42:46.262972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.697 [2024-10-14 14:42:46.262982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.697 qpair failed and we were unable to recover it. 
00:29:05.697 [2024-10-14 14:42:46.263164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.697 [2024-10-14 14:42:46.263175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.697 qpair failed and we were unable to recover it. 00:29:05.698 [2024-10-14 14:42:46.263516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.698 [2024-10-14 14:42:46.263526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.698 qpair failed and we were unable to recover it. 00:29:05.698 [2024-10-14 14:42:46.263828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.698 [2024-10-14 14:42:46.263838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.698 qpair failed and we were unable to recover it. 00:29:05.698 [2024-10-14 14:42:46.264173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.698 [2024-10-14 14:42:46.264184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.698 qpair failed and we were unable to recover it. 00:29:05.698 [2024-10-14 14:42:46.264473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.698 [2024-10-14 14:42:46.264485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.698 qpair failed and we were unable to recover it. 
00:29:05.698 [2024-10-14 14:42:46.264691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.698 [2024-10-14 14:42:46.264701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.698 qpair failed and we were unable to recover it. 00:29:05.698 [2024-10-14 14:42:46.264874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.698 [2024-10-14 14:42:46.264889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.698 qpair failed and we were unable to recover it. 00:29:05.698 [2024-10-14 14:42:46.265200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.698 [2024-10-14 14:42:46.265210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.698 qpair failed and we were unable to recover it. 00:29:05.698 [2024-10-14 14:42:46.265511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.698 [2024-10-14 14:42:46.265521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.698 qpair failed and we were unable to recover it. 00:29:05.698 [2024-10-14 14:42:46.265808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.698 [2024-10-14 14:42:46.265818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.698 qpair failed and we were unable to recover it. 
00:29:05.698 [2024-10-14 14:42:46.266094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.698 [2024-10-14 14:42:46.266104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.698 qpair failed and we were unable to recover it. 00:29:05.698 [2024-10-14 14:42:46.266408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.698 [2024-10-14 14:42:46.266418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.698 qpair failed and we were unable to recover it. 00:29:05.698 [2024-10-14 14:42:46.266762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.698 [2024-10-14 14:42:46.266771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.698 qpair failed and we were unable to recover it. 00:29:05.698 [2024-10-14 14:42:46.267076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.698 [2024-10-14 14:42:46.267086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.698 qpair failed and we were unable to recover it. 00:29:05.698 [2024-10-14 14:42:46.267404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.698 [2024-10-14 14:42:46.267415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.698 qpair failed and we were unable to recover it. 
00:29:05.698 [2024-10-14 14:42:46.267698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.698 [2024-10-14 14:42:46.267708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.698 qpair failed and we were unable to recover it. 00:29:05.698 [2024-10-14 14:42:46.267881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.698 [2024-10-14 14:42:46.267892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.698 qpair failed and we were unable to recover it. 00:29:05.698 [2024-10-14 14:42:46.268247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.698 [2024-10-14 14:42:46.268257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.698 qpair failed and we were unable to recover it. 00:29:05.698 [2024-10-14 14:42:46.268459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.698 [2024-10-14 14:42:46.268468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.698 qpair failed and we were unable to recover it. 00:29:05.698 [2024-10-14 14:42:46.268788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.698 [2024-10-14 14:42:46.268798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.698 qpair failed and we were unable to recover it. 
00:29:05.698 [2024-10-14 14:42:46.269079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.698 [2024-10-14 14:42:46.269090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.698 qpair failed and we were unable to recover it. 00:29:05.698 [2024-10-14 14:42:46.269396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.698 [2024-10-14 14:42:46.269406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.698 qpair failed and we were unable to recover it. 00:29:05.698 [2024-10-14 14:42:46.269690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.698 [2024-10-14 14:42:46.269701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.698 qpair failed and we were unable to recover it. 00:29:05.698 [2024-10-14 14:42:46.270011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.698 [2024-10-14 14:42:46.270021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.698 qpair failed and we were unable to recover it. 00:29:05.698 [2024-10-14 14:42:46.270314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.698 [2024-10-14 14:42:46.270325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.698 qpair failed and we were unable to recover it. 
00:29:05.698 [2024-10-14 14:42:46.270634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.698 [2024-10-14 14:42:46.270645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.698 qpair failed and we were unable to recover it. 00:29:05.698 [2024-10-14 14:42:46.271035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.698 [2024-10-14 14:42:46.271046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.698 qpair failed and we were unable to recover it. 00:29:05.698 [2024-10-14 14:42:46.271359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.698 [2024-10-14 14:42:46.271370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.698 qpair failed and we were unable to recover it. 00:29:05.698 [2024-10-14 14:42:46.271646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.698 [2024-10-14 14:42:46.271656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.698 qpair failed and we were unable to recover it. 00:29:05.698 [2024-10-14 14:42:46.271995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.698 [2024-10-14 14:42:46.272006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.698 qpair failed and we were unable to recover it. 
00:29:05.698 [2024-10-14 14:42:46.272313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.698 [2024-10-14 14:42:46.272323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.698 qpair failed and we were unable to recover it. 00:29:05.698 [2024-10-14 14:42:46.272609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.698 [2024-10-14 14:42:46.272619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.698 qpair failed and we were unable to recover it. 00:29:05.698 [2024-10-14 14:42:46.272801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.698 [2024-10-14 14:42:46.272813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.698 qpair failed and we were unable to recover it. 00:29:05.698 [2024-10-14 14:42:46.273171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.698 [2024-10-14 14:42:46.273181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.698 qpair failed and we were unable to recover it. 00:29:05.698 [2024-10-14 14:42:46.273435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.698 [2024-10-14 14:42:46.273445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.698 qpair failed and we were unable to recover it. 
00:29:05.698 [2024-10-14 14:42:46.273742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.698 [2024-10-14 14:42:46.273752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.698 qpair failed and we were unable to recover it. 00:29:05.698 [2024-10-14 14:42:46.274042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.698 [2024-10-14 14:42:46.274052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.698 qpair failed and we were unable to recover it. 00:29:05.698 [2024-10-14 14:42:46.274316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.698 [2024-10-14 14:42:46.274326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.698 qpair failed and we were unable to recover it. 00:29:05.698 [2024-10-14 14:42:46.274639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.698 [2024-10-14 14:42:46.274649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.698 qpair failed and we were unable to recover it. 00:29:05.698 [2024-10-14 14:42:46.274941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.698 [2024-10-14 14:42:46.274951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.698 qpair failed and we were unable to recover it. 
00:29:05.698 [2024-10-14 14:42:46.275272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.698 [2024-10-14 14:42:46.275283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.698 qpair failed and we were unable to recover it. 00:29:05.698 [2024-10-14 14:42:46.275476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.698 [2024-10-14 14:42:46.275486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.698 qpair failed and we were unable to recover it. 00:29:05.699 [2024-10-14 14:42:46.275834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.699 [2024-10-14 14:42:46.275844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.699 qpair failed and we were unable to recover it. 00:29:05.699 [2024-10-14 14:42:46.276154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.699 [2024-10-14 14:42:46.276165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.699 qpair failed and we were unable to recover it. 00:29:05.699 [2024-10-14 14:42:46.276462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.699 [2024-10-14 14:42:46.276472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.699 qpair failed and we were unable to recover it. 
00:29:05.699 [2024-10-14 14:42:46.276749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.699 [2024-10-14 14:42:46.276760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.699 qpair failed and we were unable to recover it. 00:29:05.699 [2024-10-14 14:42:46.277073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.699 [2024-10-14 14:42:46.277084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.699 qpair failed and we were unable to recover it. 00:29:05.699 [2024-10-14 14:42:46.277391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.699 [2024-10-14 14:42:46.277403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.699 qpair failed and we were unable to recover it. 00:29:05.699 [2024-10-14 14:42:46.277780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.699 [2024-10-14 14:42:46.277791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.699 qpair failed and we were unable to recover it. 00:29:05.699 [2024-10-14 14:42:46.278110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.699 [2024-10-14 14:42:46.278121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.699 qpair failed and we were unable to recover it. 
00:29:05.699 [2024-10-14 14:42:46.278429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.699 [2024-10-14 14:42:46.278439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.699 qpair failed and we were unable to recover it. 00:29:05.699 [2024-10-14 14:42:46.278728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.699 [2024-10-14 14:42:46.278738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.699 qpair failed and we were unable to recover it. 00:29:05.699 [2024-10-14 14:42:46.279047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.699 [2024-10-14 14:42:46.279056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.699 qpair failed and we were unable to recover it. 00:29:05.699 [2024-10-14 14:42:46.279339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.699 [2024-10-14 14:42:46.279349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.699 qpair failed and we were unable to recover it. 00:29:05.699 [2024-10-14 14:42:46.279681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.699 [2024-10-14 14:42:46.279691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.699 qpair failed and we were unable to recover it. 
00:29:05.699 [2024-10-14 14:42:46.279999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.699 [2024-10-14 14:42:46.280010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.699 qpair failed and we were unable to recover it.
[... the identical three-line sequence — connect() failed, errno = 111 / sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." — repeats over 100 more times, timestamps 14:42:46.280 through 14:42:46.315 ...]
00:29:05.702 [2024-10-14 14:42:46.315461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.702 [2024-10-14 14:42:46.315471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.702 qpair failed and we were unable to recover it. 00:29:05.702 [2024-10-14 14:42:46.315809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.702 [2024-10-14 14:42:46.315819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.702 qpair failed and we were unable to recover it. 00:29:05.702 [2024-10-14 14:42:46.316125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.702 [2024-10-14 14:42:46.316137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.702 qpair failed and we were unable to recover it. 00:29:05.702 [2024-10-14 14:42:46.316452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.702 [2024-10-14 14:42:46.316463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.702 qpair failed and we were unable to recover it. 00:29:05.702 [2024-10-14 14:42:46.316755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.702 [2024-10-14 14:42:46.316765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.702 qpair failed and we were unable to recover it. 
00:29:05.702 [2024-10-14 14:42:46.317072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.702 [2024-10-14 14:42:46.317082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.702 qpair failed and we were unable to recover it. 00:29:05.702 [2024-10-14 14:42:46.317399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.702 [2024-10-14 14:42:46.317409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.702 qpair failed and we were unable to recover it. 00:29:05.702 [2024-10-14 14:42:46.317690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.702 [2024-10-14 14:42:46.317699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.702 qpair failed and we were unable to recover it. 00:29:05.702 [2024-10-14 14:42:46.318011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.702 [2024-10-14 14:42:46.318021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.702 qpair failed and we were unable to recover it. 00:29:05.702 [2024-10-14 14:42:46.318338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.702 [2024-10-14 14:42:46.318349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.702 qpair failed and we were unable to recover it. 
00:29:05.702 [2024-10-14 14:42:46.318627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.702 [2024-10-14 14:42:46.318637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.702 qpair failed and we were unable to recover it. 00:29:05.702 [2024-10-14 14:42:46.318955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.702 [2024-10-14 14:42:46.318965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.702 qpair failed and we were unable to recover it. 00:29:05.702 [2024-10-14 14:42:46.319268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.702 [2024-10-14 14:42:46.319285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.702 qpair failed and we were unable to recover it. 00:29:05.702 [2024-10-14 14:42:46.319627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.702 [2024-10-14 14:42:46.319639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.702 qpair failed and we were unable to recover it. 00:29:05.702 [2024-10-14 14:42:46.319956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.702 [2024-10-14 14:42:46.319966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.702 qpair failed and we were unable to recover it. 
00:29:05.702 [2024-10-14 14:42:46.320298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.702 [2024-10-14 14:42:46.320308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.702 qpair failed and we were unable to recover it. 00:29:05.702 [2024-10-14 14:42:46.320676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.702 [2024-10-14 14:42:46.320687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.702 qpair failed and we were unable to recover it. 00:29:05.702 [2024-10-14 14:42:46.321001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.702 [2024-10-14 14:42:46.321011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.702 qpair failed and we were unable to recover it. 00:29:05.702 [2024-10-14 14:42:46.321329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.702 [2024-10-14 14:42:46.321339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.702 qpair failed and we were unable to recover it. 00:29:05.702 [2024-10-14 14:42:46.321614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.702 [2024-10-14 14:42:46.321623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.702 qpair failed and we were unable to recover it. 
00:29:05.702 [2024-10-14 14:42:46.321930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.702 [2024-10-14 14:42:46.321940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.702 qpair failed and we were unable to recover it. 00:29:05.702 [2024-10-14 14:42:46.322236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.702 [2024-10-14 14:42:46.322246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.702 qpair failed and we were unable to recover it. 00:29:05.702 [2024-10-14 14:42:46.322467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.702 [2024-10-14 14:42:46.322476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.702 qpair failed and we were unable to recover it. 00:29:05.702 [2024-10-14 14:42:46.322778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.702 [2024-10-14 14:42:46.322788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.702 qpair failed and we were unable to recover it. 00:29:05.702 [2024-10-14 14:42:46.323100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.702 [2024-10-14 14:42:46.323110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.702 qpair failed and we were unable to recover it. 
00:29:05.702 [2024-10-14 14:42:46.323432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.702 [2024-10-14 14:42:46.323441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.702 qpair failed and we were unable to recover it. 00:29:05.702 [2024-10-14 14:42:46.323644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.702 [2024-10-14 14:42:46.323653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.702 qpair failed and we were unable to recover it. 00:29:05.702 [2024-10-14 14:42:46.323865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.702 [2024-10-14 14:42:46.323875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.702 qpair failed and we were unable to recover it. 00:29:05.702 [2024-10-14 14:42:46.324166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.702 [2024-10-14 14:42:46.324176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.703 qpair failed and we were unable to recover it. 00:29:05.703 [2024-10-14 14:42:46.324380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.703 [2024-10-14 14:42:46.324389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.703 qpair failed and we were unable to recover it. 
00:29:05.703 [2024-10-14 14:42:46.324666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.703 [2024-10-14 14:42:46.324675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.703 qpair failed and we were unable to recover it. 00:29:05.703 [2024-10-14 14:42:46.324986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.703 [2024-10-14 14:42:46.324995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.703 qpair failed and we were unable to recover it. 00:29:05.703 [2024-10-14 14:42:46.325303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.703 [2024-10-14 14:42:46.325313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.703 qpair failed and we were unable to recover it. 00:29:05.703 [2024-10-14 14:42:46.325622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.703 [2024-10-14 14:42:46.325632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.703 qpair failed and we were unable to recover it. 00:29:05.703 [2024-10-14 14:42:46.325932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.703 [2024-10-14 14:42:46.325942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.703 qpair failed and we were unable to recover it. 
00:29:05.703 [2024-10-14 14:42:46.326263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.703 [2024-10-14 14:42:46.326274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.703 qpair failed and we were unable to recover it. 00:29:05.703 [2024-10-14 14:42:46.326579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.703 [2024-10-14 14:42:46.326590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.703 qpair failed and we were unable to recover it. 00:29:05.703 [2024-10-14 14:42:46.326869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.703 [2024-10-14 14:42:46.326878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.703 qpair failed and we were unable to recover it. 00:29:05.703 [2024-10-14 14:42:46.327222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.703 [2024-10-14 14:42:46.327232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.703 qpair failed and we were unable to recover it. 00:29:05.703 [2024-10-14 14:42:46.327539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.703 [2024-10-14 14:42:46.327548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.703 qpair failed and we were unable to recover it. 
00:29:05.703 [2024-10-14 14:42:46.327826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.703 [2024-10-14 14:42:46.327836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.703 qpair failed and we were unable to recover it. 00:29:05.703 [2024-10-14 14:42:46.328146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.703 [2024-10-14 14:42:46.328157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.703 qpair failed and we were unable to recover it. 00:29:05.703 [2024-10-14 14:42:46.328502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.703 [2024-10-14 14:42:46.328512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.703 qpair failed and we were unable to recover it. 00:29:05.703 [2024-10-14 14:42:46.328814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.703 [2024-10-14 14:42:46.328823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.703 qpair failed and we were unable to recover it. 00:29:05.703 [2024-10-14 14:42:46.329147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.703 [2024-10-14 14:42:46.329157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.703 qpair failed and we were unable to recover it. 
00:29:05.703 [2024-10-14 14:42:46.329473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.703 [2024-10-14 14:42:46.329483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.703 qpair failed and we were unable to recover it. 00:29:05.703 [2024-10-14 14:42:46.329772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.703 [2024-10-14 14:42:46.329788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.703 qpair failed and we were unable to recover it. 00:29:05.703 [2024-10-14 14:42:46.330101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.703 [2024-10-14 14:42:46.330111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.703 qpair failed and we were unable to recover it. 00:29:05.703 [2024-10-14 14:42:46.330423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.703 [2024-10-14 14:42:46.330433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.703 qpair failed and we were unable to recover it. 00:29:05.703 [2024-10-14 14:42:46.330760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.703 [2024-10-14 14:42:46.330770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.703 qpair failed and we were unable to recover it. 
00:29:05.703 [2024-10-14 14:42:46.331057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.703 [2024-10-14 14:42:46.331070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.703 qpair failed and we were unable to recover it. 00:29:05.703 [2024-10-14 14:42:46.331424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.703 [2024-10-14 14:42:46.331434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.703 qpair failed and we were unable to recover it. 00:29:05.703 [2024-10-14 14:42:46.331718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.703 [2024-10-14 14:42:46.331736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.703 qpair failed and we were unable to recover it. 00:29:05.703 [2024-10-14 14:42:46.332075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.703 [2024-10-14 14:42:46.332086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.703 qpair failed and we were unable to recover it. 00:29:05.703 [2024-10-14 14:42:46.332246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.703 [2024-10-14 14:42:46.332258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.703 qpair failed and we were unable to recover it. 
00:29:05.703 [2024-10-14 14:42:46.332561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.703 [2024-10-14 14:42:46.332571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.703 qpair failed and we were unable to recover it. 00:29:05.703 [2024-10-14 14:42:46.332894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.703 [2024-10-14 14:42:46.332904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.703 qpair failed and we were unable to recover it. 00:29:05.703 [2024-10-14 14:42:46.333289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.703 [2024-10-14 14:42:46.333299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.703 qpair failed and we were unable to recover it. 00:29:05.703 [2024-10-14 14:42:46.333582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.703 [2024-10-14 14:42:46.333592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.703 qpair failed and we were unable to recover it. 00:29:05.703 [2024-10-14 14:42:46.333905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.703 [2024-10-14 14:42:46.333915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.703 qpair failed and we were unable to recover it. 
00:29:05.703 [2024-10-14 14:42:46.334187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.703 [2024-10-14 14:42:46.334197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.703 qpair failed and we were unable to recover it. 00:29:05.703 [2024-10-14 14:42:46.334496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.703 [2024-10-14 14:42:46.334506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.703 qpair failed and we were unable to recover it. 00:29:05.703 [2024-10-14 14:42:46.334832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.703 [2024-10-14 14:42:46.334842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.703 qpair failed and we were unable to recover it. 00:29:05.703 [2024-10-14 14:42:46.335149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.703 [2024-10-14 14:42:46.335159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.703 qpair failed and we were unable to recover it. 00:29:05.703 [2024-10-14 14:42:46.335447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.703 [2024-10-14 14:42:46.335464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.703 qpair failed and we were unable to recover it. 
00:29:05.703 [2024-10-14 14:42:46.335765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.703 [2024-10-14 14:42:46.335775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.703 qpair failed and we were unable to recover it. 00:29:05.703 [2024-10-14 14:42:46.336090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.703 [2024-10-14 14:42:46.336100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.703 qpair failed and we were unable to recover it. 00:29:05.703 [2024-10-14 14:42:46.336406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.703 [2024-10-14 14:42:46.336415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.703 qpair failed and we were unable to recover it. 00:29:05.703 [2024-10-14 14:42:46.336728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.703 [2024-10-14 14:42:46.336738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.703 qpair failed and we were unable to recover it. 00:29:05.703 [2024-10-14 14:42:46.337050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.703 [2024-10-14 14:42:46.337061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.704 qpair failed and we were unable to recover it. 
00:29:05.704 [2024-10-14 14:42:46.337398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.704 [2024-10-14 14:42:46.337408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.704 qpair failed and we were unable to recover it. 00:29:05.704 [2024-10-14 14:42:46.337711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.704 [2024-10-14 14:42:46.337721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.704 qpair failed and we were unable to recover it. 00:29:05.704 [2024-10-14 14:42:46.337989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.704 [2024-10-14 14:42:46.337999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.704 qpair failed and we were unable to recover it. 00:29:05.704 [2024-10-14 14:42:46.338281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.704 [2024-10-14 14:42:46.338292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.704 qpair failed and we were unable to recover it. 00:29:05.704 [2024-10-14 14:42:46.338604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.704 [2024-10-14 14:42:46.338614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.704 qpair failed and we were unable to recover it. 
00:29:05.704 [2024-10-14 14:42:46.338952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.704 [2024-10-14 14:42:46.338961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.704 qpair failed and we were unable to recover it. 00:29:05.704 [2024-10-14 14:42:46.339188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.704 [2024-10-14 14:42:46.339197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.704 qpair failed and we were unable to recover it. 00:29:05.704 [2024-10-14 14:42:46.339357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.704 [2024-10-14 14:42:46.339367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.704 qpair failed and we were unable to recover it. 00:29:05.704 [2024-10-14 14:42:46.339684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.704 [2024-10-14 14:42:46.339694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.704 qpair failed and we were unable to recover it. 00:29:05.704 [2024-10-14 14:42:46.340044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.704 [2024-10-14 14:42:46.340055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.704 qpair failed and we were unable to recover it. 
00:29:05.707 [2024-10-14 14:42:46.373312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.707 [2024-10-14 14:42:46.373325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.707 qpair failed and we were unable to recover it. 00:29:05.707 [2024-10-14 14:42:46.373662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.707 [2024-10-14 14:42:46.373672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.707 qpair failed and we were unable to recover it. 00:29:05.707 [2024-10-14 14:42:46.373979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.707 [2024-10-14 14:42:46.373990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.707 qpair failed and we were unable to recover it. 00:29:05.707 [2024-10-14 14:42:46.374293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.707 [2024-10-14 14:42:46.374304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.707 qpair failed and we were unable to recover it. 00:29:05.707 [2024-10-14 14:42:46.374633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.707 [2024-10-14 14:42:46.374643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.707 qpair failed and we were unable to recover it. 
00:29:05.707 [2024-10-14 14:42:46.374952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.707 [2024-10-14 14:42:46.374962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.707 qpair failed and we were unable to recover it. 00:29:05.707 [2024-10-14 14:42:46.375145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.707 [2024-10-14 14:42:46.375156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.707 qpair failed and we were unable to recover it. 00:29:05.707 [2024-10-14 14:42:46.375435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.707 [2024-10-14 14:42:46.375445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.707 qpair failed and we were unable to recover it. 00:29:05.707 [2024-10-14 14:42:46.375725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.707 [2024-10-14 14:42:46.375735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.707 qpair failed and we were unable to recover it. 00:29:05.707 [2024-10-14 14:42:46.376048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.707 [2024-10-14 14:42:46.376058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.707 qpair failed and we were unable to recover it. 
00:29:05.707 [2024-10-14 14:42:46.376370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.707 [2024-10-14 14:42:46.376380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.707 qpair failed and we were unable to recover it. 00:29:05.707 [2024-10-14 14:42:46.376573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.707 [2024-10-14 14:42:46.376583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.707 qpair failed and we were unable to recover it. 00:29:05.707 [2024-10-14 14:42:46.376895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.707 [2024-10-14 14:42:46.376904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.707 qpair failed and we were unable to recover it. 00:29:05.707 [2024-10-14 14:42:46.377121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.707 [2024-10-14 14:42:46.377132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.707 qpair failed and we were unable to recover it. 00:29:05.707 [2024-10-14 14:42:46.377403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.707 [2024-10-14 14:42:46.377412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.707 qpair failed and we were unable to recover it. 
00:29:05.707 [2024-10-14 14:42:46.377711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.707 [2024-10-14 14:42:46.377721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.707 qpair failed and we were unable to recover it. 00:29:05.707 [2024-10-14 14:42:46.378081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.707 [2024-10-14 14:42:46.378092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.707 qpair failed and we were unable to recover it. 00:29:05.707 [2024-10-14 14:42:46.378406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.707 [2024-10-14 14:42:46.378415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.707 qpair failed and we were unable to recover it. 00:29:05.707 [2024-10-14 14:42:46.378598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.707 [2024-10-14 14:42:46.378607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.707 qpair failed and we were unable to recover it. 00:29:05.707 [2024-10-14 14:42:46.378896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.707 [2024-10-14 14:42:46.378905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.707 qpair failed and we were unable to recover it. 
00:29:05.707 [2024-10-14 14:42:46.379230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.707 [2024-10-14 14:42:46.379240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.707 qpair failed and we were unable to recover it. 00:29:05.707 [2024-10-14 14:42:46.379577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.707 [2024-10-14 14:42:46.379587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.707 qpair failed and we were unable to recover it. 00:29:05.707 [2024-10-14 14:42:46.379868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.707 [2024-10-14 14:42:46.379878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.707 qpair failed and we were unable to recover it. 00:29:05.707 [2024-10-14 14:42:46.380138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.707 [2024-10-14 14:42:46.380148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.707 qpair failed and we were unable to recover it. 00:29:05.707 [2024-10-14 14:42:46.380442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.707 [2024-10-14 14:42:46.380451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.707 qpair failed and we were unable to recover it. 
00:29:05.707 [2024-10-14 14:42:46.380771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.707 [2024-10-14 14:42:46.380781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.707 qpair failed and we were unable to recover it. 00:29:05.707 [2024-10-14 14:42:46.380973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.707 [2024-10-14 14:42:46.380983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.707 qpair failed and we were unable to recover it. 00:29:05.707 [2024-10-14 14:42:46.381378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.707 [2024-10-14 14:42:46.381390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.707 qpair failed and we were unable to recover it. 00:29:05.707 [2024-10-14 14:42:46.381716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.707 [2024-10-14 14:42:46.381726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.707 qpair failed and we were unable to recover it. 00:29:05.707 [2024-10-14 14:42:46.382052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.707 [2024-10-14 14:42:46.382066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.707 qpair failed and we were unable to recover it. 
00:29:05.707 [2024-10-14 14:42:46.382374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.707 [2024-10-14 14:42:46.382383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.707 qpair failed and we were unable to recover it. 00:29:05.707 [2024-10-14 14:42:46.382669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.707 [2024-10-14 14:42:46.382686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.707 qpair failed and we were unable to recover it. 00:29:05.707 [2024-10-14 14:42:46.383017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.707 [2024-10-14 14:42:46.383027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.707 qpair failed and we were unable to recover it. 00:29:05.707 [2024-10-14 14:42:46.383325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.707 [2024-10-14 14:42:46.383336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.707 qpair failed and we were unable to recover it. 00:29:05.707 [2024-10-14 14:42:46.383619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.707 [2024-10-14 14:42:46.383629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.707 qpair failed and we were unable to recover it. 
00:29:05.707 [2024-10-14 14:42:46.383787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.707 [2024-10-14 14:42:46.383797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.707 qpair failed and we were unable to recover it. 00:29:05.707 [2024-10-14 14:42:46.384103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.707 [2024-10-14 14:42:46.384114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.707 qpair failed and we were unable to recover it. 00:29:05.707 [2024-10-14 14:42:46.384441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.707 [2024-10-14 14:42:46.384451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.707 qpair failed and we were unable to recover it. 00:29:05.708 [2024-10-14 14:42:46.384649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.708 [2024-10-14 14:42:46.384659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.708 qpair failed and we were unable to recover it. 00:29:05.708 [2024-10-14 14:42:46.384852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.708 [2024-10-14 14:42:46.384861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.708 qpair failed and we were unable to recover it. 
00:29:05.708 [2024-10-14 14:42:46.385171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.708 [2024-10-14 14:42:46.385181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.708 qpair failed and we were unable to recover it. 00:29:05.708 [2024-10-14 14:42:46.385483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.708 [2024-10-14 14:42:46.385493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.708 qpair failed and we were unable to recover it. 00:29:05.708 [2024-10-14 14:42:46.385811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.708 [2024-10-14 14:42:46.385821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.708 qpair failed and we were unable to recover it. 00:29:05.708 [2024-10-14 14:42:46.386109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.708 [2024-10-14 14:42:46.386119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.708 qpair failed and we were unable to recover it. 00:29:05.708 [2024-10-14 14:42:46.386433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.708 [2024-10-14 14:42:46.386442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.708 qpair failed and we were unable to recover it. 
00:29:05.708 [2024-10-14 14:42:46.386752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.708 [2024-10-14 14:42:46.386761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.708 qpair failed and we were unable to recover it. 00:29:05.708 [2024-10-14 14:42:46.387044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.708 [2024-10-14 14:42:46.387054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.708 qpair failed and we were unable to recover it. 00:29:05.708 [2024-10-14 14:42:46.387350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.708 [2024-10-14 14:42:46.387367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.708 qpair failed and we were unable to recover it. 00:29:05.708 [2024-10-14 14:42:46.387679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.708 [2024-10-14 14:42:46.387688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.708 qpair failed and we were unable to recover it. 00:29:05.708 [2024-10-14 14:42:46.388061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.708 [2024-10-14 14:42:46.388074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.708 qpair failed and we were unable to recover it. 
00:29:05.708 [2024-10-14 14:42:46.388304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.708 [2024-10-14 14:42:46.388313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.708 qpair failed and we were unable to recover it. 00:29:05.708 [2024-10-14 14:42:46.388632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.708 [2024-10-14 14:42:46.388642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.708 qpair failed and we were unable to recover it. 00:29:05.708 [2024-10-14 14:42:46.388994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.708 [2024-10-14 14:42:46.389004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.708 qpair failed and we were unable to recover it. 00:29:05.708 [2024-10-14 14:42:46.389317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.708 [2024-10-14 14:42:46.389328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.708 qpair failed and we were unable to recover it. 00:29:05.708 [2024-10-14 14:42:46.389640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.708 [2024-10-14 14:42:46.389650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.708 qpair failed and we were unable to recover it. 
00:29:05.708 [2024-10-14 14:42:46.389868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.708 [2024-10-14 14:42:46.389878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.708 qpair failed and we were unable to recover it. 00:29:05.708 [2024-10-14 14:42:46.390215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.708 [2024-10-14 14:42:46.390225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.708 qpair failed and we were unable to recover it. 00:29:05.708 [2024-10-14 14:42:46.390531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.708 [2024-10-14 14:42:46.390541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.708 qpair failed and we were unable to recover it. 00:29:05.708 [2024-10-14 14:42:46.390824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.708 [2024-10-14 14:42:46.390841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.708 qpair failed and we were unable to recover it. 00:29:05.708 [2024-10-14 14:42:46.391158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.708 [2024-10-14 14:42:46.391168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.708 qpair failed and we were unable to recover it. 
00:29:05.708 [2024-10-14 14:42:46.391451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.708 [2024-10-14 14:42:46.391461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.708 qpair failed and we were unable to recover it. 00:29:05.708 [2024-10-14 14:42:46.391739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.708 [2024-10-14 14:42:46.391749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.708 qpair failed and we were unable to recover it. 00:29:05.708 [2024-10-14 14:42:46.392055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.708 [2024-10-14 14:42:46.392068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.708 qpair failed and we were unable to recover it. 00:29:05.708 [2024-10-14 14:42:46.392408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.708 [2024-10-14 14:42:46.392418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.708 qpair failed and we were unable to recover it. 00:29:05.708 [2024-10-14 14:42:46.392713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.708 [2024-10-14 14:42:46.392724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.708 qpair failed and we were unable to recover it. 
00:29:05.708 [2024-10-14 14:42:46.393031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.708 [2024-10-14 14:42:46.393040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.708 qpair failed and we were unable to recover it. 00:29:05.708 [2024-10-14 14:42:46.393349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.708 [2024-10-14 14:42:46.393359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.708 qpair failed and we were unable to recover it. 00:29:05.708 [2024-10-14 14:42:46.393691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.708 [2024-10-14 14:42:46.393701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.708 qpair failed and we were unable to recover it. 00:29:05.708 [2024-10-14 14:42:46.393980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.708 [2024-10-14 14:42:46.393992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.708 qpair failed and we were unable to recover it. 00:29:05.708 [2024-10-14 14:42:46.394301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.708 [2024-10-14 14:42:46.394311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.708 qpair failed and we were unable to recover it. 
00:29:05.708 [2024-10-14 14:42:46.394594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.708 [2024-10-14 14:42:46.394604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.708 qpair failed and we were unable to recover it. 00:29:05.708 [2024-10-14 14:42:46.394909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.708 [2024-10-14 14:42:46.394919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.708 qpair failed and we were unable to recover it. 00:29:05.708 [2024-10-14 14:42:46.395232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.708 [2024-10-14 14:42:46.395242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.708 qpair failed and we were unable to recover it. 00:29:05.708 [2024-10-14 14:42:46.395526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.708 [2024-10-14 14:42:46.395536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.708 qpair failed and we were unable to recover it. 00:29:05.708 [2024-10-14 14:42:46.395842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.708 [2024-10-14 14:42:46.395853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.708 qpair failed and we were unable to recover it. 
00:29:05.708 [2024-10-14 14:42:46.396157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.708 [2024-10-14 14:42:46.396167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:05.708 qpair failed and we were unable to recover it.
[... identical connect() failure records (errno = 111, ECONNREFUSED) against tqpair=0x8de550 with addr=10.0.0.2, port=4420 repeat continuously from 14:42:46.396 through 14:42:46.430; duplicate entries elided ...]
00:29:05.987 [2024-10-14 14:42:46.430528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.987 [2024-10-14 14:42:46.430538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.987 qpair failed and we were unable to recover it. 00:29:05.987 [2024-10-14 14:42:46.430837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.987 [2024-10-14 14:42:46.430848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.987 qpair failed and we were unable to recover it. 00:29:05.987 [2024-10-14 14:42:46.431124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.987 [2024-10-14 14:42:46.431143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.987 qpair failed and we were unable to recover it. 00:29:05.987 [2024-10-14 14:42:46.431546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.987 [2024-10-14 14:42:46.431556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.987 qpair failed and we were unable to recover it. 00:29:05.987 [2024-10-14 14:42:46.431865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.987 [2024-10-14 14:42:46.431874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.987 qpair failed and we were unable to recover it. 
00:29:05.987 [2024-10-14 14:42:46.432190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.987 [2024-10-14 14:42:46.432200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.987 qpair failed and we were unable to recover it. 00:29:05.987 [2024-10-14 14:42:46.432528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.987 [2024-10-14 14:42:46.432538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.987 qpair failed and we were unable to recover it. 00:29:05.987 [2024-10-14 14:42:46.432848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.987 [2024-10-14 14:42:46.432858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.987 qpair failed and we were unable to recover it. 00:29:05.987 [2024-10-14 14:42:46.433111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.987 [2024-10-14 14:42:46.433121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.987 qpair failed and we were unable to recover it. 00:29:05.987 [2024-10-14 14:42:46.433408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.987 [2024-10-14 14:42:46.433418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.987 qpair failed and we were unable to recover it. 
00:29:05.987 [2024-10-14 14:42:46.433738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.987 [2024-10-14 14:42:46.433748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.987 qpair failed and we were unable to recover it. 00:29:05.987 [2024-10-14 14:42:46.434054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.987 [2024-10-14 14:42:46.434066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.987 qpair failed and we were unable to recover it. 00:29:05.987 [2024-10-14 14:42:46.434371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.987 [2024-10-14 14:42:46.434380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.987 qpair failed and we were unable to recover it. 00:29:05.987 [2024-10-14 14:42:46.434690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.987 [2024-10-14 14:42:46.434699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.987 qpair failed and we were unable to recover it. 00:29:05.987 [2024-10-14 14:42:46.434999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.987 [2024-10-14 14:42:46.435011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.987 qpair failed and we were unable to recover it. 
00:29:05.987 [2024-10-14 14:42:46.435396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.987 [2024-10-14 14:42:46.435407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.987 qpair failed and we were unable to recover it. 00:29:05.987 [2024-10-14 14:42:46.435712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.987 [2024-10-14 14:42:46.435721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.987 qpair failed and we were unable to recover it. 00:29:05.987 [2024-10-14 14:42:46.436029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.987 [2024-10-14 14:42:46.436038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.987 qpair failed and we were unable to recover it. 00:29:05.987 [2024-10-14 14:42:46.436332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.987 [2024-10-14 14:42:46.436342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.987 qpair failed and we were unable to recover it. 00:29:05.988 [2024-10-14 14:42:46.436647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.988 [2024-10-14 14:42:46.436656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.988 qpair failed and we were unable to recover it. 
00:29:05.988 [2024-10-14 14:42:46.436836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.988 [2024-10-14 14:42:46.436847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.988 qpair failed and we were unable to recover it. 00:29:05.988 [2024-10-14 14:42:46.437181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.988 [2024-10-14 14:42:46.437191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.988 qpair failed and we were unable to recover it. 00:29:05.988 [2024-10-14 14:42:46.437461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.988 [2024-10-14 14:42:46.437470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.988 qpair failed and we were unable to recover it. 00:29:05.988 [2024-10-14 14:42:46.437782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.988 [2024-10-14 14:42:46.437791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.988 qpair failed and we were unable to recover it. 00:29:05.988 [2024-10-14 14:42:46.438075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.988 [2024-10-14 14:42:46.438085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.988 qpair failed and we were unable to recover it. 
00:29:05.988 [2024-10-14 14:42:46.438396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.988 [2024-10-14 14:42:46.438405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.988 qpair failed and we were unable to recover it. 00:29:05.988 [2024-10-14 14:42:46.438711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.988 [2024-10-14 14:42:46.438721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.988 qpair failed and we were unable to recover it. 00:29:05.988 [2024-10-14 14:42:46.439003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.988 [2024-10-14 14:42:46.439014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.988 qpair failed and we were unable to recover it. 00:29:05.988 [2024-10-14 14:42:46.439293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.988 [2024-10-14 14:42:46.439303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.988 qpair failed and we were unable to recover it. 00:29:05.988 [2024-10-14 14:42:46.439607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.988 [2024-10-14 14:42:46.439617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.988 qpair failed and we were unable to recover it. 
00:29:05.988 [2024-10-14 14:42:46.439949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.988 [2024-10-14 14:42:46.439960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.988 qpair failed and we were unable to recover it. 00:29:05.988 [2024-10-14 14:42:46.440157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.988 [2024-10-14 14:42:46.440167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.988 qpair failed and we were unable to recover it. 00:29:05.988 [2024-10-14 14:42:46.440361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.988 [2024-10-14 14:42:46.440371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.988 qpair failed and we were unable to recover it. 00:29:05.988 [2024-10-14 14:42:46.440561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.988 [2024-10-14 14:42:46.440571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.988 qpair failed and we were unable to recover it. 00:29:05.988 [2024-10-14 14:42:46.440906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.988 [2024-10-14 14:42:46.440917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.988 qpair failed and we were unable to recover it. 
00:29:05.988 [2024-10-14 14:42:46.441222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.988 [2024-10-14 14:42:46.441232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.988 qpair failed and we were unable to recover it. 00:29:05.988 [2024-10-14 14:42:46.441419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.988 [2024-10-14 14:42:46.441429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.988 qpair failed and we were unable to recover it. 00:29:05.988 [2024-10-14 14:42:46.441723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.988 [2024-10-14 14:42:46.441733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.988 qpair failed and we were unable to recover it. 00:29:05.988 [2024-10-14 14:42:46.442033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.988 [2024-10-14 14:42:46.442043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.988 qpair failed and we were unable to recover it. 00:29:05.988 [2024-10-14 14:42:46.442326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.988 [2024-10-14 14:42:46.442342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.988 qpair failed and we were unable to recover it. 
00:29:05.988 [2024-10-14 14:42:46.442641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.988 [2024-10-14 14:42:46.442650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.988 qpair failed and we were unable to recover it. 00:29:05.988 [2024-10-14 14:42:46.442960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.988 [2024-10-14 14:42:46.442969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.988 qpair failed and we were unable to recover it. 00:29:05.988 [2024-10-14 14:42:46.443210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.988 [2024-10-14 14:42:46.443220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.988 qpair failed and we were unable to recover it. 00:29:05.988 [2024-10-14 14:42:46.443535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.988 [2024-10-14 14:42:46.443544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.988 qpair failed and we were unable to recover it. 00:29:05.988 [2024-10-14 14:42:46.443850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.988 [2024-10-14 14:42:46.443859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.988 qpair failed and we were unable to recover it. 
00:29:05.988 [2024-10-14 14:42:46.444152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.988 [2024-10-14 14:42:46.444161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.988 qpair failed and we were unable to recover it. 00:29:05.988 [2024-10-14 14:42:46.444467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.988 [2024-10-14 14:42:46.444477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.988 qpair failed and we were unable to recover it. 00:29:05.988 [2024-10-14 14:42:46.444669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.988 [2024-10-14 14:42:46.444680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.988 qpair failed and we were unable to recover it. 00:29:05.988 [2024-10-14 14:42:46.444883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.988 [2024-10-14 14:42:46.444893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.988 qpair failed and we were unable to recover it. 00:29:05.988 [2024-10-14 14:42:46.445219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.988 [2024-10-14 14:42:46.445229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.988 qpair failed and we were unable to recover it. 
00:29:05.988 [2024-10-14 14:42:46.445534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.988 [2024-10-14 14:42:46.445544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.988 qpair failed and we were unable to recover it. 00:29:05.988 [2024-10-14 14:42:46.445828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.988 [2024-10-14 14:42:46.445837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.988 qpair failed and we were unable to recover it. 00:29:05.988 [2024-10-14 14:42:46.446147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.988 [2024-10-14 14:42:46.446157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.988 qpair failed and we were unable to recover it. 00:29:05.988 [2024-10-14 14:42:46.446460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.988 [2024-10-14 14:42:46.446469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.988 qpair failed and we were unable to recover it. 00:29:05.988 [2024-10-14 14:42:46.446748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.988 [2024-10-14 14:42:46.446758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.988 qpair failed and we were unable to recover it. 
00:29:05.988 [2024-10-14 14:42:46.447069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.988 [2024-10-14 14:42:46.447081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.988 qpair failed and we were unable to recover it. 00:29:05.988 [2024-10-14 14:42:46.447388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.988 [2024-10-14 14:42:46.447398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.988 qpair failed and we were unable to recover it. 00:29:05.988 [2024-10-14 14:42:46.447683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.988 [2024-10-14 14:42:46.447693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.988 qpair failed and we were unable to recover it. 00:29:05.988 [2024-10-14 14:42:46.448017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.988 [2024-10-14 14:42:46.448027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.988 qpair failed and we were unable to recover it. 00:29:05.988 [2024-10-14 14:42:46.448314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.988 [2024-10-14 14:42:46.448331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.989 qpair failed and we were unable to recover it. 
00:29:05.989 [2024-10-14 14:42:46.448653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.989 [2024-10-14 14:42:46.448663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.989 qpair failed and we were unable to recover it. 00:29:05.989 [2024-10-14 14:42:46.448939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.989 [2024-10-14 14:42:46.448948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.989 qpair failed and we were unable to recover it. 00:29:05.989 [2024-10-14 14:42:46.449279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.989 [2024-10-14 14:42:46.449289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.989 qpair failed and we were unable to recover it. 00:29:05.989 [2024-10-14 14:42:46.449513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.989 [2024-10-14 14:42:46.449523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.989 qpair failed and we were unable to recover it. 00:29:05.989 [2024-10-14 14:42:46.449831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.989 [2024-10-14 14:42:46.449841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.989 qpair failed and we were unable to recover it. 
00:29:05.989 [2024-10-14 14:42:46.450148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.989 [2024-10-14 14:42:46.450158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.989 qpair failed and we were unable to recover it. 00:29:05.989 [2024-10-14 14:42:46.450451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.989 [2024-10-14 14:42:46.450460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.989 qpair failed and we were unable to recover it. 00:29:05.989 [2024-10-14 14:42:46.450777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.989 [2024-10-14 14:42:46.450787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.989 qpair failed and we were unable to recover it. 00:29:05.989 [2024-10-14 14:42:46.451092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.989 [2024-10-14 14:42:46.451103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.989 qpair failed and we were unable to recover it. 00:29:05.989 [2024-10-14 14:42:46.451444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.989 [2024-10-14 14:42:46.451454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.989 qpair failed and we were unable to recover it. 
00:29:05.989 [2024-10-14 14:42:46.451789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.989 [2024-10-14 14:42:46.451799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.989 qpair failed and we were unable to recover it. 00:29:05.989 [2024-10-14 14:42:46.452080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.989 [2024-10-14 14:42:46.452090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.989 qpair failed and we were unable to recover it. 00:29:05.989 [2024-10-14 14:42:46.452294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.989 [2024-10-14 14:42:46.452304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.989 qpair failed and we were unable to recover it. 00:29:05.989 [2024-10-14 14:42:46.452583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.989 [2024-10-14 14:42:46.452593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.989 qpair failed and we were unable to recover it. 00:29:05.989 [2024-10-14 14:42:46.452869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.989 [2024-10-14 14:42:46.452879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.989 qpair failed and we were unable to recover it. 
00:29:05.989 [2024-10-14 14:42:46.453098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.989 [2024-10-14 14:42:46.453108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.989 qpair failed and we were unable to recover it.
00:29:05.992 [2024-10-14 14:42:46.487499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.992 [2024-10-14 14:42:46.487510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.992 qpair failed and we were unable to recover it. 00:29:05.992 [2024-10-14 14:42:46.487825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.992 [2024-10-14 14:42:46.487837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.992 qpair failed and we were unable to recover it. 00:29:05.992 [2024-10-14 14:42:46.488143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.992 [2024-10-14 14:42:46.488153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.992 qpair failed and we were unable to recover it. 00:29:05.992 [2024-10-14 14:42:46.488526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.992 [2024-10-14 14:42:46.488536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.992 qpair failed and we were unable to recover it. 00:29:05.992 [2024-10-14 14:42:46.488815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.992 [2024-10-14 14:42:46.488824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.992 qpair failed and we were unable to recover it. 
00:29:05.992 [2024-10-14 14:42:46.489133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.992 [2024-10-14 14:42:46.489144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.992 qpair failed and we were unable to recover it. 00:29:05.992 [2024-10-14 14:42:46.489428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.992 [2024-10-14 14:42:46.489438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.992 qpair failed and we were unable to recover it. 00:29:05.992 [2024-10-14 14:42:46.489664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.992 [2024-10-14 14:42:46.489675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.992 qpair failed and we were unable to recover it. 00:29:05.992 [2024-10-14 14:42:46.489977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.992 [2024-10-14 14:42:46.489988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.992 qpair failed and we were unable to recover it. 00:29:05.992 [2024-10-14 14:42:46.490289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.992 [2024-10-14 14:42:46.490299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.992 qpair failed and we were unable to recover it. 
00:29:05.992 [2024-10-14 14:42:46.490605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.992 [2024-10-14 14:42:46.490614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.992 qpair failed and we were unable to recover it. 00:29:05.992 [2024-10-14 14:42:46.490920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.992 [2024-10-14 14:42:46.490929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.992 qpair failed and we were unable to recover it. 00:29:05.992 [2024-10-14 14:42:46.491361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.992 [2024-10-14 14:42:46.491372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.992 qpair failed and we were unable to recover it. 00:29:05.992 [2024-10-14 14:42:46.491667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.992 [2024-10-14 14:42:46.491677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.992 qpair failed and we were unable to recover it. 00:29:05.992 [2024-10-14 14:42:46.491949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.992 [2024-10-14 14:42:46.491959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.992 qpair failed and we were unable to recover it. 
00:29:05.992 [2024-10-14 14:42:46.492278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.992 [2024-10-14 14:42:46.492288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.992 qpair failed and we were unable to recover it. 00:29:05.992 [2024-10-14 14:42:46.492513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.992 [2024-10-14 14:42:46.492523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.992 qpair failed and we were unable to recover it. 00:29:05.992 [2024-10-14 14:42:46.492838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.992 [2024-10-14 14:42:46.492848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.992 qpair failed and we were unable to recover it. 00:29:05.992 [2024-10-14 14:42:46.493165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.992 [2024-10-14 14:42:46.493175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.992 qpair failed and we were unable to recover it. 00:29:05.992 [2024-10-14 14:42:46.493501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.992 [2024-10-14 14:42:46.493511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.992 qpair failed and we were unable to recover it. 
00:29:05.992 [2024-10-14 14:42:46.493792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.992 [2024-10-14 14:42:46.493801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.992 qpair failed and we were unable to recover it. 00:29:05.992 [2024-10-14 14:42:46.494096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.992 [2024-10-14 14:42:46.494106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.992 qpair failed and we were unable to recover it. 00:29:05.992 [2024-10-14 14:42:46.494412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.992 [2024-10-14 14:42:46.494421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.992 qpair failed and we were unable to recover it. 00:29:05.992 [2024-10-14 14:42:46.494726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.992 [2024-10-14 14:42:46.494736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.992 qpair failed and we were unable to recover it. 00:29:05.992 [2024-10-14 14:42:46.495015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.992 [2024-10-14 14:42:46.495025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.992 qpair failed and we were unable to recover it. 
00:29:05.992 [2024-10-14 14:42:46.495251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.992 [2024-10-14 14:42:46.495261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.992 qpair failed and we were unable to recover it. 00:29:05.992 [2024-10-14 14:42:46.495528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.992 [2024-10-14 14:42:46.495538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.993 qpair failed and we were unable to recover it. 00:29:05.993 [2024-10-14 14:42:46.495700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.993 [2024-10-14 14:42:46.495710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.993 qpair failed and we were unable to recover it. 00:29:05.993 [2024-10-14 14:42:46.495906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.993 [2024-10-14 14:42:46.495919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.993 qpair failed and we were unable to recover it. 00:29:05.993 [2024-10-14 14:42:46.496178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.993 [2024-10-14 14:42:46.496188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.993 qpair failed and we were unable to recover it. 
00:29:05.993 [2024-10-14 14:42:46.496520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.993 [2024-10-14 14:42:46.496530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.993 qpair failed and we were unable to recover it. 00:29:05.993 [2024-10-14 14:42:46.496834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.993 [2024-10-14 14:42:46.496844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.993 qpair failed and we were unable to recover it. 00:29:05.993 [2024-10-14 14:42:46.497137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.993 [2024-10-14 14:42:46.497147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.993 qpair failed and we were unable to recover it. 00:29:05.993 [2024-10-14 14:42:46.497476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.993 [2024-10-14 14:42:46.497486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.993 qpair failed and we were unable to recover it. 00:29:05.993 [2024-10-14 14:42:46.497765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.993 [2024-10-14 14:42:46.497775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.993 qpair failed and we were unable to recover it. 
00:29:05.993 [2024-10-14 14:42:46.498085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.993 [2024-10-14 14:42:46.498095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.993 qpair failed and we were unable to recover it. 00:29:05.993 [2024-10-14 14:42:46.498382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.993 [2024-10-14 14:42:46.498393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.993 qpair failed and we were unable to recover it. 00:29:05.993 [2024-10-14 14:42:46.498694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.993 [2024-10-14 14:42:46.498703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.993 qpair failed and we were unable to recover it. 00:29:05.993 [2024-10-14 14:42:46.499091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.993 [2024-10-14 14:42:46.499101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.993 qpair failed and we were unable to recover it. 00:29:05.993 [2024-10-14 14:42:46.499406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.993 [2024-10-14 14:42:46.499415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.993 qpair failed and we were unable to recover it. 
00:29:05.993 [2024-10-14 14:42:46.499794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.993 [2024-10-14 14:42:46.499804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.993 qpair failed and we were unable to recover it. 00:29:05.993 [2024-10-14 14:42:46.500103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.993 [2024-10-14 14:42:46.500120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.993 qpair failed and we were unable to recover it. 00:29:05.993 [2024-10-14 14:42:46.500286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.993 [2024-10-14 14:42:46.500296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.993 qpair failed and we were unable to recover it. 00:29:05.993 [2024-10-14 14:42:46.500597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.993 [2024-10-14 14:42:46.500606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.993 qpair failed and we were unable to recover it. 00:29:05.993 [2024-10-14 14:42:46.500910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.993 [2024-10-14 14:42:46.500920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.993 qpair failed and we were unable to recover it. 
00:29:05.993 [2024-10-14 14:42:46.501224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.993 [2024-10-14 14:42:46.501234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.993 qpair failed and we were unable to recover it. 00:29:05.993 [2024-10-14 14:42:46.501516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.993 [2024-10-14 14:42:46.501526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.993 qpair failed and we were unable to recover it. 00:29:05.993 [2024-10-14 14:42:46.501835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.993 [2024-10-14 14:42:46.501844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.993 qpair failed and we were unable to recover it. 00:29:05.993 [2024-10-14 14:42:46.502137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.993 [2024-10-14 14:42:46.502147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.993 qpair failed and we were unable to recover it. 00:29:05.993 [2024-10-14 14:42:46.502364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.993 [2024-10-14 14:42:46.502374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.993 qpair failed and we were unable to recover it. 
00:29:05.993 [2024-10-14 14:42:46.502679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.993 [2024-10-14 14:42:46.502688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.993 qpair failed and we were unable to recover it. 00:29:05.993 [2024-10-14 14:42:46.503020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.993 [2024-10-14 14:42:46.503029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.993 qpair failed and we were unable to recover it. 00:29:05.993 [2024-10-14 14:42:46.503325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.993 [2024-10-14 14:42:46.503334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.993 qpair failed and we were unable to recover it. 00:29:05.993 [2024-10-14 14:42:46.503641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.993 [2024-10-14 14:42:46.503650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.993 qpair failed and we were unable to recover it. 00:29:05.993 [2024-10-14 14:42:46.503934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.993 [2024-10-14 14:42:46.503943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.993 qpair failed and we were unable to recover it. 
00:29:05.993 [2024-10-14 14:42:46.504236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.993 [2024-10-14 14:42:46.504246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.993 qpair failed and we were unable to recover it. 00:29:05.993 [2024-10-14 14:42:46.504553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.993 [2024-10-14 14:42:46.504562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.993 qpair failed and we were unable to recover it. 00:29:05.993 [2024-10-14 14:42:46.504889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.993 [2024-10-14 14:42:46.504898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.993 qpair failed and we were unable to recover it. 00:29:05.993 [2024-10-14 14:42:46.505204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.993 [2024-10-14 14:42:46.505215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.993 qpair failed and we were unable to recover it. 00:29:05.993 [2024-10-14 14:42:46.505500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.993 [2024-10-14 14:42:46.505510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.993 qpair failed and we were unable to recover it. 
00:29:05.993 [2024-10-14 14:42:46.505739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.993 [2024-10-14 14:42:46.505749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.993 qpair failed and we were unable to recover it. 00:29:05.993 [2024-10-14 14:42:46.506057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.993 [2024-10-14 14:42:46.506070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.993 qpair failed and we were unable to recover it. 00:29:05.993 [2024-10-14 14:42:46.506364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.993 [2024-10-14 14:42:46.506374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.993 qpair failed and we were unable to recover it. 00:29:05.994 [2024-10-14 14:42:46.506651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.994 [2024-10-14 14:42:46.506662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.994 qpair failed and we were unable to recover it. 00:29:05.994 [2024-10-14 14:42:46.506905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.994 [2024-10-14 14:42:46.506915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.994 qpair failed and we were unable to recover it. 
00:29:05.994 [2024-10-14 14:42:46.507228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.994 [2024-10-14 14:42:46.507238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.994 qpair failed and we were unable to recover it. 00:29:05.994 [2024-10-14 14:42:46.507524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.994 [2024-10-14 14:42:46.507535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.994 qpair failed and we were unable to recover it. 00:29:05.994 [2024-10-14 14:42:46.507866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.994 [2024-10-14 14:42:46.507877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.994 qpair failed and we were unable to recover it. 00:29:05.994 [2024-10-14 14:42:46.508205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.994 [2024-10-14 14:42:46.508215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.994 qpair failed and we were unable to recover it. 00:29:05.994 [2024-10-14 14:42:46.508488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.994 [2024-10-14 14:42:46.508500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.994 qpair failed and we were unable to recover it. 
00:29:05.994 [2024-10-14 14:42:46.508683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.994 [2024-10-14 14:42:46.508693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.994 qpair failed and we were unable to recover it. 00:29:05.994 [2024-10-14 14:42:46.509001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.994 [2024-10-14 14:42:46.509012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.994 qpair failed and we were unable to recover it. 00:29:05.994 [2024-10-14 14:42:46.509366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.994 [2024-10-14 14:42:46.509377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.994 qpair failed and we were unable to recover it. 00:29:05.994 [2024-10-14 14:42:46.509688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.994 [2024-10-14 14:42:46.509699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.994 qpair failed and we were unable to recover it. 00:29:05.994 [2024-10-14 14:42:46.510028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.994 [2024-10-14 14:42:46.510039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.994 qpair failed and we were unable to recover it. 
00:29:05.994 [2024-10-14 14:42:46.510226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.994 [2024-10-14 14:42:46.510238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.994 qpair failed and we were unable to recover it.
[... the same pair of errors (posix_sock_create connect() failed, errno = 111; nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420) followed by "qpair failed and we were unable to recover it." repeats continuously from 14:42:46.510426 through 14:42:46.544862 ...]
00:29:05.997 [2024-10-14 14:42:46.545144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.997 [2024-10-14 14:42:46.545154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.997 qpair failed and we were unable to recover it. 00:29:05.997 [2024-10-14 14:42:46.545460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.997 [2024-10-14 14:42:46.545471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.997 qpair failed and we were unable to recover it. 00:29:05.997 [2024-10-14 14:42:46.545779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.997 [2024-10-14 14:42:46.545790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.997 qpair failed and we were unable to recover it. 00:29:05.997 [2024-10-14 14:42:46.545922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.997 [2024-10-14 14:42:46.545931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.997 qpair failed and we were unable to recover it. 00:29:05.997 [2024-10-14 14:42:46.546022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.997 [2024-10-14 14:42:46.546032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.997 qpair failed and we were unable to recover it. 
00:29:05.997 [2024-10-14 14:42:46.546339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.997 [2024-10-14 14:42:46.546350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.997 qpair failed and we were unable to recover it. 00:29:05.997 [2024-10-14 14:42:46.546684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.997 [2024-10-14 14:42:46.546694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.997 qpair failed and we were unable to recover it. 00:29:05.997 [2024-10-14 14:42:46.546980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.997 [2024-10-14 14:42:46.546990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.997 qpair failed and we were unable to recover it. 00:29:05.997 [2024-10-14 14:42:46.547265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.997 [2024-10-14 14:42:46.547275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.997 qpair failed and we were unable to recover it. 00:29:05.997 [2024-10-14 14:42:46.547576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.997 [2024-10-14 14:42:46.547586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.997 qpair failed and we were unable to recover it. 
00:29:05.997 [2024-10-14 14:42:46.547914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.997 [2024-10-14 14:42:46.547924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.997 qpair failed and we were unable to recover it. 00:29:05.997 [2024-10-14 14:42:46.548227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.997 [2024-10-14 14:42:46.548238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.997 qpair failed and we were unable to recover it. 00:29:05.997 [2024-10-14 14:42:46.548558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.997 [2024-10-14 14:42:46.548568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.997 qpair failed and we were unable to recover it. 00:29:05.997 [2024-10-14 14:42:46.548864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.997 [2024-10-14 14:42:46.548882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.997 qpair failed and we were unable to recover it. 00:29:05.997 [2024-10-14 14:42:46.549204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.997 [2024-10-14 14:42:46.549214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.997 qpair failed and we were unable to recover it. 
00:29:05.997 [2024-10-14 14:42:46.549517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.997 [2024-10-14 14:42:46.549530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.997 qpair failed and we were unable to recover it. 00:29:05.997 [2024-10-14 14:42:46.549864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.997 [2024-10-14 14:42:46.549874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.997 qpair failed and we were unable to recover it. 00:29:05.997 [2024-10-14 14:42:46.550184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.997 [2024-10-14 14:42:46.550195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.997 qpair failed and we were unable to recover it. 00:29:05.997 [2024-10-14 14:42:46.550510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.997 [2024-10-14 14:42:46.550520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.997 qpair failed and we were unable to recover it. 00:29:05.997 [2024-10-14 14:42:46.550807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.997 [2024-10-14 14:42:46.550816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.997 qpair failed and we were unable to recover it. 
00:29:05.997 [2024-10-14 14:42:46.550986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.997 [2024-10-14 14:42:46.550997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.997 qpair failed and we were unable to recover it. 00:29:05.997 [2024-10-14 14:42:46.551211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.997 [2024-10-14 14:42:46.551221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.997 qpair failed and we were unable to recover it. 00:29:05.997 [2024-10-14 14:42:46.551503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.997 [2024-10-14 14:42:46.551513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.997 qpair failed and we were unable to recover it. 00:29:05.997 [2024-10-14 14:42:46.551821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.997 [2024-10-14 14:42:46.551831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.997 qpair failed and we were unable to recover it. 00:29:05.997 [2024-10-14 14:42:46.552132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.997 [2024-10-14 14:42:46.552143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.997 qpair failed and we were unable to recover it. 
00:29:05.997 [2024-10-14 14:42:46.552447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.997 [2024-10-14 14:42:46.552458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.997 qpair failed and we were unable to recover it. 00:29:05.997 [2024-10-14 14:42:46.552742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.997 [2024-10-14 14:42:46.552751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.997 qpair failed and we were unable to recover it. 00:29:05.997 [2024-10-14 14:42:46.552974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.997 [2024-10-14 14:42:46.552985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.997 qpair failed and we were unable to recover it. 00:29:05.997 [2024-10-14 14:42:46.553290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.997 [2024-10-14 14:42:46.553300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.997 qpair failed and we were unable to recover it. 00:29:05.997 [2024-10-14 14:42:46.553605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.997 [2024-10-14 14:42:46.553615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.997 qpair failed and we were unable to recover it. 
00:29:05.997 [2024-10-14 14:42:46.553937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.997 [2024-10-14 14:42:46.553948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.997 qpair failed and we were unable to recover it. 00:29:05.997 [2024-10-14 14:42:46.554259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.997 [2024-10-14 14:42:46.554270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.997 qpair failed and we were unable to recover it. 00:29:05.997 [2024-10-14 14:42:46.554585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.997 [2024-10-14 14:42:46.554596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.997 qpair failed and we were unable to recover it. 00:29:05.997 [2024-10-14 14:42:46.554898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.997 [2024-10-14 14:42:46.554909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.997 qpair failed and we were unable to recover it. 00:29:05.997 [2024-10-14 14:42:46.555187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.997 [2024-10-14 14:42:46.555198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.998 qpair failed and we were unable to recover it. 
00:29:05.998 [2024-10-14 14:42:46.555511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.998 [2024-10-14 14:42:46.555522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.998 qpair failed and we were unable to recover it. 00:29:05.998 [2024-10-14 14:42:46.555832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.998 [2024-10-14 14:42:46.555842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.998 qpair failed and we were unable to recover it. 00:29:05.998 [2024-10-14 14:42:46.556127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.998 [2024-10-14 14:42:46.556137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.998 qpair failed and we were unable to recover it. 00:29:05.998 [2024-10-14 14:42:46.556455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.998 [2024-10-14 14:42:46.556465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.998 qpair failed and we were unable to recover it. 00:29:05.998 [2024-10-14 14:42:46.556761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.998 [2024-10-14 14:42:46.556770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.998 qpair failed and we were unable to recover it. 
00:29:05.998 [2024-10-14 14:42:46.556949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.998 [2024-10-14 14:42:46.556960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.998 qpair failed and we were unable to recover it. 00:29:05.998 [2024-10-14 14:42:46.557298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.998 [2024-10-14 14:42:46.557308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.998 qpair failed and we were unable to recover it. 00:29:05.998 [2024-10-14 14:42:46.557590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.998 [2024-10-14 14:42:46.557600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.998 qpair failed and we were unable to recover it. 00:29:05.998 [2024-10-14 14:42:46.557987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.998 [2024-10-14 14:42:46.557998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.998 qpair failed and we were unable to recover it. 00:29:05.998 [2024-10-14 14:42:46.558291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.998 [2024-10-14 14:42:46.558301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.998 qpair failed and we were unable to recover it. 
00:29:05.998 [2024-10-14 14:42:46.558650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.998 [2024-10-14 14:42:46.558660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.998 qpair failed and we were unable to recover it. 00:29:05.998 [2024-10-14 14:42:46.558987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.998 [2024-10-14 14:42:46.558997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.998 qpair failed and we were unable to recover it. 00:29:05.998 [2024-10-14 14:42:46.559192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.998 [2024-10-14 14:42:46.559203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.998 qpair failed and we were unable to recover it. 00:29:05.998 [2024-10-14 14:42:46.559521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.998 [2024-10-14 14:42:46.559531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.998 qpair failed and we were unable to recover it. 00:29:05.998 [2024-10-14 14:42:46.559825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.998 [2024-10-14 14:42:46.559835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.998 qpair failed and we were unable to recover it. 
00:29:05.998 [2024-10-14 14:42:46.560140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.998 [2024-10-14 14:42:46.560151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.998 qpair failed and we were unable to recover it. 00:29:05.998 [2024-10-14 14:42:46.560468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.998 [2024-10-14 14:42:46.560478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.998 qpair failed and we were unable to recover it. 00:29:05.998 [2024-10-14 14:42:46.560758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.998 [2024-10-14 14:42:46.560768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.998 qpair failed and we were unable to recover it. 00:29:05.998 [2024-10-14 14:42:46.561074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.998 [2024-10-14 14:42:46.561085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.998 qpair failed and we were unable to recover it. 00:29:05.998 [2024-10-14 14:42:46.561394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.998 [2024-10-14 14:42:46.561404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.998 qpair failed and we were unable to recover it. 
00:29:05.998 [2024-10-14 14:42:46.561693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.998 [2024-10-14 14:42:46.561702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.998 qpair failed and we were unable to recover it. 00:29:05.998 [2024-10-14 14:42:46.562007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.998 [2024-10-14 14:42:46.562019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.998 qpair failed and we were unable to recover it. 00:29:05.998 [2024-10-14 14:42:46.562303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.998 [2024-10-14 14:42:46.562314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.998 qpair failed and we were unable to recover it. 00:29:05.998 [2024-10-14 14:42:46.562596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.998 [2024-10-14 14:42:46.562606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.998 qpair failed and we were unable to recover it. 00:29:05.998 [2024-10-14 14:42:46.562876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.998 [2024-10-14 14:42:46.562886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.998 qpair failed and we were unable to recover it. 
00:29:05.998 [2024-10-14 14:42:46.563204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.998 [2024-10-14 14:42:46.563214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.998 qpair failed and we were unable to recover it. 00:29:05.998 [2024-10-14 14:42:46.563545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.998 [2024-10-14 14:42:46.563556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.998 qpair failed and we were unable to recover it. 00:29:05.998 [2024-10-14 14:42:46.563885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.998 [2024-10-14 14:42:46.563896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.998 qpair failed and we were unable to recover it. 00:29:05.998 [2024-10-14 14:42:46.564226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.998 [2024-10-14 14:42:46.564236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.998 qpair failed and we were unable to recover it. 00:29:05.998 [2024-10-14 14:42:46.564456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.998 [2024-10-14 14:42:46.564466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.998 qpair failed and we were unable to recover it. 
00:29:05.998 [2024-10-14 14:42:46.564776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.998 [2024-10-14 14:42:46.564786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.998 qpair failed and we were unable to recover it. 00:29:05.998 [2024-10-14 14:42:46.565087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.998 [2024-10-14 14:42:46.565097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.998 qpair failed and we were unable to recover it. 00:29:05.998 [2024-10-14 14:42:46.565382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.998 [2024-10-14 14:42:46.565392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.998 qpair failed and we were unable to recover it. 00:29:05.998 [2024-10-14 14:42:46.565702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.998 [2024-10-14 14:42:46.565712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.998 qpair failed and we were unable to recover it. 00:29:05.998 [2024-10-14 14:42:46.566003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.998 [2024-10-14 14:42:46.566013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.998 qpair failed and we were unable to recover it. 
00:29:05.998 [2024-10-14 14:42:46.566302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.998 [2024-10-14 14:42:46.566312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.998 qpair failed and we were unable to recover it. 00:29:05.998 [2024-10-14 14:42:46.566499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.998 [2024-10-14 14:42:46.566509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.998 qpair failed and we were unable to recover it. 00:29:05.998 [2024-10-14 14:42:46.566816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.998 [2024-10-14 14:42:46.566827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.998 qpair failed and we were unable to recover it. 00:29:05.999 [2024-10-14 14:42:46.567102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.999 [2024-10-14 14:42:46.567112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.999 qpair failed and we were unable to recover it. 00:29:05.999 [2024-10-14 14:42:46.567395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.999 [2024-10-14 14:42:46.567405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.999 qpair failed and we were unable to recover it. 
00:29:05.999 [2024-10-14 14:42:46.567592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.999 [2024-10-14 14:42:46.567603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.999 qpair failed and we were unable to recover it. 00:29:05.999 [2024-10-14 14:42:46.567871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.999 [2024-10-14 14:42:46.567881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.999 qpair failed and we were unable to recover it. 00:29:05.999 [2024-10-14 14:42:46.568195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.999 [2024-10-14 14:42:46.568206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.999 qpair failed and we were unable to recover it. 00:29:05.999 [2024-10-14 14:42:46.568535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.999 [2024-10-14 14:42:46.568545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.999 qpair failed and we were unable to recover it. 00:29:05.999 [2024-10-14 14:42:46.568839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.999 [2024-10-14 14:42:46.568850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:05.999 qpair failed and we were unable to recover it. 
00:29:06.001 [previous three-line error sequence — connect() failed, errno = 111; sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it — repeated 110 more times, timestamps 14:42:46.569175 through 14:42:46.602184]
00:29:06.001 [2024-10-14 14:42:46.602524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.001 [2024-10-14 14:42:46.602533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.001 qpair failed and we were unable to recover it. 00:29:06.001 [2024-10-14 14:42:46.602839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.001 [2024-10-14 14:42:46.602849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.001 qpair failed and we were unable to recover it. 00:29:06.001 [2024-10-14 14:42:46.603178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.001 [2024-10-14 14:42:46.603191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.001 qpair failed and we were unable to recover it. 00:29:06.001 [2024-10-14 14:42:46.603480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.001 [2024-10-14 14:42:46.603491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.001 qpair failed and we were unable to recover it. 00:29:06.001 [2024-10-14 14:42:46.603794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.001 [2024-10-14 14:42:46.603803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.001 qpair failed and we were unable to recover it. 
00:29:06.001 [2024-10-14 14:42:46.603994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.001 [2024-10-14 14:42:46.604004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.002 qpair failed and we were unable to recover it. 00:29:06.002 [2024-10-14 14:42:46.604238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.002 [2024-10-14 14:42:46.604248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.002 qpair failed and we were unable to recover it. 00:29:06.002 [2024-10-14 14:42:46.604557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.002 [2024-10-14 14:42:46.604566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.002 qpair failed and we were unable to recover it. 00:29:06.002 [2024-10-14 14:42:46.604873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.002 [2024-10-14 14:42:46.604883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.002 qpair failed and we were unable to recover it. 00:29:06.002 [2024-10-14 14:42:46.605212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.002 [2024-10-14 14:42:46.605222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.002 qpair failed and we were unable to recover it. 
00:29:06.002 [2024-10-14 14:42:46.605487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.002 [2024-10-14 14:42:46.605496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.002 qpair failed and we were unable to recover it. 00:29:06.002 [2024-10-14 14:42:46.605754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.002 [2024-10-14 14:42:46.605764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.002 qpair failed and we were unable to recover it. 00:29:06.002 [2024-10-14 14:42:46.606091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.002 [2024-10-14 14:42:46.606102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.002 qpair failed and we were unable to recover it. 00:29:06.002 [2024-10-14 14:42:46.606400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.002 [2024-10-14 14:42:46.606410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.002 qpair failed and we were unable to recover it. 00:29:06.002 [2024-10-14 14:42:46.606708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.002 [2024-10-14 14:42:46.606718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.002 qpair failed and we were unable to recover it. 
00:29:06.002 [2024-10-14 14:42:46.607029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.002 [2024-10-14 14:42:46.607038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.002 qpair failed and we were unable to recover it. 00:29:06.002 [2024-10-14 14:42:46.607420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.002 [2024-10-14 14:42:46.607431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.002 qpair failed and we were unable to recover it. 00:29:06.002 [2024-10-14 14:42:46.607735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.002 [2024-10-14 14:42:46.607744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.002 qpair failed and we were unable to recover it. 00:29:06.002 [2024-10-14 14:42:46.608035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.002 [2024-10-14 14:42:46.608045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.002 qpair failed and we were unable to recover it. 00:29:06.002 [2024-10-14 14:42:46.608428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.002 [2024-10-14 14:42:46.608439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.002 qpair failed and we were unable to recover it. 
00:29:06.002 [2024-10-14 14:42:46.608744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.002 [2024-10-14 14:42:46.608754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.002 qpair failed and we were unable to recover it. 00:29:06.002 [2024-10-14 14:42:46.609080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.002 [2024-10-14 14:42:46.609090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.002 qpair failed and we were unable to recover it. 00:29:06.002 [2024-10-14 14:42:46.609393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.002 [2024-10-14 14:42:46.609402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.002 qpair failed and we were unable to recover it. 00:29:06.002 [2024-10-14 14:42:46.609708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.002 [2024-10-14 14:42:46.609718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.002 qpair failed and we were unable to recover it. 00:29:06.002 [2024-10-14 14:42:46.610003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.002 [2024-10-14 14:42:46.610012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.002 qpair failed and we were unable to recover it. 
00:29:06.002 [2024-10-14 14:42:46.610263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.002 [2024-10-14 14:42:46.610273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.002 qpair failed and we were unable to recover it. 00:29:06.002 [2024-10-14 14:42:46.610676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.002 [2024-10-14 14:42:46.610686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.002 qpair failed and we were unable to recover it. 00:29:06.002 [2024-10-14 14:42:46.610967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.002 [2024-10-14 14:42:46.610977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.002 qpair failed and we were unable to recover it. 00:29:06.002 [2024-10-14 14:42:46.611255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.002 [2024-10-14 14:42:46.611265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.002 qpair failed and we were unable to recover it. 00:29:06.002 [2024-10-14 14:42:46.611572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.002 [2024-10-14 14:42:46.611581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.002 qpair failed and we were unable to recover it. 
00:29:06.002 [2024-10-14 14:42:46.611861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.002 [2024-10-14 14:42:46.611871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.002 qpair failed and we were unable to recover it. 00:29:06.002 [2024-10-14 14:42:46.612180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.002 [2024-10-14 14:42:46.612190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.002 qpair failed and we were unable to recover it. 00:29:06.002 [2024-10-14 14:42:46.612495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.002 [2024-10-14 14:42:46.612504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.002 qpair failed and we were unable to recover it. 00:29:06.002 [2024-10-14 14:42:46.612785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.002 [2024-10-14 14:42:46.612794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.002 qpair failed and we were unable to recover it. 00:29:06.002 [2024-10-14 14:42:46.613103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.002 [2024-10-14 14:42:46.613113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.002 qpair failed and we were unable to recover it. 
00:29:06.002 [2024-10-14 14:42:46.613438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.002 [2024-10-14 14:42:46.613447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.002 qpair failed and we were unable to recover it. 00:29:06.002 [2024-10-14 14:42:46.613737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.002 [2024-10-14 14:42:46.613746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.002 qpair failed and we were unable to recover it. 00:29:06.002 [2024-10-14 14:42:46.614075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.002 [2024-10-14 14:42:46.614086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.002 qpair failed and we were unable to recover it. 00:29:06.002 [2024-10-14 14:42:46.614449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.002 [2024-10-14 14:42:46.614459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.002 qpair failed and we were unable to recover it. 00:29:06.002 [2024-10-14 14:42:46.614749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.002 [2024-10-14 14:42:46.614759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.002 qpair failed and we were unable to recover it. 
00:29:06.002 [2024-10-14 14:42:46.615015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.002 [2024-10-14 14:42:46.615025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.002 qpair failed and we were unable to recover it. 00:29:06.002 [2024-10-14 14:42:46.615305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.002 [2024-10-14 14:42:46.615317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.003 qpair failed and we were unable to recover it. 00:29:06.003 [2024-10-14 14:42:46.615652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.003 [2024-10-14 14:42:46.615662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.003 qpair failed and we were unable to recover it. 00:29:06.003 [2024-10-14 14:42:46.615968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.003 [2024-10-14 14:42:46.615979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.003 qpair failed and we were unable to recover it. 00:29:06.003 [2024-10-14 14:42:46.616237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.003 [2024-10-14 14:42:46.616247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.003 qpair failed and we were unable to recover it. 
00:29:06.003 [2024-10-14 14:42:46.616530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.003 [2024-10-14 14:42:46.616540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.003 qpair failed and we were unable to recover it. 00:29:06.003 [2024-10-14 14:42:46.616737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.003 [2024-10-14 14:42:46.616746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.003 qpair failed and we were unable to recover it. 00:29:06.003 [2024-10-14 14:42:46.616938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.003 [2024-10-14 14:42:46.616948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.003 qpair failed and we were unable to recover it. 00:29:06.003 [2024-10-14 14:42:46.617230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.003 [2024-10-14 14:42:46.617240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.003 qpair failed and we were unable to recover it. 00:29:06.003 [2024-10-14 14:42:46.617417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.003 [2024-10-14 14:42:46.617427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.003 qpair failed and we were unable to recover it. 
00:29:06.003 [2024-10-14 14:42:46.617723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.003 [2024-10-14 14:42:46.617733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.003 qpair failed and we were unable to recover it. 00:29:06.003 [2024-10-14 14:42:46.618044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.003 [2024-10-14 14:42:46.618054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.003 qpair failed and we were unable to recover it. 00:29:06.003 [2024-10-14 14:42:46.618398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.003 [2024-10-14 14:42:46.618408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.003 qpair failed and we were unable to recover it. 00:29:06.003 [2024-10-14 14:42:46.618602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.003 [2024-10-14 14:42:46.618611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.003 qpair failed and we were unable to recover it. 00:29:06.003 [2024-10-14 14:42:46.618896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.003 [2024-10-14 14:42:46.618906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.003 qpair failed and we were unable to recover it. 
00:29:06.003 [2024-10-14 14:42:46.619212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.003 [2024-10-14 14:42:46.619222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.003 qpair failed and we were unable to recover it. 00:29:06.003 [2024-10-14 14:42:46.619526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.003 [2024-10-14 14:42:46.619536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.003 qpair failed and we were unable to recover it. 00:29:06.003 [2024-10-14 14:42:46.619869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.003 [2024-10-14 14:42:46.619879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.003 qpair failed and we were unable to recover it. 00:29:06.003 [2024-10-14 14:42:46.620040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.003 [2024-10-14 14:42:46.620051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.003 qpair failed and we were unable to recover it. 00:29:06.003 [2024-10-14 14:42:46.620432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.003 [2024-10-14 14:42:46.620442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.003 qpair failed and we were unable to recover it. 
00:29:06.003 [2024-10-14 14:42:46.620725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.003 [2024-10-14 14:42:46.620735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.003 qpair failed and we were unable to recover it. 00:29:06.003 [2024-10-14 14:42:46.621039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.003 [2024-10-14 14:42:46.621048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.003 qpair failed and we were unable to recover it. 00:29:06.003 [2024-10-14 14:42:46.621332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.003 [2024-10-14 14:42:46.621342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.003 qpair failed and we were unable to recover it. 00:29:06.003 [2024-10-14 14:42:46.621624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.003 [2024-10-14 14:42:46.621634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.003 qpair failed and we were unable to recover it. 00:29:06.003 [2024-10-14 14:42:46.621936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.003 [2024-10-14 14:42:46.621945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.003 qpair failed and we were unable to recover it. 
00:29:06.003 [2024-10-14 14:42:46.622236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.003 [2024-10-14 14:42:46.622246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.003 qpair failed and we were unable to recover it. 00:29:06.003 [2024-10-14 14:42:46.622536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.003 [2024-10-14 14:42:46.622546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.003 qpair failed and we were unable to recover it. 00:29:06.003 [2024-10-14 14:42:46.622858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.003 [2024-10-14 14:42:46.622868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.003 qpair failed and we were unable to recover it. 00:29:06.003 [2024-10-14 14:42:46.623178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.003 [2024-10-14 14:42:46.623189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.003 qpair failed and we were unable to recover it. 00:29:06.003 [2024-10-14 14:42:46.623491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.003 [2024-10-14 14:42:46.623501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.003 qpair failed and we were unable to recover it. 
00:29:06.003 [2024-10-14 14:42:46.623806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.003 [2024-10-14 14:42:46.623818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.003 qpair failed and we were unable to recover it. 00:29:06.003 [2024-10-14 14:42:46.624121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.003 [2024-10-14 14:42:46.624131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.003 qpair failed and we were unable to recover it. 00:29:06.003 [2024-10-14 14:42:46.624399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.003 [2024-10-14 14:42:46.624409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.003 qpair failed and we were unable to recover it. 00:29:06.003 [2024-10-14 14:42:46.624729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.003 [2024-10-14 14:42:46.624739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.003 qpair failed and we were unable to recover it. 00:29:06.003 [2024-10-14 14:42:46.625012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.003 [2024-10-14 14:42:46.625021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.003 qpair failed and we were unable to recover it. 
00:29:06.003 [2024-10-14 14:42:46.625375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.003 [2024-10-14 14:42:46.625385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.003 qpair failed and we were unable to recover it.
[... same three-line error record (posix.c:1055:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeated for every subsequent reconnect attempt from 14:42:46.625 through 14:42:46.659 ...]
00:29:06.006 [2024-10-14 14:42:46.660290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.006 [2024-10-14 14:42:46.660300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.006 qpair failed and we were unable to recover it. 00:29:06.006 [2024-10-14 14:42:46.660637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.006 [2024-10-14 14:42:46.660647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.006 qpair failed and we were unable to recover it. 00:29:06.006 [2024-10-14 14:42:46.660958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.006 [2024-10-14 14:42:46.660967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.006 qpair failed and we were unable to recover it. 00:29:06.006 [2024-10-14 14:42:46.661293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.006 [2024-10-14 14:42:46.661304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.006 qpair failed and we were unable to recover it. 00:29:06.006 [2024-10-14 14:42:46.661501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.006 [2024-10-14 14:42:46.661510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.006 qpair failed and we were unable to recover it. 
00:29:06.006 [2024-10-14 14:42:46.661827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.006 [2024-10-14 14:42:46.661837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.006 qpair failed and we were unable to recover it. 00:29:06.006 [2024-10-14 14:42:46.662146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.006 [2024-10-14 14:42:46.662156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.006 qpair failed and we were unable to recover it. 00:29:06.006 [2024-10-14 14:42:46.662460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.006 [2024-10-14 14:42:46.662470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.006 qpair failed and we were unable to recover it. 00:29:06.006 [2024-10-14 14:42:46.662685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.006 [2024-10-14 14:42:46.662695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.006 qpair failed and we were unable to recover it. 00:29:06.006 [2024-10-14 14:42:46.662977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.006 [2024-10-14 14:42:46.662987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.006 qpair failed and we were unable to recover it. 
00:29:06.006 [2024-10-14 14:42:46.663323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.006 [2024-10-14 14:42:46.663333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.007 qpair failed and we were unable to recover it. 00:29:06.007 [2024-10-14 14:42:46.663642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.007 [2024-10-14 14:42:46.663651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.007 qpair failed and we were unable to recover it. 00:29:06.007 [2024-10-14 14:42:46.663967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.007 [2024-10-14 14:42:46.663977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.007 qpair failed and we were unable to recover it. 00:29:06.007 [2024-10-14 14:42:46.664261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.007 [2024-10-14 14:42:46.664271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.007 qpair failed and we were unable to recover it. 00:29:06.007 [2024-10-14 14:42:46.664590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.007 [2024-10-14 14:42:46.664600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.007 qpair failed and we were unable to recover it. 
00:29:06.007 [2024-10-14 14:42:46.664962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.007 [2024-10-14 14:42:46.664972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.007 qpair failed and we were unable to recover it. 00:29:06.007 [2024-10-14 14:42:46.665141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.007 [2024-10-14 14:42:46.665155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.007 qpair failed and we were unable to recover it. 00:29:06.007 [2024-10-14 14:42:46.665328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.007 [2024-10-14 14:42:46.665338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.007 qpair failed and we were unable to recover it. 00:29:06.007 [2024-10-14 14:42:46.665626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.007 [2024-10-14 14:42:46.665635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.007 qpair failed and we were unable to recover it. 00:29:06.007 [2024-10-14 14:42:46.665969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.007 [2024-10-14 14:42:46.665979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.007 qpair failed and we were unable to recover it. 
00:29:06.007 [2024-10-14 14:42:46.666209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.007 [2024-10-14 14:42:46.666219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.007 qpair failed and we were unable to recover it. 00:29:06.007 [2024-10-14 14:42:46.666516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.007 [2024-10-14 14:42:46.666526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.007 qpair failed and we were unable to recover it. 00:29:06.007 [2024-10-14 14:42:46.666728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.007 [2024-10-14 14:42:46.666738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.007 qpair failed and we were unable to recover it. 00:29:06.007 [2024-10-14 14:42:46.667043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.007 [2024-10-14 14:42:46.667054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.007 qpair failed and we were unable to recover it. 00:29:06.007 [2024-10-14 14:42:46.667367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.007 [2024-10-14 14:42:46.667378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.007 qpair failed and we were unable to recover it. 
00:29:06.007 [2024-10-14 14:42:46.667682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.007 [2024-10-14 14:42:46.667692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.007 qpair failed and we were unable to recover it. 00:29:06.007 [2024-10-14 14:42:46.668048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.007 [2024-10-14 14:42:46.668059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.007 qpair failed and we were unable to recover it. 00:29:06.007 [2024-10-14 14:42:46.668359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.007 [2024-10-14 14:42:46.668370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.007 qpair failed and we were unable to recover it. 00:29:06.007 [2024-10-14 14:42:46.668674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.007 [2024-10-14 14:42:46.668685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.007 qpair failed and we were unable to recover it. 00:29:06.007 [2024-10-14 14:42:46.668884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.007 [2024-10-14 14:42:46.668894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.007 qpair failed and we were unable to recover it. 
00:29:06.007 [2024-10-14 14:42:46.669212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.007 [2024-10-14 14:42:46.669223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.007 qpair failed and we were unable to recover it. 00:29:06.007 [2024-10-14 14:42:46.669502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.007 [2024-10-14 14:42:46.669512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.007 qpair failed and we were unable to recover it. 00:29:06.007 [2024-10-14 14:42:46.669790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.007 [2024-10-14 14:42:46.669799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.007 qpair failed and we were unable to recover it. 00:29:06.007 [2024-10-14 14:42:46.669965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.007 [2024-10-14 14:42:46.669975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.007 qpair failed and we were unable to recover it. 00:29:06.007 [2024-10-14 14:42:46.670247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.007 [2024-10-14 14:42:46.670257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.007 qpair failed and we were unable to recover it. 
00:29:06.007 [2024-10-14 14:42:46.670566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.007 [2024-10-14 14:42:46.670576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.007 qpair failed and we were unable to recover it. 00:29:06.007 [2024-10-14 14:42:46.670891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.007 [2024-10-14 14:42:46.670901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.007 qpair failed and we were unable to recover it. 00:29:06.007 [2024-10-14 14:42:46.671183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.007 [2024-10-14 14:42:46.671193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.007 qpair failed and we were unable to recover it. 00:29:06.007 [2024-10-14 14:42:46.671529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.007 [2024-10-14 14:42:46.671539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.007 qpair failed and we were unable to recover it. 00:29:06.007 [2024-10-14 14:42:46.671834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.007 [2024-10-14 14:42:46.671844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.007 qpair failed and we were unable to recover it. 
00:29:06.007 [2024-10-14 14:42:46.672028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.007 [2024-10-14 14:42:46.672039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.007 qpair failed and we were unable to recover it. 00:29:06.007 [2024-10-14 14:42:46.672326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.007 [2024-10-14 14:42:46.672337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.007 qpair failed and we were unable to recover it. 00:29:06.007 [2024-10-14 14:42:46.672643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.007 [2024-10-14 14:42:46.672652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.007 qpair failed and we were unable to recover it. 00:29:06.007 [2024-10-14 14:42:46.672960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.007 [2024-10-14 14:42:46.672970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.007 qpair failed and we were unable to recover it. 00:29:06.007 [2024-10-14 14:42:46.673280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.007 [2024-10-14 14:42:46.673290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.007 qpair failed and we were unable to recover it. 
00:29:06.007 [2024-10-14 14:42:46.673592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.007 [2024-10-14 14:42:46.673602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.007 qpair failed and we were unable to recover it. 00:29:06.007 [2024-10-14 14:42:46.673810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.007 [2024-10-14 14:42:46.673820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.007 qpair failed and we were unable to recover it. 00:29:06.007 [2024-10-14 14:42:46.674172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.007 [2024-10-14 14:42:46.674182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.007 qpair failed and we were unable to recover it. 00:29:06.007 [2024-10-14 14:42:46.674379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.007 [2024-10-14 14:42:46.674388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.007 qpair failed and we were unable to recover it. 00:29:06.007 [2024-10-14 14:42:46.674746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.007 [2024-10-14 14:42:46.674756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.007 qpair failed and we were unable to recover it. 
00:29:06.007 [2024-10-14 14:42:46.675069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.007 [2024-10-14 14:42:46.675079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.008 qpair failed and we were unable to recover it. 00:29:06.008 [2024-10-14 14:42:46.675380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.008 [2024-10-14 14:42:46.675390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.008 qpair failed and we were unable to recover it. 00:29:06.008 [2024-10-14 14:42:46.675695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.008 [2024-10-14 14:42:46.675704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.008 qpair failed and we were unable to recover it. 00:29:06.008 [2024-10-14 14:42:46.675978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.008 [2024-10-14 14:42:46.675987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.008 qpair failed and we were unable to recover it. 00:29:06.008 [2024-10-14 14:42:46.676296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.008 [2024-10-14 14:42:46.676306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.008 qpair failed and we were unable to recover it. 
00:29:06.008 [2024-10-14 14:42:46.676618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.008 [2024-10-14 14:42:46.676628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.008 qpair failed and we were unable to recover it. 00:29:06.008 [2024-10-14 14:42:46.676957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.008 [2024-10-14 14:42:46.676967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.008 qpair failed and we were unable to recover it. 00:29:06.008 [2024-10-14 14:42:46.677254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.008 [2024-10-14 14:42:46.677266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.008 qpair failed and we were unable to recover it. 00:29:06.008 [2024-10-14 14:42:46.677575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.008 [2024-10-14 14:42:46.677585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.008 qpair failed and we were unable to recover it. 00:29:06.008 [2024-10-14 14:42:46.677909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.008 [2024-10-14 14:42:46.677918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.008 qpair failed and we were unable to recover it. 
00:29:06.008 [2024-10-14 14:42:46.678225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.008 [2024-10-14 14:42:46.678235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.008 qpair failed and we were unable to recover it. 00:29:06.008 [2024-10-14 14:42:46.678536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.008 [2024-10-14 14:42:46.678546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.008 qpair failed and we were unable to recover it. 00:29:06.008 [2024-10-14 14:42:46.678824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.008 [2024-10-14 14:42:46.678834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.008 qpair failed and we were unable to recover it. 00:29:06.008 [2024-10-14 14:42:46.679131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.008 [2024-10-14 14:42:46.679141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.008 qpair failed and we were unable to recover it. 00:29:06.008 [2024-10-14 14:42:46.679454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.008 [2024-10-14 14:42:46.679464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.008 qpair failed and we were unable to recover it. 
00:29:06.008 [2024-10-14 14:42:46.679745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.008 [2024-10-14 14:42:46.679755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.008 qpair failed and we were unable to recover it. 00:29:06.008 [2024-10-14 14:42:46.680072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.008 [2024-10-14 14:42:46.680083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.008 qpair failed and we were unable to recover it. 00:29:06.008 [2024-10-14 14:42:46.680392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.008 [2024-10-14 14:42:46.680401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.008 qpair failed and we were unable to recover it. 00:29:06.008 [2024-10-14 14:42:46.680692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.008 [2024-10-14 14:42:46.680702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.008 qpair failed and we were unable to recover it. 00:29:06.008 [2024-10-14 14:42:46.681006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.008 [2024-10-14 14:42:46.681016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.008 qpair failed and we were unable to recover it. 
00:29:06.008 [2024-10-14 14:42:46.681327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.008 [2024-10-14 14:42:46.681337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.008 qpair failed and we were unable to recover it. 00:29:06.008 [2024-10-14 14:42:46.681622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.008 [2024-10-14 14:42:46.681639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.008 qpair failed and we were unable to recover it. 00:29:06.008 [2024-10-14 14:42:46.681950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.008 [2024-10-14 14:42:46.681960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.008 qpair failed and we were unable to recover it. 00:29:06.008 [2024-10-14 14:42:46.682252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.008 [2024-10-14 14:42:46.682262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.008 qpair failed and we were unable to recover it. 00:29:06.008 [2024-10-14 14:42:46.682571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.008 [2024-10-14 14:42:46.682581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.008 qpair failed and we were unable to recover it. 
00:29:06.008 [2024-10-14 14:42:46.682847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.008 [2024-10-14 14:42:46.682857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.008 qpair failed and we were unable to recover it.
00:29:06.008 [2024-10-14 14:42:46.683165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.008 [2024-10-14 14:42:46.683176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.008 qpair failed and we were unable to recover it.
00:29:06.008 [2024-10-14 14:42:46.683445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.008 [2024-10-14 14:42:46.683455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.008 qpair failed and we were unable to recover it.
00:29:06.008 [2024-10-14 14:42:46.683793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.008 [2024-10-14 14:42:46.683803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.008 qpair failed and we were unable to recover it.
00:29:06.008 [2024-10-14 14:42:46.684109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.008 [2024-10-14 14:42:46.684119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.008 qpair failed and we were unable to recover it.
00:29:06.008 [2024-10-14 14:42:46.684449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.008 [2024-10-14 14:42:46.684460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.008 qpair failed and we were unable to recover it.
00:29:06.008 [2024-10-14 14:42:46.684646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.008 [2024-10-14 14:42:46.684657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.008 qpair failed and we were unable to recover it.
00:29:06.008 [2024-10-14 14:42:46.684974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.008 [2024-10-14 14:42:46.684985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.008 qpair failed and we were unable to recover it.
00:29:06.008 [2024-10-14 14:42:46.685275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.008 [2024-10-14 14:42:46.685285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.008 qpair failed and we were unable to recover it.
00:29:06.008 [2024-10-14 14:42:46.685567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.008 [2024-10-14 14:42:46.685579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.008 qpair failed and we were unable to recover it.
00:29:06.008 [2024-10-14 14:42:46.685885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.008 [2024-10-14 14:42:46.685894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.008 qpair failed and we were unable to recover it.
00:29:06.008 [2024-10-14 14:42:46.686178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.008 [2024-10-14 14:42:46.686188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.008 qpair failed and we were unable to recover it.
00:29:06.008 [2024-10-14 14:42:46.686513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.008 [2024-10-14 14:42:46.686522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.008 qpair failed and we were unable to recover it.
00:29:06.008 [2024-10-14 14:42:46.686712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.008 [2024-10-14 14:42:46.686721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.008 qpair failed and we were unable to recover it.
00:29:06.008 [2024-10-14 14:42:46.687047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.008 [2024-10-14 14:42:46.687057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.008 qpair failed and we were unable to recover it.
00:29:06.008 [2024-10-14 14:42:46.687391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.008 [2024-10-14 14:42:46.687401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.008 qpair failed and we were unable to recover it.
00:29:06.008 [2024-10-14 14:42:46.687706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.009 [2024-10-14 14:42:46.687715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.009 qpair failed and we were unable to recover it.
00:29:06.009 [2024-10-14 14:42:46.688041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.009 [2024-10-14 14:42:46.688052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.009 qpair failed and we were unable to recover it.
00:29:06.009 [2024-10-14 14:42:46.688375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.009 [2024-10-14 14:42:46.688386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.009 qpair failed and we were unable to recover it.
00:29:06.009 [2024-10-14 14:42:46.688689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.009 [2024-10-14 14:42:46.688699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.009 qpair failed and we were unable to recover it.
00:29:06.009 [2024-10-14 14:42:46.689038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.009 [2024-10-14 14:42:46.689048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.009 qpair failed and we were unable to recover it.
00:29:06.009 [2024-10-14 14:42:46.689360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.009 [2024-10-14 14:42:46.689371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.009 qpair failed and we were unable to recover it.
00:29:06.009 [2024-10-14 14:42:46.689670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.009 [2024-10-14 14:42:46.689681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.009 qpair failed and we were unable to recover it.
00:29:06.009 [2024-10-14 14:42:46.690015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.009 [2024-10-14 14:42:46.690026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.009 qpair failed and we were unable to recover it.
00:29:06.009 [2024-10-14 14:42:46.690360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.009 [2024-10-14 14:42:46.690371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.009 qpair failed and we were unable to recover it.
00:29:06.009 [2024-10-14 14:42:46.690713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.009 [2024-10-14 14:42:46.690724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.009 qpair failed and we were unable to recover it.
00:29:06.009 [2024-10-14 14:42:46.691006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.009 [2024-10-14 14:42:46.691017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.009 qpair failed and we were unable to recover it.
00:29:06.009 [2024-10-14 14:42:46.691352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.009 [2024-10-14 14:42:46.691363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.009 qpair failed and we were unable to recover it.
00:29:06.009 [2024-10-14 14:42:46.691659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.009 [2024-10-14 14:42:46.691670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.009 qpair failed and we were unable to recover it.
00:29:06.009 [2024-10-14 14:42:46.691946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.009 [2024-10-14 14:42:46.691957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.009 qpair failed and we were unable to recover it.
00:29:06.009 [2024-10-14 14:42:46.692229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.009 [2024-10-14 14:42:46.692240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.009 qpair failed and we were unable to recover it.
00:29:06.009 [2024-10-14 14:42:46.692507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.009 [2024-10-14 14:42:46.692517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.009 qpair failed and we were unable to recover it.
00:29:06.009 [2024-10-14 14:42:46.692841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.009 [2024-10-14 14:42:46.692852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.009 qpair failed and we were unable to recover it.
00:29:06.009 [2024-10-14 14:42:46.693153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.009 [2024-10-14 14:42:46.693164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.009 qpair failed and we were unable to recover it.
00:29:06.009 [2024-10-14 14:42:46.693461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.009 [2024-10-14 14:42:46.693471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.009 qpair failed and we were unable to recover it.
00:29:06.009 [2024-10-14 14:42:46.693749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.009 [2024-10-14 14:42:46.693760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.009 qpair failed and we were unable to recover it.
00:29:06.009 [2024-10-14 14:42:46.694026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.009 [2024-10-14 14:42:46.694036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.009 qpair failed and we were unable to recover it.
00:29:06.009 [2024-10-14 14:42:46.694358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.009 [2024-10-14 14:42:46.694369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.009 qpair failed and we were unable to recover it.
00:29:06.009 [2024-10-14 14:42:46.694457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.009 [2024-10-14 14:42:46.694468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.009 qpair failed and we were unable to recover it.
00:29:06.009 [2024-10-14 14:42:46.694748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.009 [2024-10-14 14:42:46.694759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.009 qpair failed and we were unable to recover it.
00:29:06.009 [2024-10-14 14:42:46.695099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.009 [2024-10-14 14:42:46.695112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.009 qpair failed and we were unable to recover it.
00:29:06.009 [2024-10-14 14:42:46.695457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.009 [2024-10-14 14:42:46.695467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.009 qpair failed and we were unable to recover it.
00:29:06.009 [2024-10-14 14:42:46.695771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.009 [2024-10-14 14:42:46.695780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.009 qpair failed and we were unable to recover it.
00:29:06.009 [2024-10-14 14:42:46.696048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.009 [2024-10-14 14:42:46.696058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.009 qpair failed and we were unable to recover it.
00:29:06.009 [2024-10-14 14:42:46.696377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.009 [2024-10-14 14:42:46.696387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.009 qpair failed and we were unable to recover it.
00:29:06.009 [2024-10-14 14:42:46.696669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.009 [2024-10-14 14:42:46.696678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.009 qpair failed and we were unable to recover it.
00:29:06.009 [2024-10-14 14:42:46.696984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.009 [2024-10-14 14:42:46.696994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.009 qpair failed and we were unable to recover it.
00:29:06.009 [2024-10-14 14:42:46.697282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.009 [2024-10-14 14:42:46.697300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.009 qpair failed and we were unable to recover it.
00:29:06.009 [2024-10-14 14:42:46.697609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.009 [2024-10-14 14:42:46.697619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.009 qpair failed and we were unable to recover it.
00:29:06.009 [2024-10-14 14:42:46.697910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.009 [2024-10-14 14:42:46.697920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.009 qpair failed and we were unable to recover it.
00:29:06.009 [2024-10-14 14:42:46.698236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.009 [2024-10-14 14:42:46.698251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.009 qpair failed and we were unable to recover it.
00:29:06.009 [2024-10-14 14:42:46.698632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.009 [2024-10-14 14:42:46.698641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.009 qpair failed and we were unable to recover it.
00:29:06.009 [2024-10-14 14:42:46.698915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.009 [2024-10-14 14:42:46.698925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.009 qpair failed and we were unable to recover it.
00:29:06.009 [2024-10-14 14:42:46.699125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.009 [2024-10-14 14:42:46.699135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.009 qpair failed and we were unable to recover it.
00:29:06.009 [2024-10-14 14:42:46.699462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.009 [2024-10-14 14:42:46.699473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.009 qpair failed and we were unable to recover it.
00:29:06.009 [2024-10-14 14:42:46.699798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.009 [2024-10-14 14:42:46.699809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.009 qpair failed and we were unable to recover it.
00:29:06.009 [2024-10-14 14:42:46.700095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.009 [2024-10-14 14:42:46.700105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.009 qpair failed and we were unable to recover it.
00:29:06.009 [2024-10-14 14:42:46.700437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.010 [2024-10-14 14:42:46.700447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.010 qpair failed and we were unable to recover it.
00:29:06.010 [2024-10-14 14:42:46.700716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.010 [2024-10-14 14:42:46.700726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.010 qpair failed and we were unable to recover it.
00:29:06.010 [2024-10-14 14:42:46.700937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.010 [2024-10-14 14:42:46.700948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.010 qpair failed and we were unable to recover it.
00:29:06.285 [2024-10-14 14:42:46.701134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.285 [2024-10-14 14:42:46.701146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.285 qpair failed and we were unable to recover it.
00:29:06.285 [2024-10-14 14:42:46.701357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.285 [2024-10-14 14:42:46.701368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.285 qpair failed and we were unable to recover it.
00:29:06.285 [2024-10-14 14:42:46.701641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.285 [2024-10-14 14:42:46.701651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.285 qpair failed and we were unable to recover it.
00:29:06.285 [2024-10-14 14:42:46.701958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.285 [2024-10-14 14:42:46.701968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.285 qpair failed and we were unable to recover it.
00:29:06.285 [2024-10-14 14:42:46.702191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.285 [2024-10-14 14:42:46.702202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.285 qpair failed and we were unable to recover it.
00:29:06.285 [2024-10-14 14:42:46.702512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.285 [2024-10-14 14:42:46.702522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.285 qpair failed and we were unable to recover it.
00:29:06.285 [2024-10-14 14:42:46.702697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.285 [2024-10-14 14:42:46.702706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.285 qpair failed and we were unable to recover it.
00:29:06.285 [2024-10-14 14:42:46.702984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.285 [2024-10-14 14:42:46.702994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.285 qpair failed and we were unable to recover it.
00:29:06.285 [2024-10-14 14:42:46.703177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.285 [2024-10-14 14:42:46.703188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.285 qpair failed and we were unable to recover it.
00:29:06.285 [2024-10-14 14:42:46.703467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.285 [2024-10-14 14:42:46.703476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.285 qpair failed and we were unable to recover it.
00:29:06.285 [2024-10-14 14:42:46.703784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.285 [2024-10-14 14:42:46.703793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.285 qpair failed and we were unable to recover it.
00:29:06.285 [2024-10-14 14:42:46.704075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.285 [2024-10-14 14:42:46.704086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.285 qpair failed and we were unable to recover it.
00:29:06.285 [2024-10-14 14:42:46.704397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.285 [2024-10-14 14:42:46.704406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.285 qpair failed and we were unable to recover it.
00:29:06.285 [2024-10-14 14:42:46.704673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.285 [2024-10-14 14:42:46.704683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.285 qpair failed and we were unable to recover it.
00:29:06.285 [2024-10-14 14:42:46.704964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.285 [2024-10-14 14:42:46.704973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.285 qpair failed and we were unable to recover it.
00:29:06.285 [2024-10-14 14:42:46.705277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.285 [2024-10-14 14:42:46.705287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.285 qpair failed and we were unable to recover it.
00:29:06.285 [2024-10-14 14:42:46.705612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.285 [2024-10-14 14:42:46.705621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.285 qpair failed and we were unable to recover it.
00:29:06.285 [2024-10-14 14:42:46.705908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.285 [2024-10-14 14:42:46.705920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.285 qpair failed and we were unable to recover it.
00:29:06.285 [2024-10-14 14:42:46.706225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.285 [2024-10-14 14:42:46.706235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.285 qpair failed and we were unable to recover it.
00:29:06.285 [2024-10-14 14:42:46.706541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.285 [2024-10-14 14:42:46.706552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.285 qpair failed and we were unable to recover it.
00:29:06.285 [2024-10-14 14:42:46.706854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.285 [2024-10-14 14:42:46.706864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.285 qpair failed and we were unable to recover it.
00:29:06.286 [2024-10-14 14:42:46.707168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.286 [2024-10-14 14:42:46.707179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.286 qpair failed and we were unable to recover it.
00:29:06.286 [2024-10-14 14:42:46.707459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.286 [2024-10-14 14:42:46.707469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.286 qpair failed and we were unable to recover it.
00:29:06.286 [2024-10-14 14:42:46.707792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.286 [2024-10-14 14:42:46.707803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.286 qpair failed and we were unable to recover it.
00:29:06.286 [2024-10-14 14:42:46.708106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.286 [2024-10-14 14:42:46.708116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.286 qpair failed and we were unable to recover it.
00:29:06.286 [2024-10-14 14:42:46.708420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.286 [2024-10-14 14:42:46.708430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.286 qpair failed and we were unable to recover it.
00:29:06.286 [2024-10-14 14:42:46.708739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.286 [2024-10-14 14:42:46.708749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.286 qpair failed and we were unable to recover it.
00:29:06.286 [2024-10-14 14:42:46.709068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.286 [2024-10-14 14:42:46.709078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.286 qpair failed and we were unable to recover it.
00:29:06.286 [2024-10-14 14:42:46.709396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.286 [2024-10-14 14:42:46.709406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.286 qpair failed and we were unable to recover it.
00:29:06.286 [2024-10-14 14:42:46.709691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.286 [2024-10-14 14:42:46.709702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.286 qpair failed and we were unable to recover it.
00:29:06.286 [2024-10-14 14:42:46.710005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.286 [2024-10-14 14:42:46.710015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.286 qpair failed and we were unable to recover it.
00:29:06.286 [2024-10-14 14:42:46.710318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.286 [2024-10-14 14:42:46.710329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.286 qpair failed and we were unable to recover it.
00:29:06.286 [2024-10-14 14:42:46.710574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.286 [2024-10-14 14:42:46.710584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.286 qpair failed and we were unable to recover it.
00:29:06.286 [2024-10-14 14:42:46.710901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.286 [2024-10-14 14:42:46.710911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.286 qpair failed and we were unable to recover it.
00:29:06.286 [2024-10-14 14:42:46.711219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.286 [2024-10-14 14:42:46.711229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.286 qpair failed and we were unable to recover it.
00:29:06.286 [2024-10-14 14:42:46.711522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.286 [2024-10-14 14:42:46.711531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.286 qpair failed and we were unable to recover it.
00:29:06.286 [2024-10-14 14:42:46.711833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.286 [2024-10-14 14:42:46.711842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.286 qpair failed and we were unable to recover it.
00:29:06.286 [2024-10-14 14:42:46.712124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.286 [2024-10-14 14:42:46.712134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.286 qpair failed and we were unable to recover it.
00:29:06.286 [2024-10-14 14:42:46.712431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.286 [2024-10-14 14:42:46.712441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.286 qpair failed and we were unable to recover it.
00:29:06.286 [2024-10-14 14:42:46.712746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.286 [2024-10-14 14:42:46.712755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.286 qpair failed and we were unable to recover it.
00:29:06.286 [2024-10-14 14:42:46.713069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.286 [2024-10-14 14:42:46.713079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.286 qpair failed and we were unable to recover it.
00:29:06.286 [2024-10-14 14:42:46.713404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.286 [2024-10-14 14:42:46.713414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.286 qpair failed and we were unable to recover it.
00:29:06.286 [2024-10-14 14:42:46.713703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.286 [2024-10-14 14:42:46.713713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.286 qpair failed and we were unable to recover it.
00:29:06.286 [2024-10-14 14:42:46.714025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.286 [2024-10-14 14:42:46.714035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.286 qpair failed and we were unable to recover it.
00:29:06.286 [2024-10-14 14:42:46.714346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.286 [2024-10-14 14:42:46.714357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.286 qpair failed and we were unable to recover it.
00:29:06.286 [2024-10-14 14:42:46.714668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.286 [2024-10-14 14:42:46.714678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.286 qpair failed and we were unable to recover it.
00:29:06.286 [2024-10-14 14:42:46.714965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.286 [2024-10-14 14:42:46.714976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.286 qpair failed and we were unable to recover it.
00:29:06.286 [2024-10-14 14:42:46.715256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.286 [2024-10-14 14:42:46.715267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.286 qpair failed and we were unable to recover it.
00:29:06.286 [2024-10-14 14:42:46.715576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.286 [2024-10-14 14:42:46.715586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.286 qpair failed and we were unable to recover it.
00:29:06.286 [2024-10-14 14:42:46.715817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.286 [2024-10-14 14:42:46.715827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.286 qpair failed and we were unable to recover it.
00:29:06.286 [2024-10-14 14:42:46.716115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.286 [2024-10-14 14:42:46.716125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.286 qpair failed and we were unable to recover it.
00:29:06.286 [2024-10-14 14:42:46.716402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.286 [2024-10-14 14:42:46.716412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.286 qpair failed and we were unable to recover it.
00:29:06.286 [2024-10-14 14:42:46.716723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.286 [2024-10-14 14:42:46.716733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.286 qpair failed and we were unable to recover it.
00:29:06.286 [2024-10-14 14:42:46.717019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.286 [2024-10-14 14:42:46.717029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.286 qpair failed and we were unable to recover it.
00:29:06.286 [2024-10-14 14:42:46.717343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.286 [2024-10-14 14:42:46.717353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.286 qpair failed and we were unable to recover it.
00:29:06.286 [2024-10-14 14:42:46.717670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.286 [2024-10-14 14:42:46.717680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.286 qpair failed and we were unable to recover it. 00:29:06.286 [2024-10-14 14:42:46.717966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.286 [2024-10-14 14:42:46.717976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.286 qpair failed and we were unable to recover it. 00:29:06.286 [2024-10-14 14:42:46.718311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.286 [2024-10-14 14:42:46.718322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.286 qpair failed and we were unable to recover it. 00:29:06.286 [2024-10-14 14:42:46.718567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.286 [2024-10-14 14:42:46.718579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.286 qpair failed and we were unable to recover it. 00:29:06.286 [2024-10-14 14:42:46.718880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.286 [2024-10-14 14:42:46.718890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.286 qpair failed and we were unable to recover it. 
00:29:06.286 [2024-10-14 14:42:46.719193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.287 [2024-10-14 14:42:46.719203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.287 qpair failed and we were unable to recover it. 00:29:06.287 [2024-10-14 14:42:46.719529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.287 [2024-10-14 14:42:46.719539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.287 qpair failed and we were unable to recover it. 00:29:06.287 [2024-10-14 14:42:46.719891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.287 [2024-10-14 14:42:46.719901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.287 qpair failed and we were unable to recover it. 00:29:06.287 [2024-10-14 14:42:46.720213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.287 [2024-10-14 14:42:46.720223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.287 qpair failed and we were unable to recover it. 00:29:06.287 [2024-10-14 14:42:46.720541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.287 [2024-10-14 14:42:46.720552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.287 qpair failed and we were unable to recover it. 
00:29:06.287 [2024-10-14 14:42:46.720828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.287 [2024-10-14 14:42:46.720838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.287 qpair failed and we were unable to recover it. 00:29:06.287 [2024-10-14 14:42:46.721146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.287 [2024-10-14 14:42:46.721157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.287 qpair failed and we were unable to recover it. 00:29:06.287 [2024-10-14 14:42:46.721452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.287 [2024-10-14 14:42:46.721462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.287 qpair failed and we were unable to recover it. 00:29:06.287 [2024-10-14 14:42:46.721738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.287 [2024-10-14 14:42:46.721748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.287 qpair failed and we were unable to recover it. 00:29:06.287 [2024-10-14 14:42:46.722016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.287 [2024-10-14 14:42:46.722025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.287 qpair failed and we were unable to recover it. 
00:29:06.287 [2024-10-14 14:42:46.722347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.287 [2024-10-14 14:42:46.722357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.287 qpair failed and we were unable to recover it. 00:29:06.287 [2024-10-14 14:42:46.722644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.287 [2024-10-14 14:42:46.722654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.287 qpair failed and we were unable to recover it. 00:29:06.287 [2024-10-14 14:42:46.723037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.287 [2024-10-14 14:42:46.723047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.287 qpair failed and we were unable to recover it. 00:29:06.287 [2024-10-14 14:42:46.723257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.287 [2024-10-14 14:42:46.723267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.287 qpair failed and we were unable to recover it. 00:29:06.287 [2024-10-14 14:42:46.723456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.287 [2024-10-14 14:42:46.723467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.287 qpair failed and we were unable to recover it. 
00:29:06.287 [2024-10-14 14:42:46.723815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.287 [2024-10-14 14:42:46.723825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.287 qpair failed and we were unable to recover it. 00:29:06.287 [2024-10-14 14:42:46.724128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.287 [2024-10-14 14:42:46.724139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.287 qpair failed and we were unable to recover it. 00:29:06.287 [2024-10-14 14:42:46.724447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.287 [2024-10-14 14:42:46.724456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.287 qpair failed and we were unable to recover it. 00:29:06.287 [2024-10-14 14:42:46.724738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.287 [2024-10-14 14:42:46.724748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.287 qpair failed and we were unable to recover it. 00:29:06.287 [2024-10-14 14:42:46.725053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.287 [2024-10-14 14:42:46.725067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.287 qpair failed and we were unable to recover it. 
00:29:06.287 [2024-10-14 14:42:46.725267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.287 [2024-10-14 14:42:46.725278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.287 qpair failed and we were unable to recover it. 00:29:06.287 [2024-10-14 14:42:46.725578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.287 [2024-10-14 14:42:46.725588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.287 qpair failed and we were unable to recover it. 00:29:06.287 [2024-10-14 14:42:46.725790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.287 [2024-10-14 14:42:46.725800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.287 qpair failed and we were unable to recover it. 00:29:06.287 [2024-10-14 14:42:46.726125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.287 [2024-10-14 14:42:46.726135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.287 qpair failed and we were unable to recover it. 00:29:06.287 [2024-10-14 14:42:46.726429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.287 [2024-10-14 14:42:46.726438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.287 qpair failed and we were unable to recover it. 
00:29:06.287 [2024-10-14 14:42:46.726747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.287 [2024-10-14 14:42:46.726756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.287 qpair failed and we were unable to recover it. 00:29:06.287 [2024-10-14 14:42:46.727046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.287 [2024-10-14 14:42:46.727056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.287 qpair failed and we were unable to recover it. 00:29:06.287 [2024-10-14 14:42:46.727342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.287 [2024-10-14 14:42:46.727353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.287 qpair failed and we were unable to recover it. 00:29:06.287 [2024-10-14 14:42:46.727542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.287 [2024-10-14 14:42:46.727553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.287 qpair failed and we were unable to recover it. 00:29:06.287 [2024-10-14 14:42:46.727841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.287 [2024-10-14 14:42:46.727851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.287 qpair failed and we were unable to recover it. 
00:29:06.287 [2024-10-14 14:42:46.728157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.287 [2024-10-14 14:42:46.728167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.287 qpair failed and we were unable to recover it. 00:29:06.287 [2024-10-14 14:42:46.728471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.287 [2024-10-14 14:42:46.728481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.287 qpair failed and we were unable to recover it. 00:29:06.287 [2024-10-14 14:42:46.728762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.287 [2024-10-14 14:42:46.728772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.287 qpair failed and we were unable to recover it. 00:29:06.287 [2024-10-14 14:42:46.729054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.287 [2024-10-14 14:42:46.729066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.287 qpair failed and we were unable to recover it. 00:29:06.287 [2024-10-14 14:42:46.729366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.287 [2024-10-14 14:42:46.729376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.287 qpair failed and we were unable to recover it. 
00:29:06.287 [2024-10-14 14:42:46.729670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.287 [2024-10-14 14:42:46.729680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.287 qpair failed and we were unable to recover it. 00:29:06.287 [2024-10-14 14:42:46.729866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.287 [2024-10-14 14:42:46.729877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.287 qpair failed and we were unable to recover it. 00:29:06.287 [2024-10-14 14:42:46.730195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.287 [2024-10-14 14:42:46.730206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.287 qpair failed and we were unable to recover it. 00:29:06.287 [2024-10-14 14:42:46.730489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.287 [2024-10-14 14:42:46.730500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.287 qpair failed and we were unable to recover it. 00:29:06.287 [2024-10-14 14:42:46.730838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.287 [2024-10-14 14:42:46.730848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.287 qpair failed and we were unable to recover it. 
00:29:06.287 [2024-10-14 14:42:46.731054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.287 [2024-10-14 14:42:46.731066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.288 qpair failed and we were unable to recover it. 00:29:06.288 [2024-10-14 14:42:46.731357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.288 [2024-10-14 14:42:46.731366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.288 qpair failed and we were unable to recover it. 00:29:06.288 [2024-10-14 14:42:46.731684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.288 [2024-10-14 14:42:46.731694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.288 qpair failed and we were unable to recover it. 00:29:06.288 [2024-10-14 14:42:46.732003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.288 [2024-10-14 14:42:46.732013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.288 qpair failed and we were unable to recover it. 00:29:06.288 [2024-10-14 14:42:46.732215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.288 [2024-10-14 14:42:46.732225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.288 qpair failed and we were unable to recover it. 
00:29:06.288 [2024-10-14 14:42:46.732508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.288 [2024-10-14 14:42:46.732518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.288 qpair failed and we were unable to recover it. 00:29:06.288 [2024-10-14 14:42:46.732841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.288 [2024-10-14 14:42:46.732851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.288 qpair failed and we were unable to recover it. 00:29:06.288 [2024-10-14 14:42:46.733208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.288 [2024-10-14 14:42:46.733218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.288 qpair failed and we were unable to recover it. 00:29:06.288 [2024-10-14 14:42:46.733503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.288 [2024-10-14 14:42:46.733512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.288 qpair failed and we were unable to recover it. 00:29:06.288 [2024-10-14 14:42:46.733820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.288 [2024-10-14 14:42:46.733829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.288 qpair failed and we were unable to recover it. 
00:29:06.288 [2024-10-14 14:42:46.734117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.288 [2024-10-14 14:42:46.734128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.288 qpair failed and we were unable to recover it. 00:29:06.288 [2024-10-14 14:42:46.734425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.288 [2024-10-14 14:42:46.734435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.288 qpair failed and we were unable to recover it. 00:29:06.288 [2024-10-14 14:42:46.734745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.288 [2024-10-14 14:42:46.734755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.288 qpair failed and we were unable to recover it. 00:29:06.288 [2024-10-14 14:42:46.735042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.288 [2024-10-14 14:42:46.735052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.288 qpair failed and we were unable to recover it. 00:29:06.288 [2024-10-14 14:42:46.735346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.288 [2024-10-14 14:42:46.735357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.288 qpair failed and we were unable to recover it. 
00:29:06.288 [2024-10-14 14:42:46.735683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.288 [2024-10-14 14:42:46.735692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.288 qpair failed and we were unable to recover it. 00:29:06.288 [2024-10-14 14:42:46.735875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.288 [2024-10-14 14:42:46.735886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.288 qpair failed and we were unable to recover it. 00:29:06.288 [2024-10-14 14:42:46.736280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.288 [2024-10-14 14:42:46.736291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.288 qpair failed and we were unable to recover it. 00:29:06.288 [2024-10-14 14:42:46.736561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.288 [2024-10-14 14:42:46.736570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.288 qpair failed and we were unable to recover it. 00:29:06.288 [2024-10-14 14:42:46.736739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.288 [2024-10-14 14:42:46.736749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.288 qpair failed and we were unable to recover it. 
00:29:06.288 [2024-10-14 14:42:46.736981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.288 [2024-10-14 14:42:46.736992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.288 qpair failed and we were unable to recover it. 00:29:06.288 [2024-10-14 14:42:46.737364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.288 [2024-10-14 14:42:46.737374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.288 qpair failed and we were unable to recover it. 00:29:06.288 [2024-10-14 14:42:46.737653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.288 [2024-10-14 14:42:46.737662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.288 qpair failed and we were unable to recover it. 00:29:06.288 [2024-10-14 14:42:46.737874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.288 [2024-10-14 14:42:46.737884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.288 qpair failed and we were unable to recover it. 00:29:06.288 [2024-10-14 14:42:46.738135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.288 [2024-10-14 14:42:46.738146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.288 qpair failed and we were unable to recover it. 
00:29:06.288 [2024-10-14 14:42:46.738503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.288 [2024-10-14 14:42:46.738513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.288 qpair failed and we were unable to recover it. 00:29:06.288 [2024-10-14 14:42:46.738901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.288 [2024-10-14 14:42:46.738913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.288 qpair failed and we were unable to recover it. 00:29:06.288 [2024-10-14 14:42:46.739249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.288 [2024-10-14 14:42:46.739260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.288 qpair failed and we were unable to recover it. 00:29:06.288 [2024-10-14 14:42:46.739575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.288 [2024-10-14 14:42:46.739586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.288 qpair failed and we were unable to recover it. 00:29:06.288 [2024-10-14 14:42:46.739830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.288 [2024-10-14 14:42:46.739840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.288 qpair failed and we were unable to recover it. 
00:29:06.288 [2024-10-14 14:42:46.740148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.288 [2024-10-14 14:42:46.740158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.288 qpair failed and we were unable to recover it. 00:29:06.288 [2024-10-14 14:42:46.740443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.288 [2024-10-14 14:42:46.740453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.288 qpair failed and we were unable to recover it. 00:29:06.288 [2024-10-14 14:42:46.740762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.288 [2024-10-14 14:42:46.740772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.288 qpair failed and we were unable to recover it. 00:29:06.288 [2024-10-14 14:42:46.741059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.288 [2024-10-14 14:42:46.741072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.288 qpair failed and we were unable to recover it. 00:29:06.288 [2024-10-14 14:42:46.741423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.288 [2024-10-14 14:42:46.741433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.288 qpair failed and we were unable to recover it. 
00:29:06.288 [2024-10-14 14:42:46.741743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.288 [2024-10-14 14:42:46.741753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.288 qpair failed and we were unable to recover it. 00:29:06.288 [2024-10-14 14:42:46.742040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.288 [2024-10-14 14:42:46.742051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.288 qpair failed and we were unable to recover it. 00:29:06.288 [2024-10-14 14:42:46.742356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.288 [2024-10-14 14:42:46.742367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.288 qpair failed and we were unable to recover it. 00:29:06.288 [2024-10-14 14:42:46.742672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.288 [2024-10-14 14:42:46.742683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.288 qpair failed and we were unable to recover it. 00:29:06.288 [2024-10-14 14:42:46.742887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.288 [2024-10-14 14:42:46.742897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.288 qpair failed and we were unable to recover it. 
00:29:06.288 [2024-10-14 14:42:46.743306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.288 [2024-10-14 14:42:46.743317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.288 qpair failed and we were unable to recover it. 00:29:06.288 [2024-10-14 14:42:46.743599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.289 [2024-10-14 14:42:46.743608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.289 qpair failed and we were unable to recover it. 00:29:06.289 [2024-10-14 14:42:46.743842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.289 [2024-10-14 14:42:46.743852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.289 qpair failed and we were unable to recover it. 00:29:06.289 [2024-10-14 14:42:46.744154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.289 [2024-10-14 14:42:46.744164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.289 qpair failed and we were unable to recover it. 00:29:06.289 [2024-10-14 14:42:46.744480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.289 [2024-10-14 14:42:46.744489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.289 qpair failed and we were unable to recover it. 
00:29:06.289 [2024-10-14 14:42:46.744804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.289 [2024-10-14 14:42:46.744813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.289 qpair failed and we were unable to recover it. 00:29:06.289 [2024-10-14 14:42:46.745106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.289 [2024-10-14 14:42:46.745116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.289 qpair failed and we were unable to recover it. 00:29:06.289 [2024-10-14 14:42:46.745314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.289 [2024-10-14 14:42:46.745323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.289 qpair failed and we were unable to recover it. 00:29:06.289 [2024-10-14 14:42:46.745692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.289 [2024-10-14 14:42:46.745703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.289 qpair failed and we were unable to recover it. 00:29:06.289 [2024-10-14 14:42:46.746031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.289 [2024-10-14 14:42:46.746041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.289 qpair failed and we were unable to recover it. 
00:29:06.289 [2024-10-14 14:42:46.746356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.289 [2024-10-14 14:42:46.746366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.289 qpair failed and we were unable to recover it. 00:29:06.289 [2024-10-14 14:42:46.746670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.289 [2024-10-14 14:42:46.746679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.289 qpair failed and we were unable to recover it. 00:29:06.289 [2024-10-14 14:42:46.746965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.289 [2024-10-14 14:42:46.746974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.289 qpair failed and we were unable to recover it. 00:29:06.289 [2024-10-14 14:42:46.747254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.289 [2024-10-14 14:42:46.747264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.289 qpair failed and we were unable to recover it. 00:29:06.289 [2024-10-14 14:42:46.747573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.289 [2024-10-14 14:42:46.747583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.289 qpair failed and we were unable to recover it. 
00:29:06.289 [2024-10-14 14:42:46.747874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.289 [2024-10-14 14:42:46.747884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.289 qpair failed and we were unable to recover it. 00:29:06.289 [2024-10-14 14:42:46.748068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.289 [2024-10-14 14:42:46.748080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.289 qpair failed and we were unable to recover it. 00:29:06.289 [2024-10-14 14:42:46.748435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.289 [2024-10-14 14:42:46.748445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.289 qpair failed and we were unable to recover it. 00:29:06.289 [2024-10-14 14:42:46.748738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.289 [2024-10-14 14:42:46.748748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.289 qpair failed and we were unable to recover it. 00:29:06.289 [2024-10-14 14:42:46.749056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.289 [2024-10-14 14:42:46.749070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.289 qpair failed and we were unable to recover it. 
00:29:06.289 [2024-10-14 14:42:46.749353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.289 [2024-10-14 14:42:46.749363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.289 qpair failed and we were unable to recover it. 00:29:06.289 [2024-10-14 14:42:46.749703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.289 [2024-10-14 14:42:46.749713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.289 qpair failed and we were unable to recover it. 00:29:06.289 [2024-10-14 14:42:46.750000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.289 [2024-10-14 14:42:46.750010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.289 qpair failed and we were unable to recover it. 00:29:06.289 [2024-10-14 14:42:46.750283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.289 [2024-10-14 14:42:46.750293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.289 qpair failed and we were unable to recover it. 00:29:06.289 [2024-10-14 14:42:46.750595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.289 [2024-10-14 14:42:46.750604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.289 qpair failed and we were unable to recover it. 
00:29:06.289 [2024-10-14 14:42:46.750915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.289 [2024-10-14 14:42:46.750924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.289 qpair failed and we were unable to recover it. 00:29:06.289 [2024-10-14 14:42:46.751234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.289 [2024-10-14 14:42:46.751244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.289 qpair failed and we were unable to recover it. 00:29:06.289 [2024-10-14 14:42:46.751579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.289 [2024-10-14 14:42:46.751591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.289 qpair failed and we were unable to recover it. 00:29:06.289 [2024-10-14 14:42:46.751912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.289 [2024-10-14 14:42:46.751923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.289 qpair failed and we were unable to recover it. 00:29:06.289 [2024-10-14 14:42:46.752237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.289 [2024-10-14 14:42:46.752248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.289 qpair failed and we were unable to recover it. 
00:29:06.289 [2024-10-14 14:42:46.752572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.289 [2024-10-14 14:42:46.752583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.289 qpair failed and we were unable to recover it. 00:29:06.289 [2024-10-14 14:42:46.752887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.289 [2024-10-14 14:42:46.752897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.289 qpair failed and we were unable to recover it. 00:29:06.289 [2024-10-14 14:42:46.753164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.289 [2024-10-14 14:42:46.753175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.289 qpair failed and we were unable to recover it. 00:29:06.289 [2024-10-14 14:42:46.753474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.289 [2024-10-14 14:42:46.753484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.289 qpair failed and we were unable to recover it. 00:29:06.289 [2024-10-14 14:42:46.753790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.289 [2024-10-14 14:42:46.753800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.289 qpair failed and we were unable to recover it. 
00:29:06.289 [2024-10-14 14:42:46.754104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.290 [2024-10-14 14:42:46.754115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.290 qpair failed and we were unable to recover it. 00:29:06.290 [2024-10-14 14:42:46.754432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.290 [2024-10-14 14:42:46.754442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.290 qpair failed and we were unable to recover it. 00:29:06.290 [2024-10-14 14:42:46.754727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.290 [2024-10-14 14:42:46.754737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.290 qpair failed and we were unable to recover it. 00:29:06.290 [2024-10-14 14:42:46.754921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.290 [2024-10-14 14:42:46.754932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.290 qpair failed and we were unable to recover it. 00:29:06.290 [2024-10-14 14:42:46.755235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.290 [2024-10-14 14:42:46.755246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.290 qpair failed and we were unable to recover it. 
00:29:06.290 [2024-10-14 14:42:46.755515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.290 [2024-10-14 14:42:46.755525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.290 qpair failed and we were unable to recover it. 00:29:06.290 [2024-10-14 14:42:46.755806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.290 [2024-10-14 14:42:46.755815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.290 qpair failed and we were unable to recover it. 00:29:06.290 [2024-10-14 14:42:46.756122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.290 [2024-10-14 14:42:46.756132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.290 qpair failed and we were unable to recover it. 00:29:06.290 [2024-10-14 14:42:46.756307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.290 [2024-10-14 14:42:46.756319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.290 qpair failed and we were unable to recover it. 00:29:06.290 [2024-10-14 14:42:46.756612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.290 [2024-10-14 14:42:46.756622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.290 qpair failed and we were unable to recover it. 
00:29:06.290 [2024-10-14 14:42:46.756813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.290 [2024-10-14 14:42:46.756824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.290 qpair failed and we were unable to recover it. 00:29:06.290 [2024-10-14 14:42:46.757000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.290 [2024-10-14 14:42:46.757011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.290 qpair failed and we were unable to recover it. 00:29:06.290 [2024-10-14 14:42:46.757396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.290 [2024-10-14 14:42:46.757408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.290 qpair failed and we were unable to recover it. 00:29:06.290 [2024-10-14 14:42:46.757692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.290 [2024-10-14 14:42:46.757702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.290 qpair failed and we were unable to recover it. 00:29:06.290 [2024-10-14 14:42:46.757964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.290 [2024-10-14 14:42:46.757974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.290 qpair failed and we were unable to recover it. 
00:29:06.290 [2024-10-14 14:42:46.758245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.290 [2024-10-14 14:42:46.758256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.290 qpair failed and we were unable to recover it. 00:29:06.290 [2024-10-14 14:42:46.758459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.290 [2024-10-14 14:42:46.758468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.290 qpair failed and we were unable to recover it. 00:29:06.290 [2024-10-14 14:42:46.758833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.290 [2024-10-14 14:42:46.758844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.290 qpair failed and we were unable to recover it. 00:29:06.290 [2024-10-14 14:42:46.759071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.290 [2024-10-14 14:42:46.759081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.290 qpair failed and we were unable to recover it. 00:29:06.290 [2024-10-14 14:42:46.759431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.290 [2024-10-14 14:42:46.759448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.290 qpair failed and we were unable to recover it. 
00:29:06.290 [2024-10-14 14:42:46.759647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.290 [2024-10-14 14:42:46.759658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.290 qpair failed and we were unable to recover it. 00:29:06.290 [2024-10-14 14:42:46.759980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.290 [2024-10-14 14:42:46.759991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.290 qpair failed and we were unable to recover it. 00:29:06.290 [2024-10-14 14:42:46.760271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.290 [2024-10-14 14:42:46.760282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.290 qpair failed and we were unable to recover it. 00:29:06.290 [2024-10-14 14:42:46.760578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.290 [2024-10-14 14:42:46.760589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.290 qpair failed and we were unable to recover it. 00:29:06.290 [2024-10-14 14:42:46.760902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.290 [2024-10-14 14:42:46.760912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.290 qpair failed and we were unable to recover it. 
00:29:06.290 [2024-10-14 14:42:46.761203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.290 [2024-10-14 14:42:46.761213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.290 qpair failed and we were unable to recover it. 00:29:06.290 [2024-10-14 14:42:46.761525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.290 [2024-10-14 14:42:46.761536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.290 qpair failed and we were unable to recover it. 00:29:06.290 [2024-10-14 14:42:46.761846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.290 [2024-10-14 14:42:46.761856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.290 qpair failed and we were unable to recover it. 00:29:06.290 [2024-10-14 14:42:46.762184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.290 [2024-10-14 14:42:46.762195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.290 qpair failed and we were unable to recover it. 00:29:06.290 [2024-10-14 14:42:46.762573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.290 [2024-10-14 14:42:46.762584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.290 qpair failed and we were unable to recover it. 
00:29:06.290 [2024-10-14 14:42:46.762847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.290 [2024-10-14 14:42:46.762856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.290 qpair failed and we were unable to recover it. 00:29:06.290 [2024-10-14 14:42:46.763217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.290 [2024-10-14 14:42:46.763227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.290 qpair failed and we were unable to recover it. 00:29:06.290 [2024-10-14 14:42:46.763446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.290 [2024-10-14 14:42:46.763456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.290 qpair failed and we were unable to recover it. 00:29:06.290 [2024-10-14 14:42:46.763664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.290 [2024-10-14 14:42:46.763674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.290 qpair failed and we were unable to recover it. 00:29:06.290 [2024-10-14 14:42:46.763974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.290 [2024-10-14 14:42:46.763984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.290 qpair failed and we were unable to recover it. 
00:29:06.290 [2024-10-14 14:42:46.764306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.290 [2024-10-14 14:42:46.764317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.290 qpair failed and we were unable to recover it. 00:29:06.290 [2024-10-14 14:42:46.764598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.290 [2024-10-14 14:42:46.764608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.290 qpair failed and we were unable to recover it. 00:29:06.290 [2024-10-14 14:42:46.764888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.290 [2024-10-14 14:42:46.764898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.290 qpair failed and we were unable to recover it. 00:29:06.290 [2024-10-14 14:42:46.765256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.290 [2024-10-14 14:42:46.765266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.290 qpair failed and we were unable to recover it. 00:29:06.290 [2024-10-14 14:42:46.765558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.290 [2024-10-14 14:42:46.765568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.290 qpair failed and we were unable to recover it. 
00:29:06.290 [2024-10-14 14:42:46.765911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.290 [2024-10-14 14:42:46.765921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.291 qpair failed and we were unable to recover it. 00:29:06.291 [2024-10-14 14:42:46.766234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.291 [2024-10-14 14:42:46.766245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.291 qpair failed and we were unable to recover it. 00:29:06.291 [2024-10-14 14:42:46.766563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.291 [2024-10-14 14:42:46.766573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.291 qpair failed and we were unable to recover it. 00:29:06.291 [2024-10-14 14:42:46.766899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.291 [2024-10-14 14:42:46.766909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.291 qpair failed and we were unable to recover it. 00:29:06.291 [2024-10-14 14:42:46.767201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.291 [2024-10-14 14:42:46.767212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.291 qpair failed and we were unable to recover it. 
00:29:06.291 [2024-10-14 14:42:46.767503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.291 [2024-10-14 14:42:46.767513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.291 qpair failed and we were unable to recover it. 00:29:06.291 [2024-10-14 14:42:46.767712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.291 [2024-10-14 14:42:46.767722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.291 qpair failed and we were unable to recover it. 00:29:06.291 [2024-10-14 14:42:46.768054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.291 [2024-10-14 14:42:46.768069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.291 qpair failed and we were unable to recover it. 00:29:06.291 [2024-10-14 14:42:46.768378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.291 [2024-10-14 14:42:46.768388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.291 qpair failed and we were unable to recover it. 00:29:06.291 [2024-10-14 14:42:46.768674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.291 [2024-10-14 14:42:46.768684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.291 qpair failed and we were unable to recover it. 
00:29:06.291 [2024-10-14 14:42:46.768989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.291 [2024-10-14 14:42:46.768999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.291 qpair failed and we were unable to recover it.
00:29:06.291 [2024-10-14 14:42:46.769322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.291 [2024-10-14 14:42:46.769333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.291 qpair failed and we were unable to recover it.
00:29:06.291 [2024-10-14 14:42:46.769662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.291 [2024-10-14 14:42:46.769673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.291 qpair failed and we were unable to recover it.
00:29:06.291 [2024-10-14 14:42:46.769966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.291 [2024-10-14 14:42:46.769976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.291 qpair failed and we were unable to recover it.
00:29:06.291 [2024-10-14 14:42:46.770277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.291 [2024-10-14 14:42:46.770287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.291 qpair failed and we were unable to recover it.
00:29:06.291 [2024-10-14 14:42:46.770676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.291 [2024-10-14 14:42:46.770686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.291 qpair failed and we were unable to recover it.
00:29:06.291 [2024-10-14 14:42:46.770999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.291 [2024-10-14 14:42:46.771008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.291 qpair failed and we were unable to recover it.
00:29:06.291 [2024-10-14 14:42:46.771298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.291 [2024-10-14 14:42:46.771310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.291 qpair failed and we were unable to recover it.
00:29:06.291 [2024-10-14 14:42:46.771595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.291 [2024-10-14 14:42:46.771605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.291 qpair failed and we were unable to recover it.
00:29:06.291 [2024-10-14 14:42:46.771886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.291 [2024-10-14 14:42:46.771896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.291 qpair failed and we were unable to recover it.
00:29:06.291 [2024-10-14 14:42:46.772264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.291 [2024-10-14 14:42:46.772276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.291 qpair failed and we were unable to recover it.
00:29:06.291 [2024-10-14 14:42:46.772602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.291 [2024-10-14 14:42:46.772612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.291 qpair failed and we were unable to recover it.
00:29:06.291 [2024-10-14 14:42:46.772890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.291 [2024-10-14 14:42:46.772900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.291 qpair failed and we were unable to recover it.
00:29:06.291 [2024-10-14 14:42:46.773247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.291 [2024-10-14 14:42:46.773258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.291 qpair failed and we were unable to recover it.
00:29:06.291 [2024-10-14 14:42:46.773547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.291 [2024-10-14 14:42:46.773556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.291 qpair failed and we were unable to recover it.
00:29:06.291 [2024-10-14 14:42:46.773849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.291 [2024-10-14 14:42:46.773860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.291 qpair failed and we were unable to recover it.
00:29:06.291 [2024-10-14 14:42:46.774165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.291 [2024-10-14 14:42:46.774175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.291 qpair failed and we were unable to recover it.
00:29:06.291 [2024-10-14 14:42:46.774465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.291 [2024-10-14 14:42:46.774475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.291 qpair failed and we were unable to recover it.
00:29:06.291 [2024-10-14 14:42:46.774787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.291 [2024-10-14 14:42:46.774798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.291 qpair failed and we were unable to recover it.
00:29:06.291 [2024-10-14 14:42:46.774962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.291 [2024-10-14 14:42:46.774974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.291 qpair failed and we were unable to recover it.
00:29:06.291 [2024-10-14 14:42:46.775259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.291 [2024-10-14 14:42:46.775269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.291 qpair failed and we were unable to recover it.
00:29:06.291 [2024-10-14 14:42:46.775588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.291 [2024-10-14 14:42:46.775597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.291 qpair failed and we were unable to recover it.
00:29:06.291 [2024-10-14 14:42:46.775905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.291 [2024-10-14 14:42:46.775915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.291 qpair failed and we were unable to recover it.
00:29:06.291 [2024-10-14 14:42:46.776245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.291 [2024-10-14 14:42:46.776257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.291 qpair failed and we were unable to recover it.
00:29:06.291 [2024-10-14 14:42:46.776572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.291 [2024-10-14 14:42:46.776583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.291 qpair failed and we were unable to recover it.
00:29:06.291 [2024-10-14 14:42:46.776891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.291 [2024-10-14 14:42:46.776901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.291 qpair failed and we were unable to recover it.
00:29:06.291 [2024-10-14 14:42:46.777182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.291 [2024-10-14 14:42:46.777192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.291 qpair failed and we were unable to recover it.
00:29:06.291 [2024-10-14 14:42:46.777462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.291 [2024-10-14 14:42:46.777472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.291 qpair failed and we were unable to recover it.
00:29:06.291 [2024-10-14 14:42:46.777789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.291 [2024-10-14 14:42:46.777799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.291 qpair failed and we were unable to recover it.
00:29:06.291 [2024-10-14 14:42:46.778142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.291 [2024-10-14 14:42:46.778152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.291 qpair failed and we were unable to recover it.
00:29:06.291 [2024-10-14 14:42:46.778453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.291 [2024-10-14 14:42:46.778464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.291 qpair failed and we were unable to recover it.
00:29:06.291 [2024-10-14 14:42:46.778780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.292 [2024-10-14 14:42:46.778790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.292 qpair failed and we were unable to recover it.
00:29:06.292 [2024-10-14 14:42:46.779075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.292 [2024-10-14 14:42:46.779086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.292 qpair failed and we were unable to recover it.
00:29:06.292 [2024-10-14 14:42:46.779401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.292 [2024-10-14 14:42:46.779412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.292 qpair failed and we were unable to recover it.
00:29:06.292 [2024-10-14 14:42:46.779716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.292 [2024-10-14 14:42:46.779726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.292 qpair failed and we were unable to recover it.
00:29:06.292 [2024-10-14 14:42:46.780014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.292 [2024-10-14 14:42:46.780023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.292 qpair failed and we were unable to recover it.
00:29:06.292 [2024-10-14 14:42:46.780396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.292 [2024-10-14 14:42:46.780406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.292 qpair failed and we were unable to recover it.
00:29:06.292 [2024-10-14 14:42:46.780706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.292 [2024-10-14 14:42:46.780718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.292 qpair failed and we were unable to recover it.
00:29:06.292 [2024-10-14 14:42:46.781025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.292 [2024-10-14 14:42:46.781035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.292 qpair failed and we were unable to recover it.
00:29:06.292 [2024-10-14 14:42:46.781351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.292 [2024-10-14 14:42:46.781362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.292 qpair failed and we were unable to recover it.
00:29:06.292 [2024-10-14 14:42:46.781643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.292 [2024-10-14 14:42:46.781653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.292 qpair failed and we were unable to recover it.
00:29:06.292 [2024-10-14 14:42:46.781933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.292 [2024-10-14 14:42:46.781944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.292 qpair failed and we were unable to recover it.
00:29:06.292 [2024-10-14 14:42:46.782222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.292 [2024-10-14 14:42:46.782232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.292 qpair failed and we were unable to recover it.
00:29:06.292 [2024-10-14 14:42:46.782538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.292 [2024-10-14 14:42:46.782549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.292 qpair failed and we were unable to recover it.
00:29:06.292 [2024-10-14 14:42:46.782847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.292 [2024-10-14 14:42:46.782857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.292 qpair failed and we were unable to recover it.
00:29:06.292 [2024-10-14 14:42:46.783165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.292 [2024-10-14 14:42:46.783176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.292 qpair failed and we were unable to recover it.
00:29:06.292 [2024-10-14 14:42:46.783459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.292 [2024-10-14 14:42:46.783470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.292 qpair failed and we were unable to recover it.
00:29:06.292 [2024-10-14 14:42:46.783758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.292 [2024-10-14 14:42:46.783768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.292 qpair failed and we were unable to recover it.
00:29:06.292 [2024-10-14 14:42:46.784091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.292 [2024-10-14 14:42:46.784102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.292 qpair failed and we were unable to recover it.
00:29:06.292 [2024-10-14 14:42:46.784363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.292 [2024-10-14 14:42:46.784374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.292 qpair failed and we were unable to recover it.
00:29:06.292 [2024-10-14 14:42:46.784676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.292 [2024-10-14 14:42:46.784685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.292 qpair failed and we were unable to recover it.
00:29:06.292 [2024-10-14 14:42:46.785043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.292 [2024-10-14 14:42:46.785054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.292 qpair failed and we were unable to recover it.
00:29:06.292 [2024-10-14 14:42:46.785381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.292 [2024-10-14 14:42:46.785392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.292 qpair failed and we were unable to recover it.
00:29:06.292 [2024-10-14 14:42:46.785671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.292 [2024-10-14 14:42:46.785682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.292 qpair failed and we were unable to recover it.
00:29:06.292 [2024-10-14 14:42:46.785985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.292 [2024-10-14 14:42:46.785995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.292 qpair failed and we were unable to recover it.
00:29:06.292 [2024-10-14 14:42:46.786304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.292 [2024-10-14 14:42:46.786315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.292 qpair failed and we were unable to recover it.
00:29:06.292 [2024-10-14 14:42:46.786606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.292 [2024-10-14 14:42:46.786617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.292 qpair failed and we were unable to recover it.
00:29:06.292 [2024-10-14 14:42:46.786928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.292 [2024-10-14 14:42:46.786938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.292 qpair failed and we were unable to recover it.
00:29:06.292 [2024-10-14 14:42:46.787233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.292 [2024-10-14 14:42:46.787243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.292 qpair failed and we were unable to recover it.
00:29:06.292 [2024-10-14 14:42:46.787541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.292 [2024-10-14 14:42:46.787551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.292 qpair failed and we were unable to recover it.
00:29:06.292 [2024-10-14 14:42:46.787834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.292 [2024-10-14 14:42:46.787845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.292 qpair failed and we were unable to recover it.
00:29:06.292 [2024-10-14 14:42:46.788148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.292 [2024-10-14 14:42:46.788158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.292 qpair failed and we were unable to recover it.
00:29:06.292 [2024-10-14 14:42:46.788543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.292 [2024-10-14 14:42:46.788553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.292 qpair failed and we were unable to recover it.
00:29:06.292 [2024-10-14 14:42:46.788863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.292 [2024-10-14 14:42:46.788873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.292 qpair failed and we were unable to recover it.
00:29:06.292 [2024-10-14 14:42:46.789097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.292 [2024-10-14 14:42:46.789107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.292 qpair failed and we were unable to recover it.
00:29:06.292 [2024-10-14 14:42:46.789481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.292 [2024-10-14 14:42:46.789491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.292 qpair failed and we were unable to recover it.
00:29:06.292 [2024-10-14 14:42:46.789776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.292 [2024-10-14 14:42:46.789786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.292 qpair failed and we were unable to recover it.
00:29:06.292 [2024-10-14 14:42:46.790118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.292 [2024-10-14 14:42:46.790129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.292 qpair failed and we were unable to recover it.
00:29:06.292 [2024-10-14 14:42:46.790452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.292 [2024-10-14 14:42:46.790463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.292 qpair failed and we were unable to recover it.
00:29:06.292 [2024-10-14 14:42:46.790768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.292 [2024-10-14 14:42:46.790779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.292 qpair failed and we were unable to recover it.
00:29:06.292 [2024-10-14 14:42:46.791085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.292 [2024-10-14 14:42:46.791097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.292 qpair failed and we were unable to recover it.
00:29:06.292 [2024-10-14 14:42:46.791463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.292 [2024-10-14 14:42:46.791473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.292 qpair failed and we were unable to recover it.
00:29:06.292 [2024-10-14 14:42:46.791773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.293 [2024-10-14 14:42:46.791783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.293 qpair failed and we were unable to recover it.
00:29:06.293 [2024-10-14 14:42:46.792069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.293 [2024-10-14 14:42:46.792079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.293 qpair failed and we were unable to recover it.
00:29:06.293 [2024-10-14 14:42:46.792355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.293 [2024-10-14 14:42:46.792365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.293 qpair failed and we were unable to recover it.
00:29:06.293 [2024-10-14 14:42:46.792680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.293 [2024-10-14 14:42:46.792690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.293 qpair failed and we were unable to recover it.
00:29:06.293 [2024-10-14 14:42:46.792976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.293 [2024-10-14 14:42:46.792986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.293 qpair failed and we were unable to recover it.
00:29:06.293 [2024-10-14 14:42:46.793296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.293 [2024-10-14 14:42:46.793307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.293 qpair failed and we were unable to recover it.
00:29:06.293 [2024-10-14 14:42:46.793621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.293 [2024-10-14 14:42:46.793633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.293 qpair failed and we were unable to recover it.
00:29:06.293 [2024-10-14 14:42:46.793920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.293 [2024-10-14 14:42:46.793930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.293 qpair failed and we were unable to recover it.
00:29:06.293 [2024-10-14 14:42:46.794209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.293 [2024-10-14 14:42:46.794220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.293 qpair failed and we were unable to recover it.
00:29:06.293 [2024-10-14 14:42:46.794547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.293 [2024-10-14 14:42:46.794558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.293 qpair failed and we were unable to recover it.
00:29:06.293 [2024-10-14 14:42:46.794865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.293 [2024-10-14 14:42:46.794877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.293 qpair failed and we were unable to recover it.
00:29:06.293 [2024-10-14 14:42:46.795156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.293 [2024-10-14 14:42:46.795168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.293 qpair failed and we were unable to recover it.
00:29:06.293 [2024-10-14 14:42:46.795463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.293 [2024-10-14 14:42:46.795474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.293 qpair failed and we were unable to recover it.
00:29:06.293 [2024-10-14 14:42:46.795780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.293 [2024-10-14 14:42:46.795790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.293 qpair failed and we were unable to recover it.
00:29:06.293 [2024-10-14 14:42:46.796080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.293 [2024-10-14 14:42:46.796091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.293 qpair failed and we were unable to recover it.
00:29:06.293 [2024-10-14 14:42:46.796387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.293 [2024-10-14 14:42:46.796397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.293 qpair failed and we were unable to recover it.
00:29:06.293 [2024-10-14 14:42:46.796709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.293 [2024-10-14 14:42:46.796720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.293 qpair failed and we were unable to recover it.
00:29:06.293 [2024-10-14 14:42:46.797000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.293 [2024-10-14 14:42:46.797010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.293 qpair failed and we were unable to recover it.
00:29:06.293 [2024-10-14 14:42:46.797313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.293 [2024-10-14 14:42:46.797324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.293 qpair failed and we were unable to recover it.
00:29:06.293 [2024-10-14 14:42:46.797665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.293 [2024-10-14 14:42:46.797675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.293 qpair failed and we were unable to recover it.
00:29:06.293 [2024-10-14 14:42:46.798016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.293 [2024-10-14 14:42:46.798026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.293 qpair failed and we were unable to recover it.
00:29:06.293 [2024-10-14 14:42:46.798356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.293 [2024-10-14 14:42:46.798366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.293 qpair failed and we were unable to recover it.
00:29:06.293 [2024-10-14 14:42:46.798584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.293 [2024-10-14 14:42:46.798594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.293 qpair failed and we were unable to recover it.
00:29:06.293 [2024-10-14 14:42:46.798928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.293 [2024-10-14 14:42:46.798938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.293 qpair failed and we were unable to recover it.
00:29:06.293 [2024-10-14 14:42:46.799243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.293 [2024-10-14 14:42:46.799254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.293 qpair failed and we were unable to recover it.
00:29:06.293 [2024-10-14 14:42:46.799468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.293 [2024-10-14 14:42:46.799478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.293 qpair failed and we were unable to recover it.
00:29:06.293 [2024-10-14 14:42:46.799819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.293 [2024-10-14 14:42:46.799829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.293 qpair failed and we were unable to recover it.
00:29:06.293 [2024-10-14 14:42:46.800142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.293 [2024-10-14 14:42:46.800153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.293 qpair failed and we were unable to recover it.
00:29:06.293 [2024-10-14 14:42:46.800365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.293 [2024-10-14 14:42:46.800377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.293 qpair failed and we were unable to recover it.
00:29:06.293 [2024-10-14 14:42:46.800695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.293 [2024-10-14 14:42:46.800705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.293 qpair failed and we were unable to recover it.
00:29:06.293 [2024-10-14 14:42:46.801014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.293 [2024-10-14 14:42:46.801024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.293 qpair failed and we were unable to recover it.
00:29:06.293 [2024-10-14 14:42:46.801346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.293 [2024-10-14 14:42:46.801356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.293 qpair failed and we were unable to recover it.
00:29:06.293 [2024-10-14 14:42:46.801648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.293 [2024-10-14 14:42:46.801658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.293 qpair failed and we were unable to recover it.
00:29:06.293 [2024-10-14 14:42:46.801960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.293 [2024-10-14 14:42:46.801972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.293 qpair failed and we were unable to recover it.
00:29:06.293 [2024-10-14 14:42:46.802282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.293 [2024-10-14 14:42:46.802293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.293 qpair failed and we were unable to recover it.
00:29:06.293 [2024-10-14 14:42:46.802579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.293 [2024-10-14 14:42:46.802589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.293 qpair failed and we were unable to recover it.
00:29:06.293 [2024-10-14 14:42:46.802775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.293 [2024-10-14 14:42:46.802786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.293 qpair failed and we were unable to recover it.
00:29:06.293 [2024-10-14 14:42:46.802982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.293 [2024-10-14 14:42:46.802993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.293 qpair failed and we were unable to recover it.
00:29:06.293 [2024-10-14 14:42:46.803273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.293 [2024-10-14 14:42:46.803285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.293 qpair failed and we were unable to recover it.
00:29:06.293 [2024-10-14 14:42:46.803608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.293 [2024-10-14 14:42:46.803620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.293 qpair failed and we were unable to recover it.
00:29:06.293 [2024-10-14 14:42:46.803924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.293 [2024-10-14 14:42:46.803935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.294 qpair failed and we were unable to recover it.
00:29:06.294 [2024-10-14 14:42:46.804223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.294 [2024-10-14 14:42:46.804234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.294 qpair failed and we were unable to recover it.
00:29:06.294 [2024-10-14 14:42:46.804550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.294 [2024-10-14 14:42:46.804559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.294 qpair failed and we were unable to recover it.
00:29:06.294 [2024-10-14 14:42:46.804861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.294 [2024-10-14 14:42:46.804872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.294 qpair failed and we were unable to recover it. 00:29:06.294 [2024-10-14 14:42:46.805186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.294 [2024-10-14 14:42:46.805196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.294 qpair failed and we were unable to recover it. 00:29:06.294 [2024-10-14 14:42:46.805513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.294 [2024-10-14 14:42:46.805523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.294 qpair failed and we were unable to recover it. 00:29:06.294 [2024-10-14 14:42:46.805802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.294 [2024-10-14 14:42:46.805812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.294 qpair failed and we were unable to recover it. 00:29:06.294 [2024-10-14 14:42:46.806108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.294 [2024-10-14 14:42:46.806120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.294 qpair failed and we were unable to recover it. 
00:29:06.294 [2024-10-14 14:42:46.806408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.294 [2024-10-14 14:42:46.806418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.294 qpair failed and we were unable to recover it. 00:29:06.294 [2024-10-14 14:42:46.806724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.294 [2024-10-14 14:42:46.806734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.294 qpair failed and we were unable to recover it. 00:29:06.294 [2024-10-14 14:42:46.806926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.294 [2024-10-14 14:42:46.806938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.294 qpair failed and we were unable to recover it. 00:29:06.294 [2024-10-14 14:42:46.807296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.294 [2024-10-14 14:42:46.807306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.294 qpair failed and we were unable to recover it. 00:29:06.294 [2024-10-14 14:42:46.807601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.294 [2024-10-14 14:42:46.807612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.294 qpair failed and we were unable to recover it. 
00:29:06.294 [2024-10-14 14:42:46.807891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.294 [2024-10-14 14:42:46.807900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.294 qpair failed and we were unable to recover it. 00:29:06.294 [2024-10-14 14:42:46.808215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.294 [2024-10-14 14:42:46.808225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.294 qpair failed and we were unable to recover it. 00:29:06.294 [2024-10-14 14:42:46.808537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.294 [2024-10-14 14:42:46.808547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.294 qpair failed and we were unable to recover it. 00:29:06.294 [2024-10-14 14:42:46.808842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.294 [2024-10-14 14:42:46.808852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.294 qpair failed and we were unable to recover it. 00:29:06.294 [2024-10-14 14:42:46.809126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.294 [2024-10-14 14:42:46.809137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.294 qpair failed and we were unable to recover it. 
00:29:06.294 [2024-10-14 14:42:46.809416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.294 [2024-10-14 14:42:46.809426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.294 qpair failed and we were unable to recover it. 00:29:06.294 [2024-10-14 14:42:46.809615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.294 [2024-10-14 14:42:46.809625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.294 qpair failed and we were unable to recover it. 00:29:06.294 [2024-10-14 14:42:46.809995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.294 [2024-10-14 14:42:46.810006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.294 qpair failed and we were unable to recover it. 00:29:06.294 [2024-10-14 14:42:46.810299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.294 [2024-10-14 14:42:46.810310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.294 qpair failed and we were unable to recover it. 00:29:06.294 [2024-10-14 14:42:46.810595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.294 [2024-10-14 14:42:46.810605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.294 qpair failed and we were unable to recover it. 
00:29:06.294 [2024-10-14 14:42:46.810909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.294 [2024-10-14 14:42:46.810919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.294 qpair failed and we were unable to recover it. 00:29:06.294 [2024-10-14 14:42:46.811233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.294 [2024-10-14 14:42:46.811243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.294 qpair failed and we were unable to recover it. 00:29:06.294 [2024-10-14 14:42:46.811537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.294 [2024-10-14 14:42:46.811547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.294 qpair failed and we were unable to recover it. 00:29:06.294 [2024-10-14 14:42:46.811719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.294 [2024-10-14 14:42:46.811730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.294 qpair failed and we were unable to recover it. 00:29:06.294 [2024-10-14 14:42:46.812011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.294 [2024-10-14 14:42:46.812020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.294 qpair failed and we were unable to recover it. 
00:29:06.294 [2024-10-14 14:42:46.812316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.294 [2024-10-14 14:42:46.812326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.294 qpair failed and we were unable to recover it. 00:29:06.294 [2024-10-14 14:42:46.812636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.294 [2024-10-14 14:42:46.812646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.294 qpair failed and we were unable to recover it. 00:29:06.294 [2024-10-14 14:42:46.812965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.294 [2024-10-14 14:42:46.812975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.294 qpair failed and we were unable to recover it. 00:29:06.294 [2024-10-14 14:42:46.813264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.294 [2024-10-14 14:42:46.813275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.294 qpair failed and we were unable to recover it. 00:29:06.294 [2024-10-14 14:42:46.813581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.294 [2024-10-14 14:42:46.813591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.294 qpair failed and we were unable to recover it. 
00:29:06.294 [2024-10-14 14:42:46.813996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.294 [2024-10-14 14:42:46.814006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.294 qpair failed and we were unable to recover it. 00:29:06.294 [2024-10-14 14:42:46.814290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.294 [2024-10-14 14:42:46.814302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.294 qpair failed and we were unable to recover it. 00:29:06.294 [2024-10-14 14:42:46.814698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.294 [2024-10-14 14:42:46.814709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.294 qpair failed and we were unable to recover it. 00:29:06.294 [2024-10-14 14:42:46.815009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.294 [2024-10-14 14:42:46.815020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.294 qpair failed and we were unable to recover it. 00:29:06.295 [2024-10-14 14:42:46.815344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.295 [2024-10-14 14:42:46.815354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.295 qpair failed and we were unable to recover it. 
00:29:06.295 [2024-10-14 14:42:46.815655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.295 [2024-10-14 14:42:46.815665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.295 qpair failed and we were unable to recover it. 00:29:06.295 [2024-10-14 14:42:46.816057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.295 [2024-10-14 14:42:46.816072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.295 qpair failed and we were unable to recover it. 00:29:06.295 [2024-10-14 14:42:46.816367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.295 [2024-10-14 14:42:46.816376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.295 qpair failed and we were unable to recover it. 00:29:06.295 [2024-10-14 14:42:46.816692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.295 [2024-10-14 14:42:46.816702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.295 qpair failed and we were unable to recover it. 00:29:06.295 [2024-10-14 14:42:46.816907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.295 [2024-10-14 14:42:46.816917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.295 qpair failed and we were unable to recover it. 
00:29:06.295 [2024-10-14 14:42:46.817232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.295 [2024-10-14 14:42:46.817242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.295 qpair failed and we were unable to recover it. 00:29:06.295 [2024-10-14 14:42:46.817532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.295 [2024-10-14 14:42:46.817542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.295 qpair failed and we were unable to recover it. 00:29:06.295 [2024-10-14 14:42:46.817847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.295 [2024-10-14 14:42:46.817857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.295 qpair failed and we were unable to recover it. 00:29:06.295 [2024-10-14 14:42:46.818152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.295 [2024-10-14 14:42:46.818163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.295 qpair failed and we were unable to recover it. 00:29:06.295 [2024-10-14 14:42:46.818451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.295 [2024-10-14 14:42:46.818461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.295 qpair failed and we were unable to recover it. 
00:29:06.295 [2024-10-14 14:42:46.818666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.295 [2024-10-14 14:42:46.818676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.295 qpair failed and we were unable to recover it. 00:29:06.295 [2024-10-14 14:42:46.819089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.295 [2024-10-14 14:42:46.819099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.295 qpair failed and we were unable to recover it. 00:29:06.295 [2024-10-14 14:42:46.819315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.295 [2024-10-14 14:42:46.819325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.295 qpair failed and we were unable to recover it. 00:29:06.295 [2024-10-14 14:42:46.819671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.295 [2024-10-14 14:42:46.819681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.295 qpair failed and we were unable to recover it. 00:29:06.295 [2024-10-14 14:42:46.819883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.295 [2024-10-14 14:42:46.819893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.295 qpair failed and we were unable to recover it. 
00:29:06.295 [2024-10-14 14:42:46.820085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.295 [2024-10-14 14:42:46.820095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.295 qpair failed and we were unable to recover it. 00:29:06.295 [2024-10-14 14:42:46.820410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.295 [2024-10-14 14:42:46.820420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.295 qpair failed and we were unable to recover it. 00:29:06.295 [2024-10-14 14:42:46.820705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.295 [2024-10-14 14:42:46.820715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.295 qpair failed and we were unable to recover it. 00:29:06.295 [2024-10-14 14:42:46.820921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.295 [2024-10-14 14:42:46.820932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.295 qpair failed and we were unable to recover it. 00:29:06.295 [2024-10-14 14:42:46.821160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.295 [2024-10-14 14:42:46.821170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.295 qpair failed and we were unable to recover it. 
00:29:06.295 [2024-10-14 14:42:46.821351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.295 [2024-10-14 14:42:46.821363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.295 qpair failed and we were unable to recover it. 00:29:06.295 [2024-10-14 14:42:46.821628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.295 [2024-10-14 14:42:46.821637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.295 qpair failed and we were unable to recover it. 00:29:06.295 [2024-10-14 14:42:46.821821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.295 [2024-10-14 14:42:46.821831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.295 qpair failed and we were unable to recover it. 00:29:06.295 [2024-10-14 14:42:46.822135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.295 [2024-10-14 14:42:46.822146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.295 qpair failed and we were unable to recover it. 00:29:06.295 [2024-10-14 14:42:46.822435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.295 [2024-10-14 14:42:46.822445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.295 qpair failed and we were unable to recover it. 
00:29:06.295 [2024-10-14 14:42:46.822736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.295 [2024-10-14 14:42:46.822746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.295 qpair failed and we were unable to recover it. 00:29:06.295 [2024-10-14 14:42:46.823159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.295 [2024-10-14 14:42:46.823171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.295 qpair failed and we were unable to recover it. 00:29:06.295 [2024-10-14 14:42:46.823388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.295 [2024-10-14 14:42:46.823398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.295 qpair failed and we were unable to recover it. 00:29:06.295 [2024-10-14 14:42:46.823687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.295 [2024-10-14 14:42:46.823696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.295 qpair failed and we were unable to recover it. 00:29:06.295 [2024-10-14 14:42:46.824060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.295 [2024-10-14 14:42:46.824074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.295 qpair failed and we were unable to recover it. 
00:29:06.295 [2024-10-14 14:42:46.824265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.295 [2024-10-14 14:42:46.824275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.295 qpair failed and we were unable to recover it. 00:29:06.295 [2024-10-14 14:42:46.824482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.295 [2024-10-14 14:42:46.824494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.295 qpair failed and we were unable to recover it. 00:29:06.295 [2024-10-14 14:42:46.824830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.295 [2024-10-14 14:42:46.824841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.295 qpair failed and we were unable to recover it. 00:29:06.295 [2024-10-14 14:42:46.825149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.295 [2024-10-14 14:42:46.825159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.295 qpair failed and we were unable to recover it. 00:29:06.295 [2024-10-14 14:42:46.825492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.295 [2024-10-14 14:42:46.825502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.295 qpair failed and we were unable to recover it. 
00:29:06.295 [2024-10-14 14:42:46.825787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.295 [2024-10-14 14:42:46.825797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.295 qpair failed and we were unable to recover it. 00:29:06.295 [2024-10-14 14:42:46.826104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.295 [2024-10-14 14:42:46.826115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.295 qpair failed and we were unable to recover it. 00:29:06.295 [2024-10-14 14:42:46.826429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.295 [2024-10-14 14:42:46.826439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.295 qpair failed and we were unable to recover it. 00:29:06.295 [2024-10-14 14:42:46.826722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.295 [2024-10-14 14:42:46.826733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.295 qpair failed and we were unable to recover it. 00:29:06.295 [2024-10-14 14:42:46.827042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.295 [2024-10-14 14:42:46.827053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.296 qpair failed and we were unable to recover it. 
00:29:06.296 [2024-10-14 14:42:46.827272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.296 [2024-10-14 14:42:46.827283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.296 qpair failed and we were unable to recover it. 00:29:06.296 [2024-10-14 14:42:46.827651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.296 [2024-10-14 14:42:46.827662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.296 qpair failed and we were unable to recover it. 00:29:06.296 [2024-10-14 14:42:46.827872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.296 [2024-10-14 14:42:46.827883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.296 qpair failed and we were unable to recover it. 00:29:06.296 [2024-10-14 14:42:46.828203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.296 [2024-10-14 14:42:46.828214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.296 qpair failed and we were unable to recover it. 00:29:06.296 [2024-10-14 14:42:46.828496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.296 [2024-10-14 14:42:46.828506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.296 qpair failed and we were unable to recover it. 
00:29:06.298 [2024-10-14 14:42:46.861945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.298 [2024-10-14 14:42:46.861956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.298 qpair failed and we were unable to recover it. 00:29:06.298 [2024-10-14 14:42:46.862230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.298 [2024-10-14 14:42:46.862240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.298 qpair failed and we were unable to recover it. 00:29:06.298 [2024-10-14 14:42:46.862587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.299 [2024-10-14 14:42:46.862598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.299 qpair failed and we were unable to recover it. 00:29:06.299 [2024-10-14 14:42:46.862802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.299 [2024-10-14 14:42:46.862812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.299 qpair failed and we were unable to recover it. 00:29:06.299 [2024-10-14 14:42:46.863123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.299 [2024-10-14 14:42:46.863133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.299 qpair failed and we were unable to recover it. 
00:29:06.299 [2024-10-14 14:42:46.863411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.299 [2024-10-14 14:42:46.863421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.299 qpair failed and we were unable to recover it. 00:29:06.299 [2024-10-14 14:42:46.863833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.299 [2024-10-14 14:42:46.863842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.299 qpair failed and we were unable to recover it. 00:29:06.299 [2024-10-14 14:42:46.864125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.299 [2024-10-14 14:42:46.864134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.299 qpair failed and we were unable to recover it. 00:29:06.299 [2024-10-14 14:42:46.864434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.299 [2024-10-14 14:42:46.864443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.299 qpair failed and we were unable to recover it. 00:29:06.299 [2024-10-14 14:42:46.864829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.299 [2024-10-14 14:42:46.864840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.299 qpair failed and we were unable to recover it. 
00:29:06.299 [2024-10-14 14:42:46.865020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.299 [2024-10-14 14:42:46.865030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.299 qpair failed and we were unable to recover it. 00:29:06.299 [2024-10-14 14:42:46.865352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.299 [2024-10-14 14:42:46.865362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.299 qpair failed and we were unable to recover it. 00:29:06.299 [2024-10-14 14:42:46.865704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.299 [2024-10-14 14:42:46.865714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.299 qpair failed and we were unable to recover it. 00:29:06.299 [2024-10-14 14:42:46.866020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.299 [2024-10-14 14:42:46.866030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.299 qpair failed and we were unable to recover it. 00:29:06.299 [2024-10-14 14:42:46.866332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.299 [2024-10-14 14:42:46.866343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.299 qpair failed and we were unable to recover it. 
00:29:06.299 [2024-10-14 14:42:46.866643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.299 [2024-10-14 14:42:46.866652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.299 qpair failed and we were unable to recover it. 00:29:06.299 [2024-10-14 14:42:46.866961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.299 [2024-10-14 14:42:46.866971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.299 qpair failed and we were unable to recover it. 00:29:06.299 [2024-10-14 14:42:46.867276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.299 [2024-10-14 14:42:46.867287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.299 qpair failed and we were unable to recover it. 00:29:06.299 [2024-10-14 14:42:46.867526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.299 [2024-10-14 14:42:46.867540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.299 qpair failed and we were unable to recover it. 00:29:06.299 [2024-10-14 14:42:46.867861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.299 [2024-10-14 14:42:46.867872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.299 qpair failed and we were unable to recover it. 
00:29:06.299 [2024-10-14 14:42:46.868186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.299 [2024-10-14 14:42:46.868197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.299 qpair failed and we were unable to recover it. 00:29:06.299 [2024-10-14 14:42:46.868509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.299 [2024-10-14 14:42:46.868520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.299 qpair failed and we were unable to recover it. 00:29:06.299 [2024-10-14 14:42:46.868820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.299 [2024-10-14 14:42:46.868831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.299 qpair failed and we were unable to recover it. 00:29:06.299 [2024-10-14 14:42:46.869020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.299 [2024-10-14 14:42:46.869031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.299 qpair failed and we were unable to recover it. 00:29:06.299 [2024-10-14 14:42:46.869320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.299 [2024-10-14 14:42:46.869330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.299 qpair failed and we were unable to recover it. 
00:29:06.299 [2024-10-14 14:42:46.869631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.299 [2024-10-14 14:42:46.869640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.299 qpair failed and we were unable to recover it. 00:29:06.299 [2024-10-14 14:42:46.869950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.299 [2024-10-14 14:42:46.869960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.299 qpair failed and we were unable to recover it. 00:29:06.299 [2024-10-14 14:42:46.870239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.299 [2024-10-14 14:42:46.870249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.299 qpair failed and we were unable to recover it. 00:29:06.299 [2024-10-14 14:42:46.870557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.299 [2024-10-14 14:42:46.870567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.299 qpair failed and we were unable to recover it. 00:29:06.299 [2024-10-14 14:42:46.870909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.299 [2024-10-14 14:42:46.870919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.299 qpair failed and we were unable to recover it. 
00:29:06.299 [2024-10-14 14:42:46.871218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.299 [2024-10-14 14:42:46.871228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.299 qpair failed and we were unable to recover it. 00:29:06.299 [2024-10-14 14:42:46.871536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.299 [2024-10-14 14:42:46.871546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.299 qpair failed and we were unable to recover it. 00:29:06.299 [2024-10-14 14:42:46.871823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.299 [2024-10-14 14:42:46.871834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.299 qpair failed and we were unable to recover it. 00:29:06.299 [2024-10-14 14:42:46.872142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.299 [2024-10-14 14:42:46.872153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.299 qpair failed and we were unable to recover it. 00:29:06.299 [2024-10-14 14:42:46.872461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.299 [2024-10-14 14:42:46.872472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.299 qpair failed and we were unable to recover it. 
00:29:06.299 [2024-10-14 14:42:46.872779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.299 [2024-10-14 14:42:46.872789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.299 qpair failed and we were unable to recover it. 00:29:06.299 [2024-10-14 14:42:46.873076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.299 [2024-10-14 14:42:46.873087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.299 qpair failed and we were unable to recover it. 00:29:06.299 [2024-10-14 14:42:46.873393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.299 [2024-10-14 14:42:46.873406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.299 qpair failed and we were unable to recover it. 00:29:06.299 [2024-10-14 14:42:46.873689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.299 [2024-10-14 14:42:46.873699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.299 qpair failed and we were unable to recover it. 00:29:06.299 [2024-10-14 14:42:46.873886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.299 [2024-10-14 14:42:46.873896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.299 qpair failed and we were unable to recover it. 
00:29:06.299 [2024-10-14 14:42:46.874220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.299 [2024-10-14 14:42:46.874231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.299 qpair failed and we were unable to recover it. 00:29:06.299 [2024-10-14 14:42:46.874522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.299 [2024-10-14 14:42:46.874532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.299 qpair failed and we were unable to recover it. 00:29:06.299 [2024-10-14 14:42:46.874805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.300 [2024-10-14 14:42:46.874815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.300 qpair failed and we were unable to recover it. 00:29:06.300 [2024-10-14 14:42:46.875096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.300 [2024-10-14 14:42:46.875107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.300 qpair failed and we were unable to recover it. 00:29:06.300 [2024-10-14 14:42:46.875408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.300 [2024-10-14 14:42:46.875418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.300 qpair failed and we were unable to recover it. 
00:29:06.300 [2024-10-14 14:42:46.875708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.300 [2024-10-14 14:42:46.875722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.300 qpair failed and we were unable to recover it. 00:29:06.300 [2024-10-14 14:42:46.876025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.300 [2024-10-14 14:42:46.876036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.300 qpair failed and we were unable to recover it. 00:29:06.300 [2024-10-14 14:42:46.876346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.300 [2024-10-14 14:42:46.876357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.300 qpair failed and we were unable to recover it. 00:29:06.300 [2024-10-14 14:42:46.876640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.300 [2024-10-14 14:42:46.876650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.300 qpair failed and we were unable to recover it. 00:29:06.300 [2024-10-14 14:42:46.876957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.300 [2024-10-14 14:42:46.876967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.300 qpair failed and we were unable to recover it. 
00:29:06.300 [2024-10-14 14:42:46.877253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.300 [2024-10-14 14:42:46.877263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.300 qpair failed and we were unable to recover it. 00:29:06.300 [2024-10-14 14:42:46.877620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.300 [2024-10-14 14:42:46.877630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.300 qpair failed and we were unable to recover it. 00:29:06.300 [2024-10-14 14:42:46.877933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.300 [2024-10-14 14:42:46.877944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.300 qpair failed and we were unable to recover it. 00:29:06.300 [2024-10-14 14:42:46.878235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.300 [2024-10-14 14:42:46.878246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.300 qpair failed and we were unable to recover it. 00:29:06.300 [2024-10-14 14:42:46.878550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.300 [2024-10-14 14:42:46.878561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.300 qpair failed and we were unable to recover it. 
00:29:06.300 [2024-10-14 14:42:46.878863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.300 [2024-10-14 14:42:46.878873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.300 qpair failed and we were unable to recover it. 00:29:06.300 [2024-10-14 14:42:46.879190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.300 [2024-10-14 14:42:46.879201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.300 qpair failed and we were unable to recover it. 00:29:06.300 [2024-10-14 14:42:46.879487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.300 [2024-10-14 14:42:46.879498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.300 qpair failed and we were unable to recover it. 00:29:06.300 [2024-10-14 14:42:46.879794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.300 [2024-10-14 14:42:46.879804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.300 qpair failed and we were unable to recover it. 00:29:06.300 [2024-10-14 14:42:46.879990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.300 [2024-10-14 14:42:46.880002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.300 qpair failed and we were unable to recover it. 
00:29:06.300 [2024-10-14 14:42:46.880325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.300 [2024-10-14 14:42:46.880336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.300 qpair failed and we were unable to recover it. 00:29:06.300 [2024-10-14 14:42:46.880639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.300 [2024-10-14 14:42:46.880650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.300 qpair failed and we were unable to recover it. 00:29:06.300 [2024-10-14 14:42:46.880971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.300 [2024-10-14 14:42:46.880981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.300 qpair failed and we were unable to recover it. 00:29:06.300 [2024-10-14 14:42:46.881286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.300 [2024-10-14 14:42:46.881297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.300 qpair failed and we were unable to recover it. 00:29:06.300 [2024-10-14 14:42:46.881597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.300 [2024-10-14 14:42:46.881607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.300 qpair failed and we were unable to recover it. 
00:29:06.300 [2024-10-14 14:42:46.881906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.300 [2024-10-14 14:42:46.881916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.300 qpair failed and we were unable to recover it. 00:29:06.300 [2024-10-14 14:42:46.882207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.300 [2024-10-14 14:42:46.882217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.300 qpair failed and we were unable to recover it. 00:29:06.300 [2024-10-14 14:42:46.882494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.300 [2024-10-14 14:42:46.882507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.300 qpair failed and we were unable to recover it. 00:29:06.300 [2024-10-14 14:42:46.882812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.300 [2024-10-14 14:42:46.882822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.300 qpair failed and we were unable to recover it. 00:29:06.300 [2024-10-14 14:42:46.883116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.300 [2024-10-14 14:42:46.883126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.300 qpair failed and we were unable to recover it. 
00:29:06.300 [2024-10-14 14:42:46.883422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.300 [2024-10-14 14:42:46.883432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.300 qpair failed and we were unable to recover it. 00:29:06.300 [2024-10-14 14:42:46.883740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.300 [2024-10-14 14:42:46.883750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.300 qpair failed and we were unable to recover it. 00:29:06.300 [2024-10-14 14:42:46.883936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.300 [2024-10-14 14:42:46.883946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.300 qpair failed and we were unable to recover it. 00:29:06.300 [2024-10-14 14:42:46.884242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.300 [2024-10-14 14:42:46.884252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.300 qpair failed and we were unable to recover it. 00:29:06.300 [2024-10-14 14:42:46.884555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.300 [2024-10-14 14:42:46.884565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.300 qpair failed and we were unable to recover it. 
00:29:06.300 [2024-10-14 14:42:46.884880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.300 [2024-10-14 14:42:46.884890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.300 qpair failed and we were unable to recover it.
00:29:06.303 [the three lines above repeat verbatim for every subsequent connect attempt, always with errno = 111, tqpair=0x8de550, addr=10.0.0.2, port=4420, over 100 times from 14:42:46.884880 through 14:42:46.918958]
00:29:06.303 [2024-10-14 14:42:46.919196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.303 [2024-10-14 14:42:46.919206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.303 qpair failed and we were unable to recover it. 00:29:06.303 [2024-10-14 14:42:46.919256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.303 [2024-10-14 14:42:46.919266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.303 qpair failed and we were unable to recover it. 00:29:06.303 [2024-10-14 14:42:46.919548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.303 [2024-10-14 14:42:46.919558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.303 qpair failed and we were unable to recover it. 00:29:06.303 [2024-10-14 14:42:46.919836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.303 [2024-10-14 14:42:46.919845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.303 qpair failed and we were unable to recover it. 00:29:06.303 [2024-10-14 14:42:46.920155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.303 [2024-10-14 14:42:46.920166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.303 qpair failed and we were unable to recover it. 
00:29:06.303 [2024-10-14 14:42:46.920477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.303 [2024-10-14 14:42:46.920487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.303 qpair failed and we were unable to recover it. 00:29:06.303 [2024-10-14 14:42:46.920768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.303 [2024-10-14 14:42:46.920778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.303 qpair failed and we were unable to recover it. 00:29:06.303 [2024-10-14 14:42:46.920972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.303 [2024-10-14 14:42:46.920982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.303 qpair failed and we were unable to recover it. 00:29:06.303 [2024-10-14 14:42:46.921335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.303 [2024-10-14 14:42:46.921345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.303 qpair failed and we were unable to recover it. 00:29:06.303 [2024-10-14 14:42:46.921743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.303 [2024-10-14 14:42:46.921752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.303 qpair failed and we were unable to recover it. 
00:29:06.304 [2024-10-14 14:42:46.922037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.304 [2024-10-14 14:42:46.922046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.304 qpair failed and we were unable to recover it. 00:29:06.304 [2024-10-14 14:42:46.922422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.304 [2024-10-14 14:42:46.922432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.304 qpair failed and we were unable to recover it. 00:29:06.304 [2024-10-14 14:42:46.922734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.304 [2024-10-14 14:42:46.922744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.304 qpair failed and we were unable to recover it. 00:29:06.304 [2024-10-14 14:42:46.923075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.304 [2024-10-14 14:42:46.923086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.304 qpair failed and we were unable to recover it. 00:29:06.304 [2024-10-14 14:42:46.923417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.304 [2024-10-14 14:42:46.923426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.304 qpair failed and we were unable to recover it. 
00:29:06.304 [2024-10-14 14:42:46.923709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.304 [2024-10-14 14:42:46.923719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.304 qpair failed and we were unable to recover it. 00:29:06.304 [2024-10-14 14:42:46.924038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.304 [2024-10-14 14:42:46.924048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.304 qpair failed and we were unable to recover it. 00:29:06.304 [2024-10-14 14:42:46.924332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.304 [2024-10-14 14:42:46.924342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.304 qpair failed and we were unable to recover it. 00:29:06.304 [2024-10-14 14:42:46.924621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.304 [2024-10-14 14:42:46.924632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.304 qpair failed and we were unable to recover it. 00:29:06.304 [2024-10-14 14:42:46.924940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.304 [2024-10-14 14:42:46.924950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.304 qpair failed and we were unable to recover it. 
00:29:06.304 [2024-10-14 14:42:46.925278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.304 [2024-10-14 14:42:46.925288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.304 qpair failed and we were unable to recover it. 00:29:06.304 [2024-10-14 14:42:46.925571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.304 [2024-10-14 14:42:46.925581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.304 qpair failed and we were unable to recover it. 00:29:06.304 [2024-10-14 14:42:46.925899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.304 [2024-10-14 14:42:46.925909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.304 qpair failed and we were unable to recover it. 00:29:06.304 [2024-10-14 14:42:46.926235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.304 [2024-10-14 14:42:46.926245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.304 qpair failed and we were unable to recover it. 00:29:06.304 [2024-10-14 14:42:46.926596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.304 [2024-10-14 14:42:46.926607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.304 qpair failed and we were unable to recover it. 
00:29:06.304 [2024-10-14 14:42:46.926879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.304 [2024-10-14 14:42:46.926888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.304 qpair failed and we were unable to recover it. 00:29:06.304 [2024-10-14 14:42:46.927193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.304 [2024-10-14 14:42:46.927203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.304 qpair failed and we were unable to recover it. 00:29:06.304 [2024-10-14 14:42:46.927506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.304 [2024-10-14 14:42:46.927516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.304 qpair failed and we were unable to recover it. 00:29:06.304 [2024-10-14 14:42:46.927828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.304 [2024-10-14 14:42:46.927838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.304 qpair failed and we were unable to recover it. 00:29:06.304 [2024-10-14 14:42:46.928127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.304 [2024-10-14 14:42:46.928137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.304 qpair failed and we were unable to recover it. 
00:29:06.304 [2024-10-14 14:42:46.928456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.304 [2024-10-14 14:42:46.928466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.304 qpair failed and we were unable to recover it. 00:29:06.304 [2024-10-14 14:42:46.928647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.304 [2024-10-14 14:42:46.928657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.304 qpair failed and we were unable to recover it. 00:29:06.304 [2024-10-14 14:42:46.929019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.304 [2024-10-14 14:42:46.929032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.304 qpair failed and we were unable to recover it. 00:29:06.304 [2024-10-14 14:42:46.929331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.304 [2024-10-14 14:42:46.929341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.304 qpair failed and we were unable to recover it. 00:29:06.304 [2024-10-14 14:42:46.929529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.304 [2024-10-14 14:42:46.929546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.304 qpair failed and we were unable to recover it. 
00:29:06.304 [2024-10-14 14:42:46.929878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.304 [2024-10-14 14:42:46.929888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.304 qpair failed and we were unable to recover it. 00:29:06.304 [2024-10-14 14:42:46.930163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.304 [2024-10-14 14:42:46.930173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.304 qpair failed and we were unable to recover it. 00:29:06.304 [2024-10-14 14:42:46.930477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.304 [2024-10-14 14:42:46.930486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.304 qpair failed and we were unable to recover it. 00:29:06.304 [2024-10-14 14:42:46.930803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.304 [2024-10-14 14:42:46.930813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.304 qpair failed and we were unable to recover it. 00:29:06.304 [2024-10-14 14:42:46.931134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.304 [2024-10-14 14:42:46.931144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.304 qpair failed and we were unable to recover it. 
00:29:06.304 [2024-10-14 14:42:46.931302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.304 [2024-10-14 14:42:46.931313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.304 qpair failed and we were unable to recover it. 00:29:06.304 [2024-10-14 14:42:46.931620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.304 [2024-10-14 14:42:46.931630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.304 qpair failed and we were unable to recover it. 00:29:06.304 [2024-10-14 14:42:46.931914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.304 [2024-10-14 14:42:46.931923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.304 qpair failed and we were unable to recover it. 00:29:06.304 [2024-10-14 14:42:46.932237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.304 [2024-10-14 14:42:46.932247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.304 qpair failed and we were unable to recover it. 00:29:06.304 [2024-10-14 14:42:46.932529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.304 [2024-10-14 14:42:46.932539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.304 qpair failed and we were unable to recover it. 
00:29:06.304 [2024-10-14 14:42:46.932866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.304 [2024-10-14 14:42:46.932876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.304 qpair failed and we were unable to recover it. 00:29:06.304 [2024-10-14 14:42:46.933076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.304 [2024-10-14 14:42:46.933087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.304 qpair failed and we were unable to recover it. 00:29:06.304 [2024-10-14 14:42:46.933405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.304 [2024-10-14 14:42:46.933415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.304 qpair failed and we were unable to recover it. 00:29:06.304 [2024-10-14 14:42:46.933701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.304 [2024-10-14 14:42:46.933711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.304 qpair failed and we were unable to recover it. 00:29:06.304 [2024-10-14 14:42:46.934004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.304 [2024-10-14 14:42:46.934014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.304 qpair failed and we were unable to recover it. 
00:29:06.304 [2024-10-14 14:42:46.934341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.304 [2024-10-14 14:42:46.934351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.305 qpair failed and we were unable to recover it. 00:29:06.305 [2024-10-14 14:42:46.934646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.305 [2024-10-14 14:42:46.934656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.305 qpair failed and we were unable to recover it. 00:29:06.305 [2024-10-14 14:42:46.934976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.305 [2024-10-14 14:42:46.934986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.305 qpair failed and we were unable to recover it. 00:29:06.305 [2024-10-14 14:42:46.935158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.305 [2024-10-14 14:42:46.935169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.305 qpair failed and we were unable to recover it. 00:29:06.305 [2024-10-14 14:42:46.935430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.305 [2024-10-14 14:42:46.935439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.305 qpair failed and we were unable to recover it. 
00:29:06.305 [2024-10-14 14:42:46.935744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.305 [2024-10-14 14:42:46.935754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.305 qpair failed and we were unable to recover it. 00:29:06.305 [2024-10-14 14:42:46.936081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.305 [2024-10-14 14:42:46.936092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.305 qpair failed and we were unable to recover it. 00:29:06.305 [2024-10-14 14:42:46.936373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.305 [2024-10-14 14:42:46.936383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.305 qpair failed and we were unable to recover it. 00:29:06.305 [2024-10-14 14:42:46.936575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.305 [2024-10-14 14:42:46.936585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.305 qpair failed and we were unable to recover it. 00:29:06.305 [2024-10-14 14:42:46.936886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.305 [2024-10-14 14:42:46.936895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.305 qpair failed and we were unable to recover it. 
00:29:06.305 [2024-10-14 14:42:46.937185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.305 [2024-10-14 14:42:46.937195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.305 qpair failed and we were unable to recover it. 00:29:06.305 [2024-10-14 14:42:46.937473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.305 [2024-10-14 14:42:46.937483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.305 qpair failed and we were unable to recover it. 00:29:06.305 [2024-10-14 14:42:46.937802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.305 [2024-10-14 14:42:46.937812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.305 qpair failed and we were unable to recover it. 00:29:06.305 [2024-10-14 14:42:46.938068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.305 [2024-10-14 14:42:46.938078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.305 qpair failed and we were unable to recover it. 00:29:06.305 [2024-10-14 14:42:46.938468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.305 [2024-10-14 14:42:46.938478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.305 qpair failed and we were unable to recover it. 
00:29:06.305 [2024-10-14 14:42:46.938758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.305 [2024-10-14 14:42:46.938768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.305 qpair failed and we were unable to recover it. 00:29:06.305 [2024-10-14 14:42:46.939047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.305 [2024-10-14 14:42:46.939058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.305 qpair failed and we were unable to recover it. 00:29:06.305 [2024-10-14 14:42:46.939226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.305 [2024-10-14 14:42:46.939237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.305 qpair failed and we were unable to recover it. 00:29:06.305 [2024-10-14 14:42:46.939566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.305 [2024-10-14 14:42:46.939577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.305 qpair failed and we were unable to recover it. 00:29:06.305 [2024-10-14 14:42:46.939866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.305 [2024-10-14 14:42:46.939876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.305 qpair failed and we were unable to recover it. 
00:29:06.305 [2024-10-14 14:42:46.940178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.305 [2024-10-14 14:42:46.940188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.305 qpair failed and we were unable to recover it. 00:29:06.305 [2024-10-14 14:42:46.940521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.305 [2024-10-14 14:42:46.940531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.305 qpair failed and we were unable to recover it. 00:29:06.305 [2024-10-14 14:42:46.940813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.305 [2024-10-14 14:42:46.940823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.305 qpair failed and we were unable to recover it. 00:29:06.305 [2024-10-14 14:42:46.941138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.305 [2024-10-14 14:42:46.941148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.305 qpair failed and we were unable to recover it. 00:29:06.305 [2024-10-14 14:42:46.941422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.305 [2024-10-14 14:42:46.941431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.305 qpair failed and we were unable to recover it. 
00:29:06.305 [2024-10-14 14:42:46.941745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.305 [2024-10-14 14:42:46.941755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.305 qpair failed and we were unable to recover it. 00:29:06.305 [2024-10-14 14:42:46.942058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.305 [2024-10-14 14:42:46.942072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.305 qpair failed and we were unable to recover it. 00:29:06.305 [2024-10-14 14:42:46.942383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.305 [2024-10-14 14:42:46.942392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.305 qpair failed and we were unable to recover it. 00:29:06.305 [2024-10-14 14:42:46.942762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.305 [2024-10-14 14:42:46.942772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.305 qpair failed and we were unable to recover it. 00:29:06.305 [2024-10-14 14:42:46.943078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.305 [2024-10-14 14:42:46.943088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.305 qpair failed and we were unable to recover it. 
00:29:06.305 [2024-10-14 14:42:46.943390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.305 [2024-10-14 14:42:46.943400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.305 qpair failed and we were unable to recover it. 00:29:06.305 [2024-10-14 14:42:46.943686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.305 [2024-10-14 14:42:46.943695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.305 qpair failed and we were unable to recover it. 00:29:06.305 [2024-10-14 14:42:46.943998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.305 [2024-10-14 14:42:46.944008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.305 qpair failed and we were unable to recover it. 00:29:06.305 [2024-10-14 14:42:46.944317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.305 [2024-10-14 14:42:46.944328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.305 qpair failed and we were unable to recover it. 00:29:06.305 [2024-10-14 14:42:46.944617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.305 [2024-10-14 14:42:46.944628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.305 qpair failed and we were unable to recover it. 
00:29:06.305 [2024-10-14 14:42:46.944936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.305 [2024-10-14 14:42:46.944946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.305 qpair failed and we were unable to recover it. 00:29:06.305 [2024-10-14 14:42:46.945202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.305 [2024-10-14 14:42:46.945212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.305 qpair failed and we were unable to recover it. 00:29:06.305 [2024-10-14 14:42:46.945405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.305 [2024-10-14 14:42:46.945415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.305 qpair failed and we were unable to recover it. 00:29:06.305 [2024-10-14 14:42:46.945711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.305 [2024-10-14 14:42:46.945721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.305 qpair failed and we were unable to recover it. 00:29:06.305 [2024-10-14 14:42:46.945926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.305 [2024-10-14 14:42:46.945936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.305 qpair failed and we were unable to recover it. 
00:29:06.305 [2024-10-14 14:42:46.946265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.305 [2024-10-14 14:42:46.946275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.305 qpair failed and we were unable to recover it. 00:29:06.305 [2024-10-14 14:42:46.946546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.306 [2024-10-14 14:42:46.946555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.306 qpair failed and we were unable to recover it. 00:29:06.306 [2024-10-14 14:42:46.946996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.306 [2024-10-14 14:42:46.947005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.306 qpair failed and we were unable to recover it. 00:29:06.306 [2024-10-14 14:42:46.947291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.306 [2024-10-14 14:42:46.947301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.306 qpair failed and we were unable to recover it. 00:29:06.306 [2024-10-14 14:42:46.947541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.306 [2024-10-14 14:42:46.947551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.306 qpair failed and we were unable to recover it. 
00:29:06.306 [2024-10-14 14:42:46.947779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.306 [2024-10-14 14:42:46.947789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.306 qpair failed and we were unable to recover it. 00:29:06.306 [2024-10-14 14:42:46.947967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.306 [2024-10-14 14:42:46.947977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.306 qpair failed and we were unable to recover it. 00:29:06.306 [2024-10-14 14:42:46.948290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.306 [2024-10-14 14:42:46.948301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.306 qpair failed and we were unable to recover it. 00:29:06.306 [2024-10-14 14:42:46.948594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.306 [2024-10-14 14:42:46.948605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.306 qpair failed and we were unable to recover it. 00:29:06.306 [2024-10-14 14:42:46.948898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.306 [2024-10-14 14:42:46.948909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.306 qpair failed and we were unable to recover it. 
00:29:06.306 [2024-10-14 14:42:46.949238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.306 [2024-10-14 14:42:46.949251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.306 qpair failed and we were unable to recover it. 00:29:06.306 [2024-10-14 14:42:46.949558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.306 [2024-10-14 14:42:46.949567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.306 qpair failed and we were unable to recover it. 00:29:06.306 [2024-10-14 14:42:46.949864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.306 [2024-10-14 14:42:46.949873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.306 qpair failed and we were unable to recover it. 00:29:06.306 [2024-10-14 14:42:46.950178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.306 [2024-10-14 14:42:46.950188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.306 qpair failed and we were unable to recover it. 00:29:06.306 [2024-10-14 14:42:46.950421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.306 [2024-10-14 14:42:46.950431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.306 qpair failed and we were unable to recover it. 
00:29:06.306 [2024-10-14 14:42:46.950634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.306 [2024-10-14 14:42:46.950643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.306 qpair failed and we were unable to recover it. 00:29:06.306 [2024-10-14 14:42:46.950860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.306 [2024-10-14 14:42:46.950870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.306 qpair failed and we were unable to recover it. 00:29:06.306 [2024-10-14 14:42:46.951234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.306 [2024-10-14 14:42:46.951244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.306 qpair failed and we were unable to recover it. 00:29:06.306 [2024-10-14 14:42:46.951574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.306 [2024-10-14 14:42:46.951584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.306 qpair failed and we were unable to recover it. 00:29:06.306 [2024-10-14 14:42:46.951773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.306 [2024-10-14 14:42:46.951782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.306 qpair failed and we were unable to recover it. 
00:29:06.306 [2024-10-14 14:42:46.952088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.306 [2024-10-14 14:42:46.952098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.306 qpair failed and we were unable to recover it. 00:29:06.306 [2024-10-14 14:42:46.952479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.306 [2024-10-14 14:42:46.952488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.306 qpair failed and we were unable to recover it. 00:29:06.306 [2024-10-14 14:42:46.952793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.306 [2024-10-14 14:42:46.952803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.306 qpair failed and we were unable to recover it. 00:29:06.306 [2024-10-14 14:42:46.953059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.306 [2024-10-14 14:42:46.953077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.306 qpair failed and we were unable to recover it. 00:29:06.306 [2024-10-14 14:42:46.953362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.306 [2024-10-14 14:42:46.953372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.306 qpair failed and we were unable to recover it. 
00:29:06.306 [2024-10-14 14:42:46.953679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.306 [2024-10-14 14:42:46.953688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.306 qpair failed and we were unable to recover it. 00:29:06.306 [2024-10-14 14:42:46.953999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.306 [2024-10-14 14:42:46.954010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.306 qpair failed and we were unable to recover it. 00:29:06.306 [2024-10-14 14:42:46.954340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.306 [2024-10-14 14:42:46.954350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.306 qpair failed and we were unable to recover it. 00:29:06.306 [2024-10-14 14:42:46.954656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.306 [2024-10-14 14:42:46.954667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.306 qpair failed and we were unable to recover it. 00:29:06.306 [2024-10-14 14:42:46.954974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.306 [2024-10-14 14:42:46.954983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.306 qpair failed and we were unable to recover it. 
00:29:06.306 [2024-10-14 14:42:46.955283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.306 [2024-10-14 14:42:46.955293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.306 qpair failed and we were unable to recover it. 00:29:06.306 [2024-10-14 14:42:46.955567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.306 [2024-10-14 14:42:46.955576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.306 qpair failed and we were unable to recover it. 00:29:06.306 [2024-10-14 14:42:46.955895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.306 [2024-10-14 14:42:46.955904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.306 qpair failed and we were unable to recover it. 00:29:06.306 [2024-10-14 14:42:46.956095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.306 [2024-10-14 14:42:46.956105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.306 qpair failed and we were unable to recover it. 00:29:06.306 [2024-10-14 14:42:46.956278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.306 [2024-10-14 14:42:46.956289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.306 qpair failed and we were unable to recover it. 
00:29:06.306 [2024-10-14 14:42:46.956600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.306 [2024-10-14 14:42:46.956609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.306 qpair failed and we were unable to recover it. 00:29:06.306 [2024-10-14 14:42:46.956894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.306 [2024-10-14 14:42:46.956903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.306 qpair failed and we were unable to recover it. 00:29:06.306 [2024-10-14 14:42:46.957215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.306 [2024-10-14 14:42:46.957226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.306 qpair failed and we were unable to recover it. 00:29:06.306 [2024-10-14 14:42:46.957530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.306 [2024-10-14 14:42:46.957541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.306 qpair failed and we were unable to recover it. 00:29:06.306 [2024-10-14 14:42:46.957821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.306 [2024-10-14 14:42:46.957831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.306 qpair failed and we were unable to recover it. 
00:29:06.306 [2024-10-14 14:42:46.958097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.306 [2024-10-14 14:42:46.958107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.306 qpair failed and we were unable to recover it. 00:29:06.306 [2024-10-14 14:42:46.958453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.306 [2024-10-14 14:42:46.958462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.306 qpair failed and we were unable to recover it. 00:29:06.306 [2024-10-14 14:42:46.958770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.307 [2024-10-14 14:42:46.958779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.307 qpair failed and we were unable to recover it. 00:29:06.307 [2024-10-14 14:42:46.959043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.307 [2024-10-14 14:42:46.959052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.307 qpair failed and we were unable to recover it. 00:29:06.307 [2024-10-14 14:42:46.959341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.307 [2024-10-14 14:42:46.959351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.307 qpair failed and we were unable to recover it. 
00:29:06.307 [2024-10-14 14:42:46.959653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.307 [2024-10-14 14:42:46.959663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.307 qpair failed and we were unable to recover it. 00:29:06.307 [2024-10-14 14:42:46.959821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.307 [2024-10-14 14:42:46.959832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.307 qpair failed and we were unable to recover it. 00:29:06.307 [2024-10-14 14:42:46.960127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.307 [2024-10-14 14:42:46.960138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.307 qpair failed and we were unable to recover it. 00:29:06.307 [2024-10-14 14:42:46.960463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.307 [2024-10-14 14:42:46.960474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.307 qpair failed and we were unable to recover it. 00:29:06.307 [2024-10-14 14:42:46.960653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.307 [2024-10-14 14:42:46.960663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.307 qpair failed and we were unable to recover it. 
00:29:06.307 [2024-10-14 14:42:46.960984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.307 [2024-10-14 14:42:46.960995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.307 qpair failed and we were unable to recover it. 00:29:06.307 [2024-10-14 14:42:46.961364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.307 [2024-10-14 14:42:46.961377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.307 qpair failed and we were unable to recover it. 00:29:06.307 [2024-10-14 14:42:46.961677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.307 [2024-10-14 14:42:46.961686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.307 qpair failed and we were unable to recover it. 00:29:06.307 [2024-10-14 14:42:46.961998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.307 [2024-10-14 14:42:46.962007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.307 qpair failed and we were unable to recover it. 00:29:06.307 [2024-10-14 14:42:46.962198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.307 [2024-10-14 14:42:46.962208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.307 qpair failed and we were unable to recover it. 
00:29:06.307 [2024-10-14 14:42:46.962567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.307 [2024-10-14 14:42:46.962577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.307 qpair failed and we were unable to recover it. 00:29:06.307 [2024-10-14 14:42:46.962862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.307 [2024-10-14 14:42:46.962872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.307 qpair failed and we were unable to recover it. 00:29:06.307 [2024-10-14 14:42:46.963156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.307 [2024-10-14 14:42:46.963167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.307 qpair failed and we were unable to recover it. 00:29:06.307 [2024-10-14 14:42:46.963477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.307 [2024-10-14 14:42:46.963487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.307 qpair failed and we were unable to recover it. 00:29:06.307 [2024-10-14 14:42:46.963767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.307 [2024-10-14 14:42:46.963777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.307 qpair failed and we were unable to recover it. 
00:29:06.307 [2024-10-14 14:42:46.964103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.307 [2024-10-14 14:42:46.964113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.307 qpair failed and we were unable to recover it. 00:29:06.307 [2024-10-14 14:42:46.964412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.307 [2024-10-14 14:42:46.964422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.307 qpair failed and we were unable to recover it. 00:29:06.307 [2024-10-14 14:42:46.964603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.307 [2024-10-14 14:42:46.964613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.307 qpair failed and we were unable to recover it. 00:29:06.307 [2024-10-14 14:42:46.964972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.307 [2024-10-14 14:42:46.964982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.307 qpair failed and we were unable to recover it. 00:29:06.307 [2024-10-14 14:42:46.965291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.307 [2024-10-14 14:42:46.965301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.307 qpair failed and we were unable to recover it. 
00:29:06.307 [2024-10-14 14:42:46.965609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.307 [2024-10-14 14:42:46.965619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.307 qpair failed and we were unable to recover it. 00:29:06.307 [2024-10-14 14:42:46.965804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.307 [2024-10-14 14:42:46.965814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.307 qpair failed and we were unable to recover it. 00:29:06.307 [2024-10-14 14:42:46.966079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.307 [2024-10-14 14:42:46.966090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.307 qpair failed and we were unable to recover it. 00:29:06.307 [2024-10-14 14:42:46.966393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.307 [2024-10-14 14:42:46.966403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.307 qpair failed and we were unable to recover it. 00:29:06.307 [2024-10-14 14:42:46.966796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.307 [2024-10-14 14:42:46.966805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.307 qpair failed and we were unable to recover it. 
00:29:06.307 [2024-10-14 14:42:46.967120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.307 [2024-10-14 14:42:46.967130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.307 qpair failed and we were unable to recover it. 00:29:06.307 [2024-10-14 14:42:46.967444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.307 [2024-10-14 14:42:46.967453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.307 qpair failed and we were unable to recover it. 00:29:06.307 [2024-10-14 14:42:46.967810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.307 [2024-10-14 14:42:46.967819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.307 qpair failed and we were unable to recover it. 00:29:06.307 [2024-10-14 14:42:46.968024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.307 [2024-10-14 14:42:46.968033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.307 qpair failed and we were unable to recover it. 00:29:06.307 [2024-10-14 14:42:46.968308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.307 [2024-10-14 14:42:46.968318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.307 qpair failed and we were unable to recover it. 
00:29:06.307 [2024-10-14 14:42:46.968479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.307 [2024-10-14 14:42:46.968488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.307 qpair failed and we were unable to recover it. 00:29:06.307 [2024-10-14 14:42:46.968780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.307 [2024-10-14 14:42:46.968790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.307 qpair failed and we were unable to recover it. 00:29:06.307 [2024-10-14 14:42:46.969099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.307 [2024-10-14 14:42:46.969109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.307 qpair failed and we were unable to recover it. 00:29:06.308 [2024-10-14 14:42:46.969307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.308 [2024-10-14 14:42:46.969318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.308 qpair failed and we were unable to recover it. 00:29:06.308 [2024-10-14 14:42:46.969687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.308 [2024-10-14 14:42:46.969696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.308 qpair failed and we were unable to recover it. 
00:29:06.308 [2024-10-14 14:42:46.969969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.308 [2024-10-14 14:42:46.969978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.308 qpair failed and we were unable to recover it. 00:29:06.308 [2024-10-14 14:42:46.970289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.308 [2024-10-14 14:42:46.970299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.308 qpair failed and we were unable to recover it. 00:29:06.308 [2024-10-14 14:42:46.970605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.308 [2024-10-14 14:42:46.970615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.308 qpair failed and we were unable to recover it. 00:29:06.308 [2024-10-14 14:42:46.970813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.308 [2024-10-14 14:42:46.970823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.308 qpair failed and we were unable to recover it. 00:29:06.308 [2024-10-14 14:42:46.971110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.308 [2024-10-14 14:42:46.971119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.308 qpair failed and we were unable to recover it. 
00:29:06.308 [2024-10-14 14:42:46.971434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.308 [2024-10-14 14:42:46.971445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.308 qpair failed and we were unable to recover it. 00:29:06.308 [2024-10-14 14:42:46.971654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.308 [2024-10-14 14:42:46.971664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.308 qpair failed and we were unable to recover it. 00:29:06.308 [2024-10-14 14:42:46.972032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.308 [2024-10-14 14:42:46.972042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.308 qpair failed and we were unable to recover it. 00:29:06.308 [2024-10-14 14:42:46.972234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.308 [2024-10-14 14:42:46.972246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.308 qpair failed and we were unable to recover it. 00:29:06.308 [2024-10-14 14:42:46.972564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.308 [2024-10-14 14:42:46.972575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.308 qpair failed and we were unable to recover it. 
00:29:06.308 [2024-10-14 14:42:46.972854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.308 [2024-10-14 14:42:46.972864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.308 qpair failed and we were unable to recover it.
00:29:06.308 [2024-10-14 14:42:46.973045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.308 [2024-10-14 14:42:46.973056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.308 qpair failed and we were unable to recover it.
00:29:06.308 [2024-10-14 14:42:46.973331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.308 [2024-10-14 14:42:46.973342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.308 qpair failed and we were unable to recover it.
00:29:06.308 [2024-10-14 14:42:46.973696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.308 [2024-10-14 14:42:46.973706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.308 qpair failed and we were unable to recover it.
00:29:06.308 [2024-10-14 14:42:46.973993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.308 [2024-10-14 14:42:46.974003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.308 qpair failed and we were unable to recover it.
00:29:06.308 [2024-10-14 14:42:46.974152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.308 [2024-10-14 14:42:46.974163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.308 qpair failed and we were unable to recover it.
00:29:06.308 [2024-10-14 14:42:46.974481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.308 [2024-10-14 14:42:46.974490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.308 qpair failed and we were unable to recover it.
00:29:06.308 [2024-10-14 14:42:46.974840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.308 [2024-10-14 14:42:46.974850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.308 qpair failed and we were unable to recover it.
00:29:06.308 [2024-10-14 14:42:46.975035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.308 [2024-10-14 14:42:46.975045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.308 qpair failed and we were unable to recover it.
00:29:06.308 [2024-10-14 14:42:46.975274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.308 [2024-10-14 14:42:46.975284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.308 qpair failed and we were unable to recover it.
00:29:06.308 [2024-10-14 14:42:46.975577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.308 [2024-10-14 14:42:46.975588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.308 qpair failed and we were unable to recover it.
00:29:06.308 [2024-10-14 14:42:46.975890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.308 [2024-10-14 14:42:46.975901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.308 qpair failed and we were unable to recover it.
00:29:06.308 [2024-10-14 14:42:46.976201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.308 [2024-10-14 14:42:46.976211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.308 qpair failed and we were unable to recover it.
00:29:06.308 [2024-10-14 14:42:46.976503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.308 [2024-10-14 14:42:46.976514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.308 qpair failed and we were unable to recover it.
00:29:06.308 [2024-10-14 14:42:46.976820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.308 [2024-10-14 14:42:46.976831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.308 qpair failed and we were unable to recover it.
00:29:06.308 [2024-10-14 14:42:46.977134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.308 [2024-10-14 14:42:46.977144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.308 qpair failed and we were unable to recover it.
00:29:06.308 [2024-10-14 14:42:46.977441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.308 [2024-10-14 14:42:46.977450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.308 qpair failed and we were unable to recover it.
00:29:06.308 [2024-10-14 14:42:46.977758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.308 [2024-10-14 14:42:46.977767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.308 qpair failed and we were unable to recover it.
00:29:06.308 [2024-10-14 14:42:46.978071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.308 [2024-10-14 14:42:46.978081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.308 qpair failed and we were unable to recover it.
00:29:06.308 [2024-10-14 14:42:46.978372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.308 [2024-10-14 14:42:46.978381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.308 qpair failed and we were unable to recover it.
00:29:06.308 [2024-10-14 14:42:46.978685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.308 [2024-10-14 14:42:46.978695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.308 qpair failed and we were unable to recover it.
00:29:06.308 [2024-10-14 14:42:46.979003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.308 [2024-10-14 14:42:46.979013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.308 qpair failed and we were unable to recover it.
00:29:06.308 [2024-10-14 14:42:46.979314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.308 [2024-10-14 14:42:46.979324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.308 qpair failed and we were unable to recover it.
00:29:06.308 [2024-10-14 14:42:46.979630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.308 [2024-10-14 14:42:46.979639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.308 qpair failed and we were unable to recover it.
00:29:06.308 [2024-10-14 14:42:46.979956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.308 [2024-10-14 14:42:46.979966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.308 qpair failed and we were unable to recover it.
00:29:06.308 [2024-10-14 14:42:46.980293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.308 [2024-10-14 14:42:46.980303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.308 qpair failed and we were unable to recover it.
00:29:06.308 [2024-10-14 14:42:46.980605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.308 [2024-10-14 14:42:46.980614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.308 qpair failed and we were unable to recover it.
00:29:06.308 [2024-10-14 14:42:46.980898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.308 [2024-10-14 14:42:46.980907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.308 qpair failed and we were unable to recover it.
00:29:06.308 [2024-10-14 14:42:46.981188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.309 [2024-10-14 14:42:46.981198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.309 qpair failed and we were unable to recover it.
00:29:06.309 [2024-10-14 14:42:46.981507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.309 [2024-10-14 14:42:46.981519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.309 qpair failed and we were unable to recover it.
00:29:06.309 [2024-10-14 14:42:46.981806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.309 [2024-10-14 14:42:46.981816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.309 qpair failed and we were unable to recover it.
00:29:06.309 [2024-10-14 14:42:46.982055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.309 [2024-10-14 14:42:46.982070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.309 qpair failed and we were unable to recover it.
00:29:06.309 [2024-10-14 14:42:46.982361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.309 [2024-10-14 14:42:46.982371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.309 qpair failed and we were unable to recover it.
00:29:06.309 [2024-10-14 14:42:46.982675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.309 [2024-10-14 14:42:46.982685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.309 qpair failed and we were unable to recover it.
00:29:06.309 [2024-10-14 14:42:46.982976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.309 [2024-10-14 14:42:46.982986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.309 qpair failed and we were unable to recover it.
00:29:06.309 [2024-10-14 14:42:46.983306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.309 [2024-10-14 14:42:46.983316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.309 qpair failed and we were unable to recover it.
00:29:06.309 [2024-10-14 14:42:46.983623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.309 [2024-10-14 14:42:46.983633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.309 qpair failed and we were unable to recover it.
00:29:06.309 [2024-10-14 14:42:46.983952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.309 [2024-10-14 14:42:46.983961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.309 qpair failed and we were unable to recover it.
00:29:06.309 [2024-10-14 14:42:46.984247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.309 [2024-10-14 14:42:46.984257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.309 qpair failed and we were unable to recover it.
00:29:06.309 [2024-10-14 14:42:46.984570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.309 [2024-10-14 14:42:46.984580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.309 qpair failed and we were unable to recover it.
00:29:06.309 [2024-10-14 14:42:46.984868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.309 [2024-10-14 14:42:46.984878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.309 qpair failed and we were unable to recover it.
00:29:06.309 [2024-10-14 14:42:46.985187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.309 [2024-10-14 14:42:46.985197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.309 qpair failed and we were unable to recover it.
00:29:06.309 [2024-10-14 14:42:46.985502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.309 [2024-10-14 14:42:46.985513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.309 qpair failed and we were unable to recover it.
00:29:06.309 [2024-10-14 14:42:46.985807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.309 [2024-10-14 14:42:46.985817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.309 qpair failed and we were unable to recover it.
00:29:06.309 [2024-10-14 14:42:46.986146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.309 [2024-10-14 14:42:46.986156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.309 qpair failed and we were unable to recover it.
00:29:06.309 [2024-10-14 14:42:46.986458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.309 [2024-10-14 14:42:46.986468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.309 qpair failed and we were unable to recover it.
00:29:06.309 [2024-10-14 14:42:46.986756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.309 [2024-10-14 14:42:46.986766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.309 qpair failed and we were unable to recover it.
00:29:06.309 [2024-10-14 14:42:46.987075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.309 [2024-10-14 14:42:46.987085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.309 qpair failed and we were unable to recover it.
00:29:06.309 [2024-10-14 14:42:46.987373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.309 [2024-10-14 14:42:46.987383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.309 qpair failed and we were unable to recover it.
00:29:06.309 [2024-10-14 14:42:46.987564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.309 [2024-10-14 14:42:46.987574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.309 qpair failed and we were unable to recover it.
00:29:06.309 [2024-10-14 14:42:46.987904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.309 [2024-10-14 14:42:46.987915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.309 qpair failed and we were unable to recover it.
00:29:06.309 [2024-10-14 14:42:46.988208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.309 [2024-10-14 14:42:46.988218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.309 qpair failed and we were unable to recover it.
00:29:06.309 [2024-10-14 14:42:46.988517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.309 [2024-10-14 14:42:46.988526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.309 qpair failed and we were unable to recover it.
00:29:06.309 [2024-10-14 14:42:46.988808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.309 [2024-10-14 14:42:46.988817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.309 qpair failed and we were unable to recover it.
00:29:06.309 [2024-10-14 14:42:46.989132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.309 [2024-10-14 14:42:46.989142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.309 qpair failed and we were unable to recover it.
00:29:06.309 [2024-10-14 14:42:46.989447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.309 [2024-10-14 14:42:46.989456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.309 qpair failed and we were unable to recover it.
00:29:06.309 [2024-10-14 14:42:46.989770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.309 [2024-10-14 14:42:46.989781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.309 qpair failed and we were unable to recover it.
00:29:06.309 [2024-10-14 14:42:46.990091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.309 [2024-10-14 14:42:46.990101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.309 qpair failed and we were unable to recover it.
00:29:06.309 [2024-10-14 14:42:46.990388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.309 [2024-10-14 14:42:46.990398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.309 qpair failed and we were unable to recover it.
00:29:06.309 [2024-10-14 14:42:46.990678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.309 [2024-10-14 14:42:46.990687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.309 qpair failed and we were unable to recover it.
00:29:06.309 [2024-10-14 14:42:46.990957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.309 [2024-10-14 14:42:46.990967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.309 qpair failed and we were unable to recover it.
00:29:06.309 [2024-10-14 14:42:46.991272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.309 [2024-10-14 14:42:46.991282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.309 qpair failed and we were unable to recover it.
00:29:06.309 [2024-10-14 14:42:46.991603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.309 [2024-10-14 14:42:46.991614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.309 qpair failed and we were unable to recover it.
00:29:06.309 [2024-10-14 14:42:46.992011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.309 [2024-10-14 14:42:46.992022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.309 qpair failed and we were unable to recover it.
00:29:06.309 [2024-10-14 14:42:46.992357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.309 [2024-10-14 14:42:46.992367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.309 qpair failed and we were unable to recover it.
00:29:06.309 [2024-10-14 14:42:46.992672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.309 [2024-10-14 14:42:46.992681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.309 qpair failed and we were unable to recover it.
00:29:06.309 [2024-10-14 14:42:46.992973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.309 [2024-10-14 14:42:46.992983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.309 qpair failed and we were unable to recover it.
00:29:06.309 [2024-10-14 14:42:46.993278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.309 [2024-10-14 14:42:46.993288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.309 qpair failed and we were unable to recover it.
00:29:06.309 [2024-10-14 14:42:46.993598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.309 [2024-10-14 14:42:46.993607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.309 qpair failed and we were unable to recover it.
00:29:06.309 [2024-10-14 14:42:46.993881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.310 [2024-10-14 14:42:46.993891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.310 qpair failed and we were unable to recover it.
00:29:06.310 [2024-10-14 14:42:46.994201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.310 [2024-10-14 14:42:46.994211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.310 qpair failed and we were unable to recover it.
00:29:06.310 [2024-10-14 14:42:46.994523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.310 [2024-10-14 14:42:46.994532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.310 qpair failed and we were unable to recover it.
00:29:06.310 [2024-10-14 14:42:46.994804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.310 [2024-10-14 14:42:46.994815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.310 qpair failed and we were unable to recover it.
00:29:06.310 [2024-10-14 14:42:46.995146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.310 [2024-10-14 14:42:46.995157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.310 qpair failed and we were unable to recover it.
00:29:06.310 [2024-10-14 14:42:46.995466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.310 [2024-10-14 14:42:46.995477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.310 qpair failed and we were unable to recover it.
00:29:06.310 [2024-10-14 14:42:46.995783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.310 [2024-10-14 14:42:46.995794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.310 qpair failed and we were unable to recover it.
00:29:06.310 [2024-10-14 14:42:46.996130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.310 [2024-10-14 14:42:46.996140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.310 qpair failed and we were unable to recover it.
00:29:06.310 [2024-10-14 14:42:46.996465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.310 [2024-10-14 14:42:46.996474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.310 qpair failed and we were unable to recover it.
00:29:06.310 [2024-10-14 14:42:46.996757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.310 [2024-10-14 14:42:46.996767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.310 qpair failed and we were unable to recover it.
00:29:06.310 [2024-10-14 14:42:46.997103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.310 [2024-10-14 14:42:46.997113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.310 qpair failed and we were unable to recover it.
00:29:06.310 [2024-10-14 14:42:46.997419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.310 [2024-10-14 14:42:46.997429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.310 qpair failed and we were unable to recover it.
00:29:06.310 [2024-10-14 14:42:46.997711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.310 [2024-10-14 14:42:46.997722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.310 qpair failed and we were unable to recover it.
00:29:06.310 [2024-10-14 14:42:46.998060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.310 [2024-10-14 14:42:46.998074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.310 qpair failed and we were unable to recover it.
00:29:06.310 [2024-10-14 14:42:46.998375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.310 [2024-10-14 14:42:46.998385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.310 qpair failed and we were unable to recover it.
00:29:06.310 [2024-10-14 14:42:46.998692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.310 [2024-10-14 14:42:46.998703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.310 qpair failed and we were unable to recover it.
00:29:06.310 [2024-10-14 14:42:46.999000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.310 [2024-10-14 14:42:46.999009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.310 qpair failed and we were unable to recover it.
00:29:06.310 [2024-10-14 14:42:46.999296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.310 [2024-10-14 14:42:46.999306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.310 qpair failed and we were unable to recover it.
00:29:06.310 [2024-10-14 14:42:46.999622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.310 [2024-10-14 14:42:46.999631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.310 qpair failed and we were unable to recover it.
00:29:06.310 [2024-10-14 14:42:46.999920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.310 [2024-10-14 14:42:46.999930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.310 qpair failed and we were unable to recover it.
00:29:06.310 [2024-10-14 14:42:47.000365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.310 [2024-10-14 14:42:47.000375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.310 qpair failed and we were unable to recover it.
00:29:06.310 [2024-10-14 14:42:47.000580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.310 [2024-10-14 14:42:47.000590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.310 qpair failed and we were unable to recover it.
00:29:06.310 [2024-10-14 14:42:47.000878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.310 [2024-10-14 14:42:47.000887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.310 qpair failed and we were unable to recover it.
00:29:06.310 [2024-10-14 14:42:47.001090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.310 [2024-10-14 14:42:47.001100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.310 qpair failed and we were unable to recover it.
00:29:06.310 [2024-10-14 14:42:47.001362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.310 [2024-10-14 14:42:47.001373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.310 qpair failed and we were unable to recover it.
00:29:06.588 [2024-10-14 14:42:47.001671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.588 [2024-10-14 14:42:47.001684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.588 qpair failed and we were unable to recover it.
00:29:06.588 [2024-10-14 14:42:47.002025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.588 [2024-10-14 14:42:47.002036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.588 qpair failed and we were unable to recover it. 00:29:06.588 [2024-10-14 14:42:47.002336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.588 [2024-10-14 14:42:47.002346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.588 qpair failed and we were unable to recover it. 00:29:06.588 [2024-10-14 14:42:47.002641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.588 [2024-10-14 14:42:47.002653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.588 qpair failed and we were unable to recover it. 00:29:06.588 [2024-10-14 14:42:47.003042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.588 [2024-10-14 14:42:47.003051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.588 qpair failed and we were unable to recover it. 00:29:06.588 [2024-10-14 14:42:47.003256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.588 [2024-10-14 14:42:47.003266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.588 qpair failed and we were unable to recover it. 
00:29:06.588 [2024-10-14 14:42:47.003577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.588 [2024-10-14 14:42:47.003586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.588 qpair failed and we were unable to recover it. 00:29:06.588 [2024-10-14 14:42:47.003884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.588 [2024-10-14 14:42:47.003893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.588 qpair failed and we were unable to recover it. 00:29:06.588 [2024-10-14 14:42:47.004216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.588 [2024-10-14 14:42:47.004226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.588 qpair failed and we were unable to recover it. 00:29:06.588 [2024-10-14 14:42:47.004569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.588 [2024-10-14 14:42:47.004581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.588 qpair failed and we were unable to recover it. 00:29:06.588 [2024-10-14 14:42:47.004793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.588 [2024-10-14 14:42:47.004803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.588 qpair failed and we were unable to recover it. 
00:29:06.588 [2024-10-14 14:42:47.004983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.588 [2024-10-14 14:42:47.004993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.588 qpair failed and we were unable to recover it. 00:29:06.588 [2024-10-14 14:42:47.005284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.588 [2024-10-14 14:42:47.005295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.588 qpair failed and we were unable to recover it. 00:29:06.589 [2024-10-14 14:42:47.005596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.589 [2024-10-14 14:42:47.005606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.589 qpair failed and we were unable to recover it. 00:29:06.589 [2024-10-14 14:42:47.005888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.589 [2024-10-14 14:42:47.005898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.589 qpair failed and we were unable to recover it. 00:29:06.589 [2024-10-14 14:42:47.006185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.589 [2024-10-14 14:42:47.006196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.589 qpair failed and we were unable to recover it. 
00:29:06.589 [2024-10-14 14:42:47.006390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.589 [2024-10-14 14:42:47.006400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.589 qpair failed and we were unable to recover it. 00:29:06.589 [2024-10-14 14:42:47.006675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.589 [2024-10-14 14:42:47.006685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.589 qpair failed and we were unable to recover it. 00:29:06.589 [2024-10-14 14:42:47.006964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.589 [2024-10-14 14:42:47.006974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.589 qpair failed and we were unable to recover it. 00:29:06.589 [2024-10-14 14:42:47.007165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.589 [2024-10-14 14:42:47.007175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.589 qpair failed and we were unable to recover it. 00:29:06.589 [2024-10-14 14:42:47.007512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.589 [2024-10-14 14:42:47.007522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.589 qpair failed and we were unable to recover it. 
00:29:06.589 [2024-10-14 14:42:47.007742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.589 [2024-10-14 14:42:47.007752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.589 qpair failed and we were unable to recover it. 00:29:06.589 [2024-10-14 14:42:47.008069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.589 [2024-10-14 14:42:47.008079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.589 qpair failed and we were unable to recover it. 00:29:06.589 [2024-10-14 14:42:47.008337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.589 [2024-10-14 14:42:47.008347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.589 qpair failed and we were unable to recover it. 00:29:06.589 [2024-10-14 14:42:47.008678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.589 [2024-10-14 14:42:47.008687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.589 qpair failed and we were unable to recover it. 00:29:06.589 [2024-10-14 14:42:47.008960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.589 [2024-10-14 14:42:47.008970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.589 qpair failed and we were unable to recover it. 
00:29:06.589 [2024-10-14 14:42:47.009247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.589 [2024-10-14 14:42:47.009257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.589 qpair failed and we were unable to recover it. 00:29:06.589 [2024-10-14 14:42:47.009559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.589 [2024-10-14 14:42:47.009569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.589 qpair failed and we were unable to recover it. 00:29:06.589 [2024-10-14 14:42:47.009873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.589 [2024-10-14 14:42:47.009882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.589 qpair failed and we were unable to recover it. 00:29:06.589 [2024-10-14 14:42:47.010167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.589 [2024-10-14 14:42:47.010177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.589 qpair failed and we were unable to recover it. 00:29:06.589 [2024-10-14 14:42:47.010480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.589 [2024-10-14 14:42:47.010493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.589 qpair failed and we were unable to recover it. 
00:29:06.589 [2024-10-14 14:42:47.010780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.589 [2024-10-14 14:42:47.010790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.589 qpair failed and we were unable to recover it. 00:29:06.589 [2024-10-14 14:42:47.010997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.589 [2024-10-14 14:42:47.011008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.589 qpair failed and we were unable to recover it. 00:29:06.589 [2024-10-14 14:42:47.011290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.589 [2024-10-14 14:42:47.011300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.589 qpair failed and we were unable to recover it. 00:29:06.589 [2024-10-14 14:42:47.011610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.589 [2024-10-14 14:42:47.011619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.589 qpair failed and we were unable to recover it. 00:29:06.589 [2024-10-14 14:42:47.011903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.589 [2024-10-14 14:42:47.011913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.589 qpair failed and we were unable to recover it. 
00:29:06.589 [2024-10-14 14:42:47.012192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.589 [2024-10-14 14:42:47.012202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.589 qpair failed and we were unable to recover it. 00:29:06.589 [2024-10-14 14:42:47.012514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.589 [2024-10-14 14:42:47.012524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.589 qpair failed and we were unable to recover it. 00:29:06.589 [2024-10-14 14:42:47.012792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.589 [2024-10-14 14:42:47.012801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.589 qpair failed and we were unable to recover it. 00:29:06.589 [2024-10-14 14:42:47.013105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.589 [2024-10-14 14:42:47.013115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.589 qpair failed and we were unable to recover it. 00:29:06.589 [2024-10-14 14:42:47.013428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.589 [2024-10-14 14:42:47.013438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.589 qpair failed and we were unable to recover it. 
00:29:06.589 [2024-10-14 14:42:47.013747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.589 [2024-10-14 14:42:47.013758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.589 qpair failed and we were unable to recover it. 00:29:06.589 [2024-10-14 14:42:47.014046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.589 [2024-10-14 14:42:47.014056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.589 qpair failed and we were unable to recover it. 00:29:06.589 [2024-10-14 14:42:47.014432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.589 [2024-10-14 14:42:47.014443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.589 qpair failed and we were unable to recover it. 00:29:06.589 [2024-10-14 14:42:47.014740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.589 [2024-10-14 14:42:47.014750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.589 qpair failed and we were unable to recover it. 00:29:06.589 [2024-10-14 14:42:47.015040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.589 [2024-10-14 14:42:47.015049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.589 qpair failed and we were unable to recover it. 
00:29:06.589 [2024-10-14 14:42:47.015347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.589 [2024-10-14 14:42:47.015357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.589 qpair failed and we were unable to recover it. 00:29:06.589 [2024-10-14 14:42:47.015674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.589 [2024-10-14 14:42:47.015684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.589 qpair failed and we were unable to recover it. 00:29:06.589 [2024-10-14 14:42:47.015992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.589 [2024-10-14 14:42:47.016002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.589 qpair failed and we were unable to recover it. 00:29:06.589 [2024-10-14 14:42:47.016290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.589 [2024-10-14 14:42:47.016300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.589 qpair failed and we were unable to recover it. 00:29:06.589 [2024-10-14 14:42:47.016618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.589 [2024-10-14 14:42:47.016628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.589 qpair failed and we were unable to recover it. 
00:29:06.589 [2024-10-14 14:42:47.016907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.589 [2024-10-14 14:42:47.016917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.589 qpair failed and we were unable to recover it. 00:29:06.589 [2024-10-14 14:42:47.017240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.589 [2024-10-14 14:42:47.017251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.589 qpair failed and we were unable to recover it. 00:29:06.589 [2024-10-14 14:42:47.017559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.589 [2024-10-14 14:42:47.017569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.590 qpair failed and we were unable to recover it. 00:29:06.590 [2024-10-14 14:42:47.017848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.590 [2024-10-14 14:42:47.017857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.590 qpair failed and we were unable to recover it. 00:29:06.590 [2024-10-14 14:42:47.018167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.590 [2024-10-14 14:42:47.018177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.590 qpair failed and we were unable to recover it. 
00:29:06.590 [2024-10-14 14:42:47.018526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.590 [2024-10-14 14:42:47.018536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.590 qpair failed and we were unable to recover it. 00:29:06.590 [2024-10-14 14:42:47.018847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.590 [2024-10-14 14:42:47.018857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.590 qpair failed and we were unable to recover it. 00:29:06.590 [2024-10-14 14:42:47.019155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.590 [2024-10-14 14:42:47.019165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.590 qpair failed and we were unable to recover it. 00:29:06.590 [2024-10-14 14:42:47.019469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.590 [2024-10-14 14:42:47.019478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.590 qpair failed and we were unable to recover it. 00:29:06.590 [2024-10-14 14:42:47.019797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.590 [2024-10-14 14:42:47.019807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.590 qpair failed and we were unable to recover it. 
00:29:06.590 [2024-10-14 14:42:47.020117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.590 [2024-10-14 14:42:47.020128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.590 qpair failed and we were unable to recover it. 00:29:06.590 [2024-10-14 14:42:47.020433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.590 [2024-10-14 14:42:47.020443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.590 qpair failed and we were unable to recover it. 00:29:06.590 [2024-10-14 14:42:47.020748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.590 [2024-10-14 14:42:47.020757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.590 qpair failed and we were unable to recover it. 00:29:06.590 [2024-10-14 14:42:47.021040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.590 [2024-10-14 14:42:47.021049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.590 qpair failed and we were unable to recover it. 00:29:06.590 [2024-10-14 14:42:47.021353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.590 [2024-10-14 14:42:47.021364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.590 qpair failed and we were unable to recover it. 
00:29:06.590 [2024-10-14 14:42:47.021651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.590 [2024-10-14 14:42:47.021661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.590 qpair failed and we were unable to recover it. 00:29:06.590 [2024-10-14 14:42:47.021943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.590 [2024-10-14 14:42:47.021952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.590 qpair failed and we were unable to recover it. 00:29:06.590 [2024-10-14 14:42:47.022279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.590 [2024-10-14 14:42:47.022289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.590 qpair failed and we were unable to recover it. 00:29:06.590 [2024-10-14 14:42:47.022584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.590 [2024-10-14 14:42:47.022594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.590 qpair failed and we were unable to recover it. 00:29:06.590 [2024-10-14 14:42:47.022874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.590 [2024-10-14 14:42:47.022885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.590 qpair failed and we were unable to recover it. 
00:29:06.590 [2024-10-14 14:42:47.023198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.590 [2024-10-14 14:42:47.023210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.590 qpair failed and we were unable to recover it. 00:29:06.590 [2024-10-14 14:42:47.023517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.590 [2024-10-14 14:42:47.023528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.590 qpair failed and we were unable to recover it. 00:29:06.590 [2024-10-14 14:42:47.023833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.590 [2024-10-14 14:42:47.023843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.590 qpair failed and we were unable to recover it. 00:29:06.590 [2024-10-14 14:42:47.024140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.590 [2024-10-14 14:42:47.024150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.590 qpair failed and we were unable to recover it. 00:29:06.590 [2024-10-14 14:42:47.024429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.590 [2024-10-14 14:42:47.024439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.590 qpair failed and we were unable to recover it. 
00:29:06.590 [2024-10-14 14:42:47.024744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.590 [2024-10-14 14:42:47.024754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.590 qpair failed and we were unable to recover it. 00:29:06.590 [2024-10-14 14:42:47.025067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.590 [2024-10-14 14:42:47.025077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.590 qpair failed and we were unable to recover it. 00:29:06.590 [2024-10-14 14:42:47.025376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.590 [2024-10-14 14:42:47.025386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.590 qpair failed and we were unable to recover it. 00:29:06.590 [2024-10-14 14:42:47.025695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.590 [2024-10-14 14:42:47.025704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.590 qpair failed and we were unable to recover it. 00:29:06.590 [2024-10-14 14:42:47.026022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.590 [2024-10-14 14:42:47.026032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.590 qpair failed and we were unable to recover it. 
00:29:06.590 [2024-10-14 14:42:47.026328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.590 [2024-10-14 14:42:47.026339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.590 qpair failed and we were unable to recover it. 00:29:06.590 [2024-10-14 14:42:47.026623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.590 [2024-10-14 14:42:47.026633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.590 qpair failed and we were unable to recover it. 00:29:06.590 [2024-10-14 14:42:47.026939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.590 [2024-10-14 14:42:47.026949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.590 qpair failed and we were unable to recover it. 00:29:06.590 [2024-10-14 14:42:47.027244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.590 [2024-10-14 14:42:47.027254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.590 qpair failed and we were unable to recover it. 00:29:06.590 [2024-10-14 14:42:47.027553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.590 [2024-10-14 14:42:47.027562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.590 qpair failed and we were unable to recover it. 
00:29:06.590 [2024-10-14 14:42:47.027836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.590 [2024-10-14 14:42:47.027846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.590 qpair failed and we were unable to recover it. 00:29:06.590 [2024-10-14 14:42:47.028048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.590 [2024-10-14 14:42:47.028057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.590 qpair failed and we were unable to recover it. 00:29:06.590 [2024-10-14 14:42:47.028339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.590 [2024-10-14 14:42:47.028349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.590 qpair failed and we were unable to recover it. 00:29:06.590 [2024-10-14 14:42:47.028650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.590 [2024-10-14 14:42:47.028659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.590 qpair failed and we were unable to recover it. 00:29:06.590 [2024-10-14 14:42:47.028831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.590 [2024-10-14 14:42:47.028842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.590 qpair failed and we were unable to recover it. 
00:29:06.590 [2024-10-14 14:42:47.029169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.590 [2024-10-14 14:42:47.029179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.590 qpair failed and we were unable to recover it. 00:29:06.590 [2024-10-14 14:42:47.029486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.590 [2024-10-14 14:42:47.029496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.590 qpair failed and we were unable to recover it. 00:29:06.590 [2024-10-14 14:42:47.029787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.590 [2024-10-14 14:42:47.029797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.590 qpair failed and we were unable to recover it. 00:29:06.590 [2024-10-14 14:42:47.030077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.590 [2024-10-14 14:42:47.030088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.591 qpair failed and we were unable to recover it. 00:29:06.591 [2024-10-14 14:42:47.030378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.591 [2024-10-14 14:42:47.030388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.591 qpair failed and we were unable to recover it. 
00:29:06.591 [2024-10-14 14:42:47.030694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.591 [2024-10-14 14:42:47.030703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.591 qpair failed and we were unable to recover it. 00:29:06.591 [2024-10-14 14:42:47.031010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.591 [2024-10-14 14:42:47.031020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.591 qpair failed and we were unable to recover it. 00:29:06.591 [2024-10-14 14:42:47.031312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.591 [2024-10-14 14:42:47.031324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.591 qpair failed and we were unable to recover it. 00:29:06.591 [2024-10-14 14:42:47.031640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.591 [2024-10-14 14:42:47.031650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.591 qpair failed and we were unable to recover it. 00:29:06.591 [2024-10-14 14:42:47.031956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.591 [2024-10-14 14:42:47.031966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.591 qpair failed and we were unable to recover it. 
00:29:06.591 [2024-10-14 14:42:47.032276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.591 [2024-10-14 14:42:47.032286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.591 qpair failed and we were unable to recover it. 00:29:06.591 [2024-10-14 14:42:47.032557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.591 [2024-10-14 14:42:47.032567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.591 qpair failed and we were unable to recover it. 00:29:06.591 [2024-10-14 14:42:47.032889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.591 [2024-10-14 14:42:47.032899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.591 qpair failed and we were unable to recover it. 00:29:06.591 [2024-10-14 14:42:47.033130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.591 [2024-10-14 14:42:47.033140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.591 qpair failed and we were unable to recover it. 00:29:06.591 [2024-10-14 14:42:47.033393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.591 [2024-10-14 14:42:47.033403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.591 qpair failed and we were unable to recover it. 
00:29:06.591 [2024-10-14 14:42:47.033564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.591 [2024-10-14 14:42:47.033575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.591 qpair failed and we were unable to recover it. 00:29:06.591 [2024-10-14 14:42:47.033846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.591 [2024-10-14 14:42:47.033856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.591 qpair failed and we were unable to recover it. 00:29:06.591 [2024-10-14 14:42:47.034160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.591 [2024-10-14 14:42:47.034171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.591 qpair failed and we were unable to recover it. 00:29:06.591 [2024-10-14 14:42:47.034464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.591 [2024-10-14 14:42:47.034473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.591 qpair failed and we were unable to recover it. 00:29:06.591 [2024-10-14 14:42:47.034814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.591 [2024-10-14 14:42:47.034823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.591 qpair failed and we were unable to recover it. 
00:29:06.591 [2024-10-14 14:42:47.035039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.591 [2024-10-14 14:42:47.035049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.591 qpair failed and we were unable to recover it. 00:29:06.591 [2024-10-14 14:42:47.035367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.591 [2024-10-14 14:42:47.035377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.591 qpair failed and we were unable to recover it. 00:29:06.591 [2024-10-14 14:42:47.035680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.591 [2024-10-14 14:42:47.035690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.591 qpair failed and we were unable to recover it. 00:29:06.591 [2024-10-14 14:42:47.035983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.591 [2024-10-14 14:42:47.035993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.591 qpair failed and we were unable to recover it. 00:29:06.591 [2024-10-14 14:42:47.036204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.591 [2024-10-14 14:42:47.036215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.591 qpair failed and we were unable to recover it. 
00:29:06.591 [2024-10-14 14:42:47.036437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.591 [2024-10-14 14:42:47.036447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.591 qpair failed and we were unable to recover it. 00:29:06.591 [2024-10-14 14:42:47.036743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.591 [2024-10-14 14:42:47.036752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.591 qpair failed and we were unable to recover it. 00:29:06.591 [2024-10-14 14:42:47.037049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.591 [2024-10-14 14:42:47.037059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.591 qpair failed and we were unable to recover it. 00:29:06.591 [2024-10-14 14:42:47.037379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.591 [2024-10-14 14:42:47.037389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.591 qpair failed and we were unable to recover it. 00:29:06.591 [2024-10-14 14:42:47.037692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.591 [2024-10-14 14:42:47.037702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.591 qpair failed and we were unable to recover it. 
00:29:06.591 [2024-10-14 14:42:47.038015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.591 [2024-10-14 14:42:47.038025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.591 qpair failed and we were unable to recover it. 00:29:06.591 [2024-10-14 14:42:47.038334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.591 [2024-10-14 14:42:47.038345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.591 qpair failed and we were unable to recover it. 00:29:06.591 [2024-10-14 14:42:47.038653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.591 [2024-10-14 14:42:47.038663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.591 qpair failed and we were unable to recover it. 00:29:06.591 [2024-10-14 14:42:47.038966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.591 [2024-10-14 14:42:47.038977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.591 qpair failed and we were unable to recover it. 00:29:06.591 [2024-10-14 14:42:47.039297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.591 [2024-10-14 14:42:47.039308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.591 qpair failed and we were unable to recover it. 
00:29:06.591 [2024-10-14 14:42:47.039652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.591 [2024-10-14 14:42:47.039661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.591 qpair failed and we were unable to recover it. 00:29:06.591 [2024-10-14 14:42:47.039970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.591 [2024-10-14 14:42:47.039979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.591 qpair failed and we were unable to recover it. 00:29:06.591 [2024-10-14 14:42:47.040252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.591 [2024-10-14 14:42:47.040262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.591 qpair failed and we were unable to recover it. 00:29:06.591 [2024-10-14 14:42:47.040543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.591 [2024-10-14 14:42:47.040553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.591 qpair failed and we were unable to recover it. 00:29:06.591 [2024-10-14 14:42:47.040862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.591 [2024-10-14 14:42:47.040872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.591 qpair failed and we were unable to recover it. 
00:29:06.591 [2024-10-14 14:42:47.041191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.591 [2024-10-14 14:42:47.041201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.591 qpair failed and we were unable to recover it. 00:29:06.591 [2024-10-14 14:42:47.041610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.591 [2024-10-14 14:42:47.041620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.591 qpair failed and we were unable to recover it. 00:29:06.591 [2024-10-14 14:42:47.041824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.591 [2024-10-14 14:42:47.041835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.591 qpair failed and we were unable to recover it. 00:29:06.591 [2024-10-14 14:42:47.042116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.591 [2024-10-14 14:42:47.042126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.591 qpair failed and we were unable to recover it. 00:29:06.591 [2024-10-14 14:42:47.042436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.592 [2024-10-14 14:42:47.042445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.592 qpair failed and we were unable to recover it. 
00:29:06.592 [2024-10-14 14:42:47.042760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.592 [2024-10-14 14:42:47.042769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.592 qpair failed and we were unable to recover it. 00:29:06.592 [2024-10-14 14:42:47.043164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.592 [2024-10-14 14:42:47.043174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.592 qpair failed and we were unable to recover it. 00:29:06.592 [2024-10-14 14:42:47.043467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.592 [2024-10-14 14:42:47.043477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.592 qpair failed and we were unable to recover it. 00:29:06.592 [2024-10-14 14:42:47.043744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.592 [2024-10-14 14:42:47.043755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.592 qpair failed and we were unable to recover it. 00:29:06.592 [2024-10-14 14:42:47.044100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.592 [2024-10-14 14:42:47.044111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.592 qpair failed and we were unable to recover it. 
00:29:06.592 [2024-10-14 14:42:47.044398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.592 [2024-10-14 14:42:47.044408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.592 qpair failed and we were unable to recover it. 00:29:06.592 [2024-10-14 14:42:47.044711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.592 [2024-10-14 14:42:47.044721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.592 qpair failed and we were unable to recover it. 00:29:06.592 [2024-10-14 14:42:47.044890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.592 [2024-10-14 14:42:47.044902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.592 qpair failed and we were unable to recover it. 00:29:06.592 [2024-10-14 14:42:47.045230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.592 [2024-10-14 14:42:47.045241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.592 qpair failed and we were unable to recover it. 00:29:06.592 [2024-10-14 14:42:47.045548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.592 [2024-10-14 14:42:47.045558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.592 qpair failed and we were unable to recover it. 
00:29:06.592 [2024-10-14 14:42:47.045859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.592 [2024-10-14 14:42:47.045869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.592 qpair failed and we were unable to recover it. 00:29:06.592 [2024-10-14 14:42:47.046179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.592 [2024-10-14 14:42:47.046189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.592 qpair failed and we were unable to recover it. 00:29:06.592 [2024-10-14 14:42:47.046497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.592 [2024-10-14 14:42:47.046507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.592 qpair failed and we were unable to recover it. 00:29:06.592 [2024-10-14 14:42:47.046555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.592 [2024-10-14 14:42:47.046566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.592 qpair failed and we were unable to recover it. 00:29:06.592 [2024-10-14 14:42:47.046874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.592 [2024-10-14 14:42:47.046884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.592 qpair failed and we were unable to recover it. 
00:29:06.592 [2024-10-14 14:42:47.047191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.592 [2024-10-14 14:42:47.047202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.592 qpair failed and we were unable to recover it. 00:29:06.592 [2024-10-14 14:42:47.047448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.592 [2024-10-14 14:42:47.047457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.592 qpair failed and we were unable to recover it. 00:29:06.592 [2024-10-14 14:42:47.047674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.592 [2024-10-14 14:42:47.047684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.592 qpair failed and we were unable to recover it. 00:29:06.592 [2024-10-14 14:42:47.047852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.592 [2024-10-14 14:42:47.047863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.592 qpair failed and we were unable to recover it. 00:29:06.592 [2024-10-14 14:42:47.048158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.592 [2024-10-14 14:42:47.048169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.592 qpair failed and we were unable to recover it. 
00:29:06.592 [2024-10-14 14:42:47.048483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.592 [2024-10-14 14:42:47.048493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.592 qpair failed and we were unable to recover it. 00:29:06.592 [2024-10-14 14:42:47.048794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.592 [2024-10-14 14:42:47.048805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.592 qpair failed and we were unable to recover it. 00:29:06.592 [2024-10-14 14:42:47.049124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.592 [2024-10-14 14:42:47.049134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.592 qpair failed and we were unable to recover it. 00:29:06.592 [2024-10-14 14:42:47.049456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.592 [2024-10-14 14:42:47.049467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.592 qpair failed and we were unable to recover it. 00:29:06.592 [2024-10-14 14:42:47.049621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.592 [2024-10-14 14:42:47.049631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.592 qpair failed and we were unable to recover it. 
00:29:06.592 [2024-10-14 14:42:47.049905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.592 [2024-10-14 14:42:47.049915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.592 qpair failed and we were unable to recover it. 00:29:06.592 [2024-10-14 14:42:47.050243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.592 [2024-10-14 14:42:47.050253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.592 qpair failed and we were unable to recover it. 00:29:06.592 [2024-10-14 14:42:47.050559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.592 [2024-10-14 14:42:47.050568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.592 qpair failed and we were unable to recover it. 00:29:06.592 [2024-10-14 14:42:47.050883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.592 [2024-10-14 14:42:47.050892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.592 qpair failed and we were unable to recover it. 00:29:06.592 [2024-10-14 14:42:47.051202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.592 [2024-10-14 14:42:47.051212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.592 qpair failed and we were unable to recover it. 
00:29:06.592 [2024-10-14 14:42:47.051503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.592 [2024-10-14 14:42:47.051513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.592 qpair failed and we were unable to recover it. 00:29:06.592 [2024-10-14 14:42:47.051796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.592 [2024-10-14 14:42:47.051806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.592 qpair failed and we were unable to recover it. 00:29:06.592 [2024-10-14 14:42:47.052109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.592 [2024-10-14 14:42:47.052119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.592 qpair failed and we were unable to recover it. 00:29:06.592 [2024-10-14 14:42:47.052422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.592 [2024-10-14 14:42:47.052431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.592 qpair failed and we were unable to recover it. 00:29:06.592 [2024-10-14 14:42:47.052632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.592 [2024-10-14 14:42:47.052643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.592 qpair failed and we were unable to recover it. 
00:29:06.592 [2024-10-14 14:42:47.052976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.592 [2024-10-14 14:42:47.052986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.593 qpair failed and we were unable to recover it. 00:29:06.593 [2024-10-14 14:42:47.053289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.593 [2024-10-14 14:42:47.053299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.593 qpair failed and we were unable to recover it. 00:29:06.593 [2024-10-14 14:42:47.053603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.593 [2024-10-14 14:42:47.053613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.593 qpair failed and we were unable to recover it. 00:29:06.593 [2024-10-14 14:42:47.053896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.593 [2024-10-14 14:42:47.053906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.593 qpair failed and we were unable to recover it. 00:29:06.593 [2024-10-14 14:42:47.054225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.593 [2024-10-14 14:42:47.054235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.593 qpair failed and we were unable to recover it. 
00:29:06.593 [2024-10-14 14:42:47.054548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.593 [2024-10-14 14:42:47.054558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.593 qpair failed and we were unable to recover it. 00:29:06.593 [2024-10-14 14:42:47.054840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.593 [2024-10-14 14:42:47.054850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.593 qpair failed and we were unable to recover it. 00:29:06.593 [2024-10-14 14:42:47.055160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.593 [2024-10-14 14:42:47.055171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.593 qpair failed and we were unable to recover it. 00:29:06.593 [2024-10-14 14:42:47.055333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.593 [2024-10-14 14:42:47.055344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.593 qpair failed and we were unable to recover it. 00:29:06.593 [2024-10-14 14:42:47.055684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.593 [2024-10-14 14:42:47.055695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.593 qpair failed and we were unable to recover it. 
00:29:06.593 [2024-10-14 14:42:47.056001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.593 [2024-10-14 14:42:47.056011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.593 qpair failed and we were unable to recover it. 00:29:06.593 [2024-10-14 14:42:47.056297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.593 [2024-10-14 14:42:47.056307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.593 qpair failed and we were unable to recover it. 00:29:06.593 [2024-10-14 14:42:47.056610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.593 [2024-10-14 14:42:47.056619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.593 qpair failed and we were unable to recover it. 00:29:06.593 [2024-10-14 14:42:47.056804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.593 [2024-10-14 14:42:47.056814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.593 qpair failed and we were unable to recover it. 00:29:06.593 [2024-10-14 14:42:47.057174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.593 [2024-10-14 14:42:47.057184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.593 qpair failed and we were unable to recover it. 
00:29:06.593 [2024-10-14 14:42:47.057473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.593 [2024-10-14 14:42:47.057482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.593 qpair failed and we were unable to recover it. 00:29:06.593 [2024-10-14 14:42:47.057806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.593 [2024-10-14 14:42:47.057815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.593 qpair failed and we were unable to recover it. 00:29:06.593 [2024-10-14 14:42:47.058104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.593 [2024-10-14 14:42:47.058115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.593 qpair failed and we were unable to recover it. 00:29:06.593 [2024-10-14 14:42:47.058409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.593 [2024-10-14 14:42:47.058419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.593 qpair failed and we were unable to recover it. 00:29:06.593 [2024-10-14 14:42:47.058708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.593 [2024-10-14 14:42:47.058718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.593 qpair failed and we were unable to recover it. 
00:29:06.593 [2024-10-14 14:42:47.059017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.593 [2024-10-14 14:42:47.059027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.593 qpair failed and we were unable to recover it. 00:29:06.593 [2024-10-14 14:42:47.059392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.593 [2024-10-14 14:42:47.059402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.593 qpair failed and we were unable to recover it. 00:29:06.593 [2024-10-14 14:42:47.059691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.593 [2024-10-14 14:42:47.059700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.593 qpair failed and we were unable to recover it. 00:29:06.593 [2024-10-14 14:42:47.059982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.593 [2024-10-14 14:42:47.059991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.593 qpair failed and we were unable to recover it. 00:29:06.593 [2024-10-14 14:42:47.060301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.593 [2024-10-14 14:42:47.060311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.593 qpair failed and we were unable to recover it. 
00:29:06.593 [2024-10-14 14:42:47.060625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.593 [2024-10-14 14:42:47.060634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.593 qpair failed and we were unable to recover it. 00:29:06.593 [2024-10-14 14:42:47.060915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.593 [2024-10-14 14:42:47.060925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.593 qpair failed and we were unable to recover it. 00:29:06.593 [2024-10-14 14:42:47.061210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.593 [2024-10-14 14:42:47.061220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.593 qpair failed and we were unable to recover it. 00:29:06.593 [2024-10-14 14:42:47.061539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.593 [2024-10-14 14:42:47.061549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.593 qpair failed and we were unable to recover it. 00:29:06.593 [2024-10-14 14:42:47.061903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.593 [2024-10-14 14:42:47.061914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.593 qpair failed and we were unable to recover it. 
00:29:06.593 [2024-10-14 14:42:47.062221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.593 [2024-10-14 14:42:47.062232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.593 qpair failed and we were unable to recover it. 00:29:06.593 [2024-10-14 14:42:47.062579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.593 [2024-10-14 14:42:47.062589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.593 qpair failed and we were unable to recover it. 00:29:06.593 [2024-10-14 14:42:47.062871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.593 [2024-10-14 14:42:47.062881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.593 qpair failed and we were unable to recover it. 00:29:06.593 [2024-10-14 14:42:47.063175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.593 [2024-10-14 14:42:47.063185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.593 qpair failed and we were unable to recover it. 00:29:06.593 [2024-10-14 14:42:47.063495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.593 [2024-10-14 14:42:47.063504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.593 qpair failed and we were unable to recover it. 
00:29:06.593 [2024-10-14 14:42:47.063675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.593 [2024-10-14 14:42:47.063685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.593 qpair failed and we were unable to recover it. 00:29:06.593 [2024-10-14 14:42:47.063963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.593 [2024-10-14 14:42:47.063975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.593 qpair failed and we were unable to recover it. 00:29:06.593 [2024-10-14 14:42:47.064187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.593 [2024-10-14 14:42:47.064197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.593 qpair failed and we were unable to recover it. 00:29:06.593 [2024-10-14 14:42:47.064514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.593 [2024-10-14 14:42:47.064524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.593 qpair failed and we were unable to recover it. 00:29:06.593 [2024-10-14 14:42:47.064811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.593 [2024-10-14 14:42:47.064820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.593 qpair failed and we were unable to recover it. 
00:29:06.593 [2024-10-14 14:42:47.065113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.594 [2024-10-14 14:42:47.065124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.594 qpair failed and we were unable to recover it. 00:29:06.594 [2024-10-14 14:42:47.065425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.594 [2024-10-14 14:42:47.065435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.594 qpair failed and we were unable to recover it. 00:29:06.594 [2024-10-14 14:42:47.065747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.594 [2024-10-14 14:42:47.065756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.594 qpair failed and we were unable to recover it. 00:29:06.594 [2024-10-14 14:42:47.066045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.594 [2024-10-14 14:42:47.066055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.594 qpair failed and we were unable to recover it. 00:29:06.594 [2024-10-14 14:42:47.066352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.594 [2024-10-14 14:42:47.066362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.594 qpair failed and we were unable to recover it. 
00:29:06.594 [2024-10-14 14:42:47.066639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.594 [2024-10-14 14:42:47.066648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.594 qpair failed and we were unable to recover it. 00:29:06.594 [2024-10-14 14:42:47.066973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.594 [2024-10-14 14:42:47.066983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.594 qpair failed and we were unable to recover it. 00:29:06.594 [2024-10-14 14:42:47.067317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.594 [2024-10-14 14:42:47.067328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.594 qpair failed and we were unable to recover it. 00:29:06.594 [2024-10-14 14:42:47.067556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.594 [2024-10-14 14:42:47.067566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.594 qpair failed and we were unable to recover it. 00:29:06.594 [2024-10-14 14:42:47.067776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.594 [2024-10-14 14:42:47.067786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.594 qpair failed and we were unable to recover it. 
00:29:06.594 [2024-10-14 14:42:47.067998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.594 [2024-10-14 14:42:47.068007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.594 qpair failed and we were unable to recover it. 00:29:06.594 [2024-10-14 14:42:47.068349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.594 [2024-10-14 14:42:47.068359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.594 qpair failed and we were unable to recover it. 00:29:06.594 [2024-10-14 14:42:47.068562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.594 [2024-10-14 14:42:47.068572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.594 qpair failed and we were unable to recover it. 00:29:06.594 [2024-10-14 14:42:47.068767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.594 [2024-10-14 14:42:47.068776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.594 qpair failed and we were unable to recover it. 00:29:06.594 [2024-10-14 14:42:47.069069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.594 [2024-10-14 14:42:47.069079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.594 qpair failed and we were unable to recover it. 
00:29:06.594 [2024-10-14 14:42:47.069402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.594 [2024-10-14 14:42:47.069412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.594 qpair failed and we were unable to recover it. 00:29:06.594 [2024-10-14 14:42:47.069699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.594 [2024-10-14 14:42:47.069708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.594 qpair failed and we were unable to recover it. 00:29:06.594 [2024-10-14 14:42:47.070017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.594 [2024-10-14 14:42:47.070026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.594 qpair failed and we were unable to recover it. 00:29:06.594 [2024-10-14 14:42:47.070339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.594 [2024-10-14 14:42:47.070349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.594 qpair failed and we were unable to recover it. 00:29:06.594 [2024-10-14 14:42:47.070507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.594 [2024-10-14 14:42:47.070519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.594 qpair failed and we were unable to recover it. 
00:29:06.594 [2024-10-14 14:42:47.070836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.594 [2024-10-14 14:42:47.070846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.594 qpair failed and we were unable to recover it. 00:29:06.594 [2024-10-14 14:42:47.071153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.594 [2024-10-14 14:42:47.071164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.594 qpair failed and we were unable to recover it. 00:29:06.594 [2024-10-14 14:42:47.071469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.594 [2024-10-14 14:42:47.071480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.594 qpair failed and we were unable to recover it. 00:29:06.594 [2024-10-14 14:42:47.071785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.594 [2024-10-14 14:42:47.071794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.594 qpair failed and we were unable to recover it. 00:29:06.594 [2024-10-14 14:42:47.072119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.594 [2024-10-14 14:42:47.072129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.594 qpair failed and we were unable to recover it. 
00:29:06.594 [2024-10-14 14:42:47.072515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.594 [2024-10-14 14:42:47.072524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.594 qpair failed and we were unable to recover it. 00:29:06.594 [2024-10-14 14:42:47.072810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.594 [2024-10-14 14:42:47.072819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.594 qpair failed and we were unable to recover it. 00:29:06.594 [2024-10-14 14:42:47.073128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.594 [2024-10-14 14:42:47.073138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.594 qpair failed and we were unable to recover it. 00:29:06.594 [2024-10-14 14:42:47.073422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.594 [2024-10-14 14:42:47.073432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.594 qpair failed and we were unable to recover it. 00:29:06.594 [2024-10-14 14:42:47.073618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.594 [2024-10-14 14:42:47.073628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.594 qpair failed and we were unable to recover it. 
00:29:06.594 [2024-10-14 14:42:47.073949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.594 [2024-10-14 14:42:47.073959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.594 qpair failed and we were unable to recover it. 00:29:06.594 [2024-10-14 14:42:47.074215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.594 [2024-10-14 14:42:47.074225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.594 qpair failed and we were unable to recover it. 00:29:06.594 [2024-10-14 14:42:47.074506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.594 [2024-10-14 14:42:47.074515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.594 qpair failed and we were unable to recover it. 00:29:06.594 [2024-10-14 14:42:47.074828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.594 [2024-10-14 14:42:47.074838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.594 qpair failed and we were unable to recover it. 00:29:06.594 [2024-10-14 14:42:47.075137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.594 [2024-10-14 14:42:47.075147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.594 qpair failed and we were unable to recover it. 
00:29:06.594 [2024-10-14 14:42:47.075435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.594 [2024-10-14 14:42:47.075445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.594 qpair failed and we were unable to recover it. 00:29:06.594 [2024-10-14 14:42:47.075739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.594 [2024-10-14 14:42:47.075748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.594 qpair failed and we were unable to recover it. 00:29:06.594 [2024-10-14 14:42:47.076052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.594 [2024-10-14 14:42:47.076066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.594 qpair failed and we were unable to recover it. 00:29:06.594 [2024-10-14 14:42:47.076389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.594 [2024-10-14 14:42:47.076399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.594 qpair failed and we were unable to recover it. 00:29:06.594 [2024-10-14 14:42:47.076600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.594 [2024-10-14 14:42:47.076610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.594 qpair failed and we were unable to recover it. 
00:29:06.594 [2024-10-14 14:42:47.076938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.594 [2024-10-14 14:42:47.076949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.594 qpair failed and we were unable to recover it. 00:29:06.595 [2024-10-14 14:42:47.077297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.595 [2024-10-14 14:42:47.077307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.595 qpair failed and we were unable to recover it. 00:29:06.595 [2024-10-14 14:42:47.077616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.595 [2024-10-14 14:42:47.077625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.595 qpair failed and we were unable to recover it. 00:29:06.595 [2024-10-14 14:42:47.077943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.595 [2024-10-14 14:42:47.077953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.595 qpair failed and we were unable to recover it. 00:29:06.595 [2024-10-14 14:42:47.078282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.595 [2024-10-14 14:42:47.078292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.595 qpair failed and we were unable to recover it. 
00:29:06.595 [2024-10-14 14:42:47.078589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.595 [2024-10-14 14:42:47.078599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.595 qpair failed and we were unable to recover it. 00:29:06.595 [2024-10-14 14:42:47.078779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.595 [2024-10-14 14:42:47.078789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.595 qpair failed and we were unable to recover it. 00:29:06.595 [2024-10-14 14:42:47.079082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.595 [2024-10-14 14:42:47.079093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.595 qpair failed and we were unable to recover it. 00:29:06.595 [2024-10-14 14:42:47.079387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.595 [2024-10-14 14:42:47.079397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.595 qpair failed and we were unable to recover it. 00:29:06.595 [2024-10-14 14:42:47.079585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.595 [2024-10-14 14:42:47.079596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.595 qpair failed and we were unable to recover it. 
00:29:06.595 [2024-10-14 14:42:47.079960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.595 [2024-10-14 14:42:47.079969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.595 qpair failed and we were unable to recover it. 00:29:06.595 [2024-10-14 14:42:47.080258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.595 [2024-10-14 14:42:47.080269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.595 qpair failed and we were unable to recover it. 00:29:06.595 [2024-10-14 14:42:47.080557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.595 [2024-10-14 14:42:47.080567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.595 qpair failed and we were unable to recover it. 00:29:06.595 [2024-10-14 14:42:47.080862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.595 [2024-10-14 14:42:47.080873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.595 qpair failed and we were unable to recover it. 00:29:06.595 [2024-10-14 14:42:47.081148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.595 [2024-10-14 14:42:47.081158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.595 qpair failed and we were unable to recover it. 
00:29:06.595 [2024-10-14 14:42:47.081453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.595 [2024-10-14 14:42:47.081462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.595 qpair failed and we were unable to recover it. 00:29:06.595 [2024-10-14 14:42:47.081743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.595 [2024-10-14 14:42:47.081752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.595 qpair failed and we were unable to recover it. 00:29:06.595 [2024-10-14 14:42:47.082110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.595 [2024-10-14 14:42:47.082120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.595 qpair failed and we were unable to recover it. 00:29:06.595 [2024-10-14 14:42:47.082445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.595 [2024-10-14 14:42:47.082455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.595 qpair failed and we were unable to recover it. 00:29:06.595 [2024-10-14 14:42:47.082774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.595 [2024-10-14 14:42:47.082785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.595 qpair failed and we were unable to recover it. 
00:29:06.595 [2024-10-14 14:42:47.083103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.595 [2024-10-14 14:42:47.083114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.595 qpair failed and we were unable to recover it. 00:29:06.595 [2024-10-14 14:42:47.083415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.595 [2024-10-14 14:42:47.083425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.595 qpair failed and we were unable to recover it. 00:29:06.595 [2024-10-14 14:42:47.083741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.595 [2024-10-14 14:42:47.083750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.595 qpair failed and we were unable to recover it. 00:29:06.595 [2024-10-14 14:42:47.083949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.595 [2024-10-14 14:42:47.083959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.595 qpair failed and we were unable to recover it. 00:29:06.595 [2024-10-14 14:42:47.084173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.595 [2024-10-14 14:42:47.084185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.595 qpair failed and we were unable to recover it. 
00:29:06.595 [2024-10-14 14:42:47.084498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.595 [2024-10-14 14:42:47.084508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.595 qpair failed and we were unable to recover it.
[identical entries repeat: posix.c:1055:posix_sock_create "connect() failed, errno = 111" followed by nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420", each ending "qpair failed and we were unable to recover it.", with timestamps advancing from 14:42:47.084698 through 14:42:47.119157]
00:29:06.598 [2024-10-14 14:42:47.119482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.598 [2024-10-14 14:42:47.119492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.598 qpair failed and we were unable to recover it. 00:29:06.598 [2024-10-14 14:42:47.119801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.598 [2024-10-14 14:42:47.119810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.598 qpair failed and we were unable to recover it. 00:29:06.598 [2024-10-14 14:42:47.120128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.598 [2024-10-14 14:42:47.120138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.598 qpair failed and we were unable to recover it. 00:29:06.598 [2024-10-14 14:42:47.120460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.598 [2024-10-14 14:42:47.120470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.598 qpair failed and we were unable to recover it. 00:29:06.598 [2024-10-14 14:42:47.120760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.598 [2024-10-14 14:42:47.120770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.598 qpair failed and we were unable to recover it. 
00:29:06.598 [2024-10-14 14:42:47.121077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.598 [2024-10-14 14:42:47.121088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.598 qpair failed and we were unable to recover it. 00:29:06.598 [2024-10-14 14:42:47.121392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.598 [2024-10-14 14:42:47.121402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.598 qpair failed and we were unable to recover it. 00:29:06.598 [2024-10-14 14:42:47.121716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.598 [2024-10-14 14:42:47.121726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.598 qpair failed and we were unable to recover it. 00:29:06.598 [2024-10-14 14:42:47.122006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.598 [2024-10-14 14:42:47.122015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.598 qpair failed and we were unable to recover it. 00:29:06.598 [2024-10-14 14:42:47.122315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.598 [2024-10-14 14:42:47.122325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.598 qpair failed and we were unable to recover it. 
00:29:06.598 [2024-10-14 14:42:47.122636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.598 [2024-10-14 14:42:47.122647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.598 qpair failed and we were unable to recover it. 00:29:06.598 [2024-10-14 14:42:47.122954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.598 [2024-10-14 14:42:47.122964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.598 qpair failed and we were unable to recover it. 00:29:06.598 [2024-10-14 14:42:47.123248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.598 [2024-10-14 14:42:47.123259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.598 qpair failed and we were unable to recover it. 00:29:06.598 [2024-10-14 14:42:47.123572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.598 [2024-10-14 14:42:47.123582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.598 qpair failed and we were unable to recover it. 00:29:06.598 [2024-10-14 14:42:47.123865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.598 [2024-10-14 14:42:47.123875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.598 qpair failed and we were unable to recover it. 
00:29:06.598 [2024-10-14 14:42:47.124208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.598 [2024-10-14 14:42:47.124219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.598 qpair failed and we were unable to recover it. 00:29:06.598 [2024-10-14 14:42:47.124498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.598 [2024-10-14 14:42:47.124507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.598 qpair failed and we were unable to recover it. 00:29:06.598 [2024-10-14 14:42:47.124818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.598 [2024-10-14 14:42:47.124828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.598 qpair failed and we were unable to recover it. 00:29:06.598 [2024-10-14 14:42:47.125135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.599 [2024-10-14 14:42:47.125146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.599 qpair failed and we were unable to recover it. 00:29:06.599 [2024-10-14 14:42:47.125471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.599 [2024-10-14 14:42:47.125483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.599 qpair failed and we were unable to recover it. 
00:29:06.599 [2024-10-14 14:42:47.125768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.599 [2024-10-14 14:42:47.125780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.599 qpair failed and we were unable to recover it. 00:29:06.599 [2024-10-14 14:42:47.126084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.599 [2024-10-14 14:42:47.126095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.599 qpair failed and we were unable to recover it. 00:29:06.599 [2024-10-14 14:42:47.126402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.599 [2024-10-14 14:42:47.126412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.599 qpair failed and we were unable to recover it. 00:29:06.599 [2024-10-14 14:42:47.126717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.599 [2024-10-14 14:42:47.126727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.599 qpair failed and we were unable to recover it. 00:29:06.599 [2024-10-14 14:42:47.127013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.599 [2024-10-14 14:42:47.127023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.599 qpair failed and we were unable to recover it. 
00:29:06.599 [2024-10-14 14:42:47.127231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.599 [2024-10-14 14:42:47.127242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.599 qpair failed and we were unable to recover it. 00:29:06.599 [2024-10-14 14:42:47.127557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.599 [2024-10-14 14:42:47.127567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.599 qpair failed and we were unable to recover it. 00:29:06.599 [2024-10-14 14:42:47.127861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.599 [2024-10-14 14:42:47.127871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.599 qpair failed and we were unable to recover it. 00:29:06.599 [2024-10-14 14:42:47.128110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.599 [2024-10-14 14:42:47.128122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.599 qpair failed and we were unable to recover it. 00:29:06.599 [2024-10-14 14:42:47.128281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.599 [2024-10-14 14:42:47.128291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.599 qpair failed and we were unable to recover it. 
00:29:06.599 [2024-10-14 14:42:47.128480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.599 [2024-10-14 14:42:47.128490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.599 qpair failed and we were unable to recover it. 00:29:06.599 [2024-10-14 14:42:47.128851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.599 [2024-10-14 14:42:47.128861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.599 qpair failed and we were unable to recover it. 00:29:06.599 [2024-10-14 14:42:47.129213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.599 [2024-10-14 14:42:47.129224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.599 qpair failed and we were unable to recover it. 00:29:06.599 [2024-10-14 14:42:47.129519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.599 [2024-10-14 14:42:47.129529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.599 qpair failed and we were unable to recover it. 00:29:06.599 [2024-10-14 14:42:47.129800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.599 [2024-10-14 14:42:47.129812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.599 qpair failed and we were unable to recover it. 
00:29:06.599 [2024-10-14 14:42:47.129966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.599 [2024-10-14 14:42:47.129978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.599 qpair failed and we were unable to recover it. 00:29:06.599 [2024-10-14 14:42:47.130335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.599 [2024-10-14 14:42:47.130346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.599 qpair failed and we were unable to recover it. 00:29:06.599 [2024-10-14 14:42:47.130654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.599 [2024-10-14 14:42:47.130664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.599 qpair failed and we were unable to recover it. 00:29:06.599 [2024-10-14 14:42:47.130974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.599 [2024-10-14 14:42:47.130984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.599 qpair failed and we were unable to recover it. 00:29:06.599 [2024-10-14 14:42:47.131303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.599 [2024-10-14 14:42:47.131313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.599 qpair failed and we were unable to recover it. 
00:29:06.599 [2024-10-14 14:42:47.131641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.599 [2024-10-14 14:42:47.131651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.599 qpair failed and we were unable to recover it. 00:29:06.599 [2024-10-14 14:42:47.132005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.599 [2024-10-14 14:42:47.132015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.599 qpair failed and we were unable to recover it. 00:29:06.599 [2024-10-14 14:42:47.132229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.599 [2024-10-14 14:42:47.132240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.599 qpair failed and we were unable to recover it. 00:29:06.599 [2024-10-14 14:42:47.132535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.599 [2024-10-14 14:42:47.132545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.599 qpair failed and we were unable to recover it. 00:29:06.599 [2024-10-14 14:42:47.132849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.599 [2024-10-14 14:42:47.132859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.599 qpair failed and we were unable to recover it. 
00:29:06.599 [2024-10-14 14:42:47.133157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.599 [2024-10-14 14:42:47.133167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.599 qpair failed and we were unable to recover it. 00:29:06.599 [2024-10-14 14:42:47.133456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.599 [2024-10-14 14:42:47.133466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.599 qpair failed and we were unable to recover it. 00:29:06.599 [2024-10-14 14:42:47.133775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.599 [2024-10-14 14:42:47.133785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.599 qpair failed and we were unable to recover it. 00:29:06.599 [2024-10-14 14:42:47.134075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.599 [2024-10-14 14:42:47.134085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.599 qpair failed and we were unable to recover it. 00:29:06.599 [2024-10-14 14:42:47.134398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.599 [2024-10-14 14:42:47.134408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.599 qpair failed and we were unable to recover it. 
00:29:06.599 [2024-10-14 14:42:47.134719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.599 [2024-10-14 14:42:47.134729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.599 qpair failed and we were unable to recover it. 00:29:06.599 [2024-10-14 14:42:47.135119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.599 [2024-10-14 14:42:47.135129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.599 qpair failed and we were unable to recover it. 00:29:06.599 [2024-10-14 14:42:47.135491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.599 [2024-10-14 14:42:47.135501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.599 qpair failed and we were unable to recover it. 00:29:06.599 [2024-10-14 14:42:47.135800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.599 [2024-10-14 14:42:47.135811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.599 qpair failed and we were unable to recover it. 00:29:06.599 [2024-10-14 14:42:47.136123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.599 [2024-10-14 14:42:47.136134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.599 qpair failed and we were unable to recover it. 
00:29:06.599 [2024-10-14 14:42:47.136421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.599 [2024-10-14 14:42:47.136431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.599 qpair failed and we were unable to recover it. 00:29:06.599 [2024-10-14 14:42:47.136821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.599 [2024-10-14 14:42:47.136831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.599 qpair failed and we were unable to recover it. 00:29:06.599 [2024-10-14 14:42:47.137135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.599 [2024-10-14 14:42:47.137147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.599 qpair failed and we were unable to recover it. 00:29:06.599 [2024-10-14 14:42:47.137469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.599 [2024-10-14 14:42:47.137480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.599 qpair failed and we were unable to recover it. 00:29:06.599 [2024-10-14 14:42:47.137769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.600 [2024-10-14 14:42:47.137779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.600 qpair failed and we were unable to recover it. 
00:29:06.600 [2024-10-14 14:42:47.138085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.600 [2024-10-14 14:42:47.138095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.600 qpair failed and we were unable to recover it. 00:29:06.600 [2024-10-14 14:42:47.138406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.600 [2024-10-14 14:42:47.138418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.600 qpair failed and we were unable to recover it. 00:29:06.600 [2024-10-14 14:42:47.138714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.600 [2024-10-14 14:42:47.138724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.600 qpair failed and we were unable to recover it. 00:29:06.600 [2024-10-14 14:42:47.139047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.600 [2024-10-14 14:42:47.139057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.600 qpair failed and we were unable to recover it. 00:29:06.600 [2024-10-14 14:42:47.139374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.600 [2024-10-14 14:42:47.139384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.600 qpair failed and we were unable to recover it. 
00:29:06.600 [2024-10-14 14:42:47.139717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.600 [2024-10-14 14:42:47.139728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.600 qpair failed and we were unable to recover it. 00:29:06.600 [2024-10-14 14:42:47.140053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.600 [2024-10-14 14:42:47.140068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.600 qpair failed and we were unable to recover it. 00:29:06.600 [2024-10-14 14:42:47.140439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.600 [2024-10-14 14:42:47.140449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.600 qpair failed and we were unable to recover it. 00:29:06.600 [2024-10-14 14:42:47.140735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.600 [2024-10-14 14:42:47.140744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.600 qpair failed and we were unable to recover it. 00:29:06.600 [2024-10-14 14:42:47.141029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.600 [2024-10-14 14:42:47.141039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.600 qpair failed and we were unable to recover it. 
00:29:06.600 [2024-10-14 14:42:47.141223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.600 [2024-10-14 14:42:47.141233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.600 qpair failed and we were unable to recover it. 00:29:06.600 [2024-10-14 14:42:47.141403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.600 [2024-10-14 14:42:47.141414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.600 qpair failed and we were unable to recover it. 00:29:06.600 [2024-10-14 14:42:47.141725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.600 [2024-10-14 14:42:47.141736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.600 qpair failed and we were unable to recover it. 00:29:06.600 [2024-10-14 14:42:47.142041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.600 [2024-10-14 14:42:47.142051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.600 qpair failed and we were unable to recover it. 00:29:06.600 [2024-10-14 14:42:47.142330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.600 [2024-10-14 14:42:47.142341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.600 qpair failed and we were unable to recover it. 
00:29:06.600 [2024-10-14 14:42:47.142664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.600 [2024-10-14 14:42:47.142675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.600 qpair failed and we were unable to recover it.
[The same three-line failure sequence — connect() failed with errno = 111 (ECONNREFUSED), sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420, qpair failed and unrecoverable — repeats continuously from 14:42:47.142983 through 14:42:47.177232; duplicate entries omitted.]
00:29:06.603 [2024-10-14 14:42:47.177419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.603 [2024-10-14 14:42:47.177429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.603 qpair failed and we were unable to recover it. 00:29:06.603 [2024-10-14 14:42:47.177709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.603 [2024-10-14 14:42:47.177719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.603 qpair failed and we were unable to recover it. 00:29:06.603 [2024-10-14 14:42:47.178025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.603 [2024-10-14 14:42:47.178035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.603 qpair failed and we were unable to recover it. 00:29:06.603 [2024-10-14 14:42:47.178331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.603 [2024-10-14 14:42:47.178342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.603 qpair failed and we were unable to recover it. 00:29:06.603 [2024-10-14 14:42:47.178684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.603 [2024-10-14 14:42:47.178695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.603 qpair failed and we were unable to recover it. 
00:29:06.603 [2024-10-14 14:42:47.178997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.603 [2024-10-14 14:42:47.179007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.603 qpair failed and we were unable to recover it. 00:29:06.603 [2024-10-14 14:42:47.179319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.603 [2024-10-14 14:42:47.179329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.603 qpair failed and we were unable to recover it. 00:29:06.603 [2024-10-14 14:42:47.179705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.603 [2024-10-14 14:42:47.179717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.603 qpair failed and we were unable to recover it. 00:29:06.603 [2024-10-14 14:42:47.180022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.603 [2024-10-14 14:42:47.180032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.603 qpair failed and we were unable to recover it. 00:29:06.603 [2024-10-14 14:42:47.180320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.603 [2024-10-14 14:42:47.180330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.603 qpair failed and we were unable to recover it. 
00:29:06.603 [2024-10-14 14:42:47.180647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.603 [2024-10-14 14:42:47.180657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.603 qpair failed and we were unable to recover it. 00:29:06.603 [2024-10-14 14:42:47.180972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.603 [2024-10-14 14:42:47.180982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.603 qpair failed and we were unable to recover it. 00:29:06.603 [2024-10-14 14:42:47.181287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.603 [2024-10-14 14:42:47.181299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.603 qpair failed and we were unable to recover it. 00:29:06.603 [2024-10-14 14:42:47.181613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.603 [2024-10-14 14:42:47.181625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.603 qpair failed and we were unable to recover it. 00:29:06.603 [2024-10-14 14:42:47.181879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.603 [2024-10-14 14:42:47.181890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.603 qpair failed and we were unable to recover it. 
00:29:06.603 [2024-10-14 14:42:47.182060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.603 [2024-10-14 14:42:47.182076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.603 qpair failed and we were unable to recover it. 00:29:06.603 [2024-10-14 14:42:47.182377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.603 [2024-10-14 14:42:47.182387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.603 qpair failed and we were unable to recover it. 00:29:06.603 [2024-10-14 14:42:47.182688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.603 [2024-10-14 14:42:47.182698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.603 qpair failed and we were unable to recover it. 00:29:06.603 [2024-10-14 14:42:47.182880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.603 [2024-10-14 14:42:47.182891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.603 qpair failed and we were unable to recover it. 00:29:06.603 [2024-10-14 14:42:47.183216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.603 [2024-10-14 14:42:47.183226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.603 qpair failed and we were unable to recover it. 
00:29:06.603 [2024-10-14 14:42:47.183530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.603 [2024-10-14 14:42:47.183540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.603 qpair failed and we were unable to recover it. 00:29:06.603 [2024-10-14 14:42:47.183858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.603 [2024-10-14 14:42:47.183869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.603 qpair failed and we were unable to recover it. 00:29:06.603 [2024-10-14 14:42:47.184108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.603 [2024-10-14 14:42:47.184118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.603 qpair failed and we were unable to recover it. 00:29:06.603 [2024-10-14 14:42:47.184412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.603 [2024-10-14 14:42:47.184422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.603 qpair failed and we were unable to recover it. 00:29:06.603 [2024-10-14 14:42:47.184733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.603 [2024-10-14 14:42:47.184743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.603 qpair failed and we were unable to recover it. 
00:29:06.603 [2024-10-14 14:42:47.185031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.603 [2024-10-14 14:42:47.185041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.603 qpair failed and we were unable to recover it. 00:29:06.603 [2024-10-14 14:42:47.185352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.603 [2024-10-14 14:42:47.185365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.603 qpair failed and we were unable to recover it. 00:29:06.603 [2024-10-14 14:42:47.185544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.603 [2024-10-14 14:42:47.185554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.603 qpair failed and we were unable to recover it. 00:29:06.603 [2024-10-14 14:42:47.185858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.603 [2024-10-14 14:42:47.185872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.604 qpair failed and we were unable to recover it. 00:29:06.604 [2024-10-14 14:42:47.186196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.604 [2024-10-14 14:42:47.186207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.604 qpair failed and we were unable to recover it. 
00:29:06.604 [2024-10-14 14:42:47.186523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.604 [2024-10-14 14:42:47.186534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.604 qpair failed and we were unable to recover it. 00:29:06.604 [2024-10-14 14:42:47.186832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.604 [2024-10-14 14:42:47.186841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.604 qpair failed and we were unable to recover it. 00:29:06.604 [2024-10-14 14:42:47.187122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.604 [2024-10-14 14:42:47.187132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.604 qpair failed and we were unable to recover it. 00:29:06.604 [2024-10-14 14:42:47.187434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.604 [2024-10-14 14:42:47.187444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.604 qpair failed and we were unable to recover it. 00:29:06.604 [2024-10-14 14:42:47.187727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.604 [2024-10-14 14:42:47.187736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.604 qpair failed and we were unable to recover it. 
00:29:06.604 [2024-10-14 14:42:47.188035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.604 [2024-10-14 14:42:47.188045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.604 qpair failed and we were unable to recover it. 00:29:06.604 [2024-10-14 14:42:47.188369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.604 [2024-10-14 14:42:47.188379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.604 qpair failed and we were unable to recover it. 00:29:06.604 [2024-10-14 14:42:47.188669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.604 [2024-10-14 14:42:47.188680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.604 qpair failed and we were unable to recover it. 00:29:06.604 [2024-10-14 14:42:47.188989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.604 [2024-10-14 14:42:47.189000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.604 qpair failed and we were unable to recover it. 00:29:06.604 [2024-10-14 14:42:47.189310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.604 [2024-10-14 14:42:47.189321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.604 qpair failed and we were unable to recover it. 
00:29:06.604 [2024-10-14 14:42:47.189683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.604 [2024-10-14 14:42:47.189693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.604 qpair failed and we were unable to recover it. 00:29:06.604 [2024-10-14 14:42:47.189853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.604 [2024-10-14 14:42:47.189864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.604 qpair failed and we were unable to recover it. 00:29:06.604 [2024-10-14 14:42:47.190231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.604 [2024-10-14 14:42:47.190241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.604 qpair failed and we were unable to recover it. 00:29:06.604 [2024-10-14 14:42:47.190402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.604 [2024-10-14 14:42:47.190413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.604 qpair failed and we were unable to recover it. 00:29:06.604 [2024-10-14 14:42:47.190623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.604 [2024-10-14 14:42:47.190634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.604 qpair failed and we were unable to recover it. 
00:29:06.604 [2024-10-14 14:42:47.190828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.604 [2024-10-14 14:42:47.190839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.604 qpair failed and we were unable to recover it. 00:29:06.604 [2024-10-14 14:42:47.191134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.604 [2024-10-14 14:42:47.191144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.604 qpair failed and we were unable to recover it. 00:29:06.604 [2024-10-14 14:42:47.191485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.604 [2024-10-14 14:42:47.191495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.604 qpair failed and we were unable to recover it. 00:29:06.604 [2024-10-14 14:42:47.191666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.604 [2024-10-14 14:42:47.191682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.604 qpair failed and we were unable to recover it. 00:29:06.604 [2024-10-14 14:42:47.191960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.604 [2024-10-14 14:42:47.191971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.604 qpair failed and we were unable to recover it. 
00:29:06.604 [2024-10-14 14:42:47.192350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.604 [2024-10-14 14:42:47.192361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.604 qpair failed and we were unable to recover it. 00:29:06.604 [2024-10-14 14:42:47.192564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.604 [2024-10-14 14:42:47.192575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.604 qpair failed and we were unable to recover it. 00:29:06.604 [2024-10-14 14:42:47.192885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.604 [2024-10-14 14:42:47.192896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.604 qpair failed and we were unable to recover it. 00:29:06.604 [2024-10-14 14:42:47.193205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.604 [2024-10-14 14:42:47.193215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.604 qpair failed and we were unable to recover it. 00:29:06.604 [2024-10-14 14:42:47.193418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.604 [2024-10-14 14:42:47.193429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.604 qpair failed and we were unable to recover it. 
00:29:06.604 [2024-10-14 14:42:47.193747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.604 [2024-10-14 14:42:47.193756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.604 qpair failed and we were unable to recover it. 00:29:06.604 [2024-10-14 14:42:47.194152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.604 [2024-10-14 14:42:47.194162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.604 qpair failed and we were unable to recover it. 00:29:06.604 [2024-10-14 14:42:47.194449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.604 [2024-10-14 14:42:47.194459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.604 qpair failed and we were unable to recover it. 00:29:06.604 [2024-10-14 14:42:47.194763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.604 [2024-10-14 14:42:47.194773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.604 qpair failed and we were unable to recover it. 00:29:06.604 [2024-10-14 14:42:47.195072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.604 [2024-10-14 14:42:47.195082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.604 qpair failed and we were unable to recover it. 
00:29:06.604 [2024-10-14 14:42:47.195273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.604 [2024-10-14 14:42:47.195283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.604 qpair failed and we were unable to recover it. 00:29:06.604 [2024-10-14 14:42:47.195625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.604 [2024-10-14 14:42:47.195635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.604 qpair failed and we were unable to recover it. 00:29:06.604 [2024-10-14 14:42:47.196030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.604 [2024-10-14 14:42:47.196041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.604 qpair failed and we were unable to recover it. 00:29:06.604 [2024-10-14 14:42:47.196339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.604 [2024-10-14 14:42:47.196350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.604 qpair failed and we were unable to recover it. 00:29:06.604 [2024-10-14 14:42:47.196733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.604 [2024-10-14 14:42:47.196744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.604 qpair failed and we were unable to recover it. 
00:29:06.604 [2024-10-14 14:42:47.197043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.604 [2024-10-14 14:42:47.197054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.604 qpair failed and we were unable to recover it. 00:29:06.604 [2024-10-14 14:42:47.197369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.604 [2024-10-14 14:42:47.197380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.604 qpair failed and we were unable to recover it. 00:29:06.604 [2024-10-14 14:42:47.197666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.604 [2024-10-14 14:42:47.197676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.604 qpair failed and we were unable to recover it. 00:29:06.604 [2024-10-14 14:42:47.197987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.604 [2024-10-14 14:42:47.197997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.604 qpair failed and we were unable to recover it. 00:29:06.604 [2024-10-14 14:42:47.198345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.604 [2024-10-14 14:42:47.198355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.605 qpair failed and we were unable to recover it. 
00:29:06.605 [2024-10-14 14:42:47.198687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.605 [2024-10-14 14:42:47.198697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.605 qpair failed and we were unable to recover it. 00:29:06.605 [2024-10-14 14:42:47.198998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.605 [2024-10-14 14:42:47.199008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.605 qpair failed and we were unable to recover it. 00:29:06.605 [2024-10-14 14:42:47.199322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.605 [2024-10-14 14:42:47.199332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.605 qpair failed and we were unable to recover it. 00:29:06.605 [2024-10-14 14:42:47.199635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.605 [2024-10-14 14:42:47.199645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.605 qpair failed and we were unable to recover it. 00:29:06.605 [2024-10-14 14:42:47.199928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.605 [2024-10-14 14:42:47.199938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.605 qpair failed and we were unable to recover it. 
00:29:06.605 [2024-10-14 14:42:47.200233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.605 [2024-10-14 14:42:47.200246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.605 qpair failed and we were unable to recover it.
[... the same three-line error pattern (posix.c:1055:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats verbatim, differing only in timestamps, from 14:42:47.200520 through 14:42:47.235441 ...]
00:29:06.608 [2024-10-14 14:42:47.235754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.608 [2024-10-14 14:42:47.235765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.608 qpair failed and we were unable to recover it. 00:29:06.608 [2024-10-14 14:42:47.236048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.608 [2024-10-14 14:42:47.236058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.608 qpair failed and we were unable to recover it. 00:29:06.608 [2024-10-14 14:42:47.236371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.608 [2024-10-14 14:42:47.236381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.608 qpair failed and we were unable to recover it. 00:29:06.608 [2024-10-14 14:42:47.236546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.608 [2024-10-14 14:42:47.236557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.608 qpair failed and we were unable to recover it. 00:29:06.608 [2024-10-14 14:42:47.236875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.608 [2024-10-14 14:42:47.236885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.608 qpair failed and we were unable to recover it. 
00:29:06.608 [2024-10-14 14:42:47.237170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.608 [2024-10-14 14:42:47.237180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.608 qpair failed and we were unable to recover it. 00:29:06.608 [2024-10-14 14:42:47.237489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.608 [2024-10-14 14:42:47.237499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.608 qpair failed and we were unable to recover it. 00:29:06.608 [2024-10-14 14:42:47.237782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.608 [2024-10-14 14:42:47.237793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.608 qpair failed and we were unable to recover it. 00:29:06.608 [2024-10-14 14:42:47.238103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.608 [2024-10-14 14:42:47.238115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.608 qpair failed and we were unable to recover it. 00:29:06.608 [2024-10-14 14:42:47.238422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.608 [2024-10-14 14:42:47.238432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.608 qpair failed and we were unable to recover it. 
00:29:06.608 [2024-10-14 14:42:47.238729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.608 [2024-10-14 14:42:47.238739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.608 qpair failed and we were unable to recover it. 00:29:06.608 [2024-10-14 14:42:47.239048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.608 [2024-10-14 14:42:47.239057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.608 qpair failed and we were unable to recover it. 00:29:06.608 [2024-10-14 14:42:47.239368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.608 [2024-10-14 14:42:47.239379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.608 qpair failed and we were unable to recover it. 00:29:06.608 [2024-10-14 14:42:47.239670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.608 [2024-10-14 14:42:47.239680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.608 qpair failed and we were unable to recover it. 00:29:06.608 [2024-10-14 14:42:47.239989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.608 [2024-10-14 14:42:47.239999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.608 qpair failed and we were unable to recover it. 
00:29:06.608 [2024-10-14 14:42:47.240314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.608 [2024-10-14 14:42:47.240325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.608 qpair failed and we were unable to recover it. 00:29:06.608 [2024-10-14 14:42:47.240620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.608 [2024-10-14 14:42:47.240630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.608 qpair failed and we were unable to recover it. 00:29:06.608 [2024-10-14 14:42:47.240907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.608 [2024-10-14 14:42:47.240917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.608 qpair failed and we were unable to recover it. 00:29:06.608 [2024-10-14 14:42:47.241274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.608 [2024-10-14 14:42:47.241285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.608 qpair failed and we were unable to recover it. 00:29:06.608 [2024-10-14 14:42:47.241613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.608 [2024-10-14 14:42:47.241624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.608 qpair failed and we were unable to recover it. 
00:29:06.608 [2024-10-14 14:42:47.241927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.608 [2024-10-14 14:42:47.241938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.608 qpair failed and we were unable to recover it. 00:29:06.608 [2024-10-14 14:42:47.242231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.608 [2024-10-14 14:42:47.242241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.608 qpair failed and we were unable to recover it. 00:29:06.608 [2024-10-14 14:42:47.242523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.608 [2024-10-14 14:42:47.242533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.608 qpair failed and we were unable to recover it. 00:29:06.608 [2024-10-14 14:42:47.242891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.608 [2024-10-14 14:42:47.242901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.608 qpair failed and we were unable to recover it. 00:29:06.608 [2024-10-14 14:42:47.243106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.608 [2024-10-14 14:42:47.243116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.608 qpair failed and we were unable to recover it. 
00:29:06.608 [2024-10-14 14:42:47.243414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.608 [2024-10-14 14:42:47.243424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.608 qpair failed and we were unable to recover it. 00:29:06.608 [2024-10-14 14:42:47.243721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.608 [2024-10-14 14:42:47.243730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.608 qpair failed and we were unable to recover it. 00:29:06.608 [2024-10-14 14:42:47.244048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.608 [2024-10-14 14:42:47.244058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.608 qpair failed and we were unable to recover it. 00:29:06.608 [2024-10-14 14:42:47.244370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.608 [2024-10-14 14:42:47.244380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.608 qpair failed and we were unable to recover it. 00:29:06.608 [2024-10-14 14:42:47.244671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.608 [2024-10-14 14:42:47.244680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.608 qpair failed and we were unable to recover it. 
00:29:06.608 [2024-10-14 14:42:47.244986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.608 [2024-10-14 14:42:47.244996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.608 qpair failed and we were unable to recover it. 00:29:06.608 [2024-10-14 14:42:47.245270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.608 [2024-10-14 14:42:47.245281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.608 qpair failed and we were unable to recover it. 00:29:06.608 [2024-10-14 14:42:47.245612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.608 [2024-10-14 14:42:47.245622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.608 qpair failed and we were unable to recover it. 00:29:06.608 [2024-10-14 14:42:47.245906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.608 [2024-10-14 14:42:47.245916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.608 qpair failed and we were unable to recover it. 00:29:06.608 [2024-10-14 14:42:47.246127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.609 [2024-10-14 14:42:47.246138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.609 qpair failed and we were unable to recover it. 
00:29:06.609 [2024-10-14 14:42:47.246417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.609 [2024-10-14 14:42:47.246428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.609 qpair failed and we were unable to recover it. 00:29:06.609 [2024-10-14 14:42:47.246597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.609 [2024-10-14 14:42:47.246608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.609 qpair failed and we were unable to recover it. 00:29:06.609 [2024-10-14 14:42:47.246921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.609 [2024-10-14 14:42:47.246931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.609 qpair failed and we were unable to recover it. 00:29:06.609 [2024-10-14 14:42:47.247095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.609 [2024-10-14 14:42:47.247106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.609 qpair failed and we were unable to recover it. 00:29:06.609 [2024-10-14 14:42:47.247399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.609 [2024-10-14 14:42:47.247409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.609 qpair failed and we were unable to recover it. 
00:29:06.609 [2024-10-14 14:42:47.247741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.609 [2024-10-14 14:42:47.247750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.609 qpair failed and we were unable to recover it. 00:29:06.609 [2024-10-14 14:42:47.248123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.609 [2024-10-14 14:42:47.248134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.609 qpair failed and we were unable to recover it. 00:29:06.609 [2024-10-14 14:42:47.248417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.609 [2024-10-14 14:42:47.248428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.609 qpair failed and we were unable to recover it. 00:29:06.609 [2024-10-14 14:42:47.248642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.609 [2024-10-14 14:42:47.248652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.609 qpair failed and we were unable to recover it. 00:29:06.609 [2024-10-14 14:42:47.248968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.609 [2024-10-14 14:42:47.248979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.609 qpair failed and we were unable to recover it. 
00:29:06.609 [2024-10-14 14:42:47.249180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.609 [2024-10-14 14:42:47.249190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.609 qpair failed and we were unable to recover it. 00:29:06.609 [2024-10-14 14:42:47.249504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.609 [2024-10-14 14:42:47.249514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.609 qpair failed and we were unable to recover it. 00:29:06.609 [2024-10-14 14:42:47.249819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.609 [2024-10-14 14:42:47.249830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.609 qpair failed and we were unable to recover it. 00:29:06.609 [2024-10-14 14:42:47.250015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.609 [2024-10-14 14:42:47.250025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.609 qpair failed and we were unable to recover it. 00:29:06.609 [2024-10-14 14:42:47.250326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.609 [2024-10-14 14:42:47.250336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.609 qpair failed and we were unable to recover it. 
00:29:06.609 [2024-10-14 14:42:47.250719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.609 [2024-10-14 14:42:47.250729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.609 qpair failed and we were unable to recover it. 00:29:06.609 [2024-10-14 14:42:47.251019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.609 [2024-10-14 14:42:47.251030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.609 qpair failed and we were unable to recover it. 00:29:06.609 [2024-10-14 14:42:47.251327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.609 [2024-10-14 14:42:47.251337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.609 qpair failed and we were unable to recover it. 00:29:06.609 [2024-10-14 14:42:47.251634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.609 [2024-10-14 14:42:47.251645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.609 qpair failed and we were unable to recover it. 00:29:06.609 [2024-10-14 14:42:47.252004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.609 [2024-10-14 14:42:47.252014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.609 qpair failed and we were unable to recover it. 
00:29:06.609 [2024-10-14 14:42:47.252316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.609 [2024-10-14 14:42:47.252327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.609 qpair failed and we were unable to recover it. 00:29:06.609 [2024-10-14 14:42:47.252528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.609 [2024-10-14 14:42:47.252538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.609 qpair failed and we were unable to recover it. 00:29:06.609 [2024-10-14 14:42:47.252869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.609 [2024-10-14 14:42:47.252880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.609 qpair failed and we were unable to recover it. 00:29:06.609 [2024-10-14 14:42:47.253184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.609 [2024-10-14 14:42:47.253195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.609 qpair failed and we were unable to recover it. 00:29:06.609 [2024-10-14 14:42:47.253490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.609 [2024-10-14 14:42:47.253500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.609 qpair failed and we were unable to recover it. 
00:29:06.609 [2024-10-14 14:42:47.253805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.609 [2024-10-14 14:42:47.253814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.609 qpair failed and we were unable to recover it. 00:29:06.609 [2024-10-14 14:42:47.254099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.609 [2024-10-14 14:42:47.254109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.609 qpair failed and we were unable to recover it. 00:29:06.609 [2024-10-14 14:42:47.254416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.609 [2024-10-14 14:42:47.254430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.609 qpair failed and we were unable to recover it. 00:29:06.609 [2024-10-14 14:42:47.254748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.609 [2024-10-14 14:42:47.254758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.609 qpair failed and we were unable to recover it. 00:29:06.609 [2024-10-14 14:42:47.255080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.609 [2024-10-14 14:42:47.255092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.609 qpair failed and we were unable to recover it. 
00:29:06.609 [2024-10-14 14:42:47.255405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.609 [2024-10-14 14:42:47.255414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.609 qpair failed and we were unable to recover it. 00:29:06.609 [2024-10-14 14:42:47.255717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.609 [2024-10-14 14:42:47.255727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.609 qpair failed and we were unable to recover it. 00:29:06.609 [2024-10-14 14:42:47.256036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.609 [2024-10-14 14:42:47.256046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.609 qpair failed and we were unable to recover it. 00:29:06.609 [2024-10-14 14:42:47.256357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.609 [2024-10-14 14:42:47.256367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.609 qpair failed and we were unable to recover it. 00:29:06.609 [2024-10-14 14:42:47.256743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.609 [2024-10-14 14:42:47.256753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.609 qpair failed and we were unable to recover it. 
00:29:06.609 [2024-10-14 14:42:47.257041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.609 [2024-10-14 14:42:47.257051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.609 qpair failed and we were unable to recover it. 00:29:06.609 [2024-10-14 14:42:47.257417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.609 [2024-10-14 14:42:47.257428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.609 qpair failed and we were unable to recover it. 00:29:06.609 [2024-10-14 14:42:47.257733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.609 [2024-10-14 14:42:47.257742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.609 qpair failed and we were unable to recover it. 00:29:06.609 [2024-10-14 14:42:47.258016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.609 [2024-10-14 14:42:47.258027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.609 qpair failed and we were unable to recover it. 00:29:06.609 [2024-10-14 14:42:47.258333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.609 [2024-10-14 14:42:47.258344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.609 qpair failed and we were unable to recover it. 
00:29:06.609 [2024-10-14 14:42:47.258599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.610 [2024-10-14 14:42:47.258609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.610 qpair failed and we were unable to recover it. 
[... identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." entries for tqpair=0x8de550 (addr=10.0.0.2, port=4420) repeated 113 more times, timestamps 14:42:47.258889 through 14:42:47.293243 ...]
00:29:06.612 [2024-10-14 14:42:47.293533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.612 [2024-10-14 14:42:47.293543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.612 qpair failed and we were unable to recover it. 
00:29:06.612 [2024-10-14 14:42:47.293857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.612 [2024-10-14 14:42:47.293868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.612 qpair failed and we were unable to recover it. 00:29:06.612 [2024-10-14 14:42:47.294181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.612 [2024-10-14 14:42:47.294192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.612 qpair failed and we were unable to recover it. 00:29:06.612 [2024-10-14 14:42:47.294504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.612 [2024-10-14 14:42:47.294514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.612 qpair failed and we were unable to recover it. 00:29:06.612 [2024-10-14 14:42:47.294801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.613 [2024-10-14 14:42:47.294811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.613 qpair failed and we were unable to recover it. 00:29:06.613 [2024-10-14 14:42:47.295127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.613 [2024-10-14 14:42:47.295137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.613 qpair failed and we were unable to recover it. 
00:29:06.613 [2024-10-14 14:42:47.295430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.613 [2024-10-14 14:42:47.295440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.613 qpair failed and we were unable to recover it. 00:29:06.613 [2024-10-14 14:42:47.295744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.613 [2024-10-14 14:42:47.295753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.613 qpair failed and we were unable to recover it. 00:29:06.613 [2024-10-14 14:42:47.296037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.613 [2024-10-14 14:42:47.296047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.613 qpair failed and we were unable to recover it. 00:29:06.613 [2024-10-14 14:42:47.296375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.613 [2024-10-14 14:42:47.296389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.613 qpair failed and we were unable to recover it. 00:29:06.613 [2024-10-14 14:42:47.296687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.613 [2024-10-14 14:42:47.296698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.613 qpair failed and we were unable to recover it. 
00:29:06.613 [2024-10-14 14:42:47.297000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.613 [2024-10-14 14:42:47.297011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.613 qpair failed and we were unable to recover it. 00:29:06.613 [2024-10-14 14:42:47.297313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.613 [2024-10-14 14:42:47.297323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.613 qpair failed and we were unable to recover it. 00:29:06.613 [2024-10-14 14:42:47.297634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.613 [2024-10-14 14:42:47.297644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.613 qpair failed and we were unable to recover it. 00:29:06.613 [2024-10-14 14:42:47.297937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.613 [2024-10-14 14:42:47.297946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.613 qpair failed and we were unable to recover it. 00:29:06.613 [2024-10-14 14:42:47.298226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.613 [2024-10-14 14:42:47.298237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.613 qpair failed and we were unable to recover it. 
00:29:06.613 [2024-10-14 14:42:47.298525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.613 [2024-10-14 14:42:47.298536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.613 qpair failed and we were unable to recover it. 00:29:06.613 [2024-10-14 14:42:47.298718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.613 [2024-10-14 14:42:47.298727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.613 qpair failed and we were unable to recover it. 00:29:06.613 [2024-10-14 14:42:47.299071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.613 [2024-10-14 14:42:47.299082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.613 qpair failed and we were unable to recover it. 00:29:06.613 [2024-10-14 14:42:47.299366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.613 [2024-10-14 14:42:47.299375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.613 qpair failed and we were unable to recover it. 00:29:06.613 [2024-10-14 14:42:47.299631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.613 [2024-10-14 14:42:47.299641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.613 qpair failed and we were unable to recover it. 
00:29:06.613 [2024-10-14 14:42:47.299961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.613 [2024-10-14 14:42:47.299971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.613 qpair failed and we were unable to recover it. 00:29:06.613 [2024-10-14 14:42:47.300250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.613 [2024-10-14 14:42:47.300260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.613 qpair failed and we were unable to recover it. 00:29:06.613 [2024-10-14 14:42:47.300556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.613 [2024-10-14 14:42:47.300567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.613 qpair failed and we were unable to recover it. 00:29:06.613 [2024-10-14 14:42:47.300869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.613 [2024-10-14 14:42:47.300879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.613 qpair failed and we were unable to recover it. 00:29:06.613 [2024-10-14 14:42:47.301183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.613 [2024-10-14 14:42:47.301193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.613 qpair failed and we were unable to recover it. 
00:29:06.613 [2024-10-14 14:42:47.301505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.613 [2024-10-14 14:42:47.301514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.613 qpair failed and we were unable to recover it. 00:29:06.890 [2024-10-14 14:42:47.301791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.890 [2024-10-14 14:42:47.301802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.890 qpair failed and we were unable to recover it. 00:29:06.890 [2024-10-14 14:42:47.302118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.890 [2024-10-14 14:42:47.302128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.890 qpair failed and we were unable to recover it. 00:29:06.890 [2024-10-14 14:42:47.302456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.890 [2024-10-14 14:42:47.302468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.890 qpair failed and we were unable to recover it. 00:29:06.890 [2024-10-14 14:42:47.302795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.890 [2024-10-14 14:42:47.302805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.890 qpair failed and we were unable to recover it. 
00:29:06.890 [2024-10-14 14:42:47.303118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.890 [2024-10-14 14:42:47.303129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.890 qpair failed and we were unable to recover it. 00:29:06.890 [2024-10-14 14:42:47.303453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.890 [2024-10-14 14:42:47.303462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.890 qpair failed and we were unable to recover it. 00:29:06.890 [2024-10-14 14:42:47.303753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.890 [2024-10-14 14:42:47.303763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.890 qpair failed and we were unable to recover it. 00:29:06.890 [2024-10-14 14:42:47.304090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.890 [2024-10-14 14:42:47.304101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.890 qpair failed and we were unable to recover it. 00:29:06.890 [2024-10-14 14:42:47.304392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.890 [2024-10-14 14:42:47.304402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.890 qpair failed and we were unable to recover it. 
00:29:06.890 [2024-10-14 14:42:47.304589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.890 [2024-10-14 14:42:47.304599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.890 qpair failed and we were unable to recover it. 00:29:06.890 [2024-10-14 14:42:47.304922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.890 [2024-10-14 14:42:47.304933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.890 qpair failed and we were unable to recover it. 00:29:06.890 [2024-10-14 14:42:47.305219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.890 [2024-10-14 14:42:47.305229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.890 qpair failed and we were unable to recover it. 00:29:06.890 [2024-10-14 14:42:47.305512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.890 [2024-10-14 14:42:47.305522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.890 qpair failed and we were unable to recover it. 00:29:06.890 [2024-10-14 14:42:47.305818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.890 [2024-10-14 14:42:47.305827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.890 qpair failed and we were unable to recover it. 
00:29:06.890 [2024-10-14 14:42:47.306134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.890 [2024-10-14 14:42:47.306145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.890 qpair failed and we were unable to recover it. 00:29:06.890 [2024-10-14 14:42:47.306530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.890 [2024-10-14 14:42:47.306540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.890 qpair failed and we were unable to recover it. 00:29:06.890 [2024-10-14 14:42:47.306827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.890 [2024-10-14 14:42:47.306836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.890 qpair failed and we were unable to recover it. 00:29:06.890 [2024-10-14 14:42:47.307118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.890 [2024-10-14 14:42:47.307129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.890 qpair failed and we were unable to recover it. 00:29:06.890 [2024-10-14 14:42:47.307312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.890 [2024-10-14 14:42:47.307323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.890 qpair failed and we were unable to recover it. 
00:29:06.890 [2024-10-14 14:42:47.307606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.890 [2024-10-14 14:42:47.307616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.890 qpair failed and we were unable to recover it. 00:29:06.891 [2024-10-14 14:42:47.307863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.891 [2024-10-14 14:42:47.307873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.891 qpair failed and we were unable to recover it. 00:29:06.891 [2024-10-14 14:42:47.308175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.891 [2024-10-14 14:42:47.308185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.891 qpair failed and we were unable to recover it. 00:29:06.891 [2024-10-14 14:42:47.308501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.891 [2024-10-14 14:42:47.308511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.891 qpair failed and we were unable to recover it. 00:29:06.891 [2024-10-14 14:42:47.308819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.891 [2024-10-14 14:42:47.308833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.891 qpair failed and we were unable to recover it. 
00:29:06.891 [2024-10-14 14:42:47.309041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.891 [2024-10-14 14:42:47.309051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.891 qpair failed and we were unable to recover it. 00:29:06.891 [2024-10-14 14:42:47.309271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.891 [2024-10-14 14:42:47.309282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.891 qpair failed and we were unable to recover it. 00:29:06.891 [2024-10-14 14:42:47.309614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.891 [2024-10-14 14:42:47.309624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.891 qpair failed and we were unable to recover it. 00:29:06.891 [2024-10-14 14:42:47.309907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.891 [2024-10-14 14:42:47.309918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.891 qpair failed and we were unable to recover it. 00:29:06.891 [2024-10-14 14:42:47.310097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.891 [2024-10-14 14:42:47.310108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.891 qpair failed and we were unable to recover it. 
00:29:06.891 [2024-10-14 14:42:47.310421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.891 [2024-10-14 14:42:47.310431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.891 qpair failed and we were unable to recover it. 00:29:06.891 [2024-10-14 14:42:47.310736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.891 [2024-10-14 14:42:47.310746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.891 qpair failed and we were unable to recover it. 00:29:06.891 [2024-10-14 14:42:47.311070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.891 [2024-10-14 14:42:47.311081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.891 qpair failed and we were unable to recover it. 00:29:06.891 [2024-10-14 14:42:47.311397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.891 [2024-10-14 14:42:47.311407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.891 qpair failed and we were unable to recover it. 00:29:06.891 [2024-10-14 14:42:47.311737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.891 [2024-10-14 14:42:47.311747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.891 qpair failed and we were unable to recover it. 
00:29:06.891 [2024-10-14 14:42:47.312056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.891 [2024-10-14 14:42:47.312070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.891 qpair failed and we were unable to recover it. 00:29:06.891 [2024-10-14 14:42:47.312441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.891 [2024-10-14 14:42:47.312451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.891 qpair failed and we were unable to recover it. 00:29:06.891 [2024-10-14 14:42:47.312758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.891 [2024-10-14 14:42:47.312768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.891 qpair failed and we were unable to recover it. 00:29:06.891 [2024-10-14 14:42:47.313067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.891 [2024-10-14 14:42:47.313078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.891 qpair failed and we were unable to recover it. 00:29:06.891 [2024-10-14 14:42:47.313381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.891 [2024-10-14 14:42:47.313391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.891 qpair failed and we were unable to recover it. 
00:29:06.891 [2024-10-14 14:42:47.313683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.891 [2024-10-14 14:42:47.313692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.891 qpair failed and we were unable to recover it. 00:29:06.891 [2024-10-14 14:42:47.314004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.891 [2024-10-14 14:42:47.314014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.891 qpair failed and we were unable to recover it. 00:29:06.891 [2024-10-14 14:42:47.314300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.891 [2024-10-14 14:42:47.314310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.891 qpair failed and we were unable to recover it. 00:29:06.891 [2024-10-14 14:42:47.314615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.891 [2024-10-14 14:42:47.314625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.891 qpair failed and we were unable to recover it. 00:29:06.891 [2024-10-14 14:42:47.314942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.891 [2024-10-14 14:42:47.314952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.891 qpair failed and we were unable to recover it. 
00:29:06.891 [2024-10-14 14:42:47.315233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.891 [2024-10-14 14:42:47.315243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.891 qpair failed and we were unable to recover it. 00:29:06.891 [2024-10-14 14:42:47.315561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.891 [2024-10-14 14:42:47.315571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.891 qpair failed and we were unable to recover it. 00:29:06.891 [2024-10-14 14:42:47.315876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.891 [2024-10-14 14:42:47.315886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.891 qpair failed and we were unable to recover it. 00:29:06.891 [2024-10-14 14:42:47.316177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.891 [2024-10-14 14:42:47.316187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.891 qpair failed and we were unable to recover it. 00:29:06.891 [2024-10-14 14:42:47.316500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.891 [2024-10-14 14:42:47.316509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.891 qpair failed and we were unable to recover it. 
00:29:06.891 [2024-10-14 14:42:47.316799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.891 [2024-10-14 14:42:47.316809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.891 qpair failed and we were unable to recover it. 00:29:06.891 [2024-10-14 14:42:47.317114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.891 [2024-10-14 14:42:47.317129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.891 qpair failed and we were unable to recover it. 00:29:06.891 [2024-10-14 14:42:47.317422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.891 [2024-10-14 14:42:47.317433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.891 qpair failed and we were unable to recover it. 00:29:06.891 [2024-10-14 14:42:47.317742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.891 [2024-10-14 14:42:47.317752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.891 qpair failed and we were unable to recover it. 00:29:06.891 [2024-10-14 14:42:47.318034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.891 [2024-10-14 14:42:47.318044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.891 qpair failed and we were unable to recover it. 
00:29:06.894 [2024-10-14 14:42:47.351416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.894 [2024-10-14 14:42:47.351425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.894 qpair failed and we were unable to recover it. 00:29:06.894 [2024-10-14 14:42:47.351728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.894 [2024-10-14 14:42:47.351738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.894 qpair failed and we were unable to recover it. 00:29:06.894 [2024-10-14 14:42:47.352051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.894 [2024-10-14 14:42:47.352065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.894 qpair failed and we were unable to recover it. 00:29:06.894 [2024-10-14 14:42:47.352372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.894 [2024-10-14 14:42:47.352383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.894 qpair failed and we were unable to recover it. 00:29:06.894 [2024-10-14 14:42:47.352689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.894 [2024-10-14 14:42:47.352700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.894 qpair failed and we were unable to recover it. 
00:29:06.894 [2024-10-14 14:42:47.353029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.894 [2024-10-14 14:42:47.353040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.894 qpair failed and we were unable to recover it. 00:29:06.894 [2024-10-14 14:42:47.353348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.894 [2024-10-14 14:42:47.353358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.894 qpair failed and we were unable to recover it. 00:29:06.894 [2024-10-14 14:42:47.353667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.894 [2024-10-14 14:42:47.353677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.894 qpair failed and we were unable to recover it. 00:29:06.894 [2024-10-14 14:42:47.353989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.894 [2024-10-14 14:42:47.354000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.894 qpair failed and we were unable to recover it. 00:29:06.895 [2024-10-14 14:42:47.354304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.895 [2024-10-14 14:42:47.354314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.895 qpair failed and we were unable to recover it. 
00:29:06.895 [2024-10-14 14:42:47.354616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.895 [2024-10-14 14:42:47.354626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.895 qpair failed and we were unable to recover it. 00:29:06.895 [2024-10-14 14:42:47.354911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.895 [2024-10-14 14:42:47.354921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.895 qpair failed and we were unable to recover it. 00:29:06.895 [2024-10-14 14:42:47.355232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.895 [2024-10-14 14:42:47.355243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.895 qpair failed and we were unable to recover it. 00:29:06.895 [2024-10-14 14:42:47.355427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.895 [2024-10-14 14:42:47.355437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.895 qpair failed and we were unable to recover it. 00:29:06.895 [2024-10-14 14:42:47.355771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.895 [2024-10-14 14:42:47.355780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.895 qpair failed and we were unable to recover it. 
00:29:06.895 [2024-10-14 14:42:47.356070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.895 [2024-10-14 14:42:47.356080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.895 qpair failed and we were unable to recover it. 00:29:06.895 [2024-10-14 14:42:47.356398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.895 [2024-10-14 14:42:47.356407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.895 qpair failed and we were unable to recover it. 00:29:06.895 [2024-10-14 14:42:47.356693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.895 [2024-10-14 14:42:47.356704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.895 qpair failed and we were unable to recover it. 00:29:06.895 [2024-10-14 14:42:47.357011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.895 [2024-10-14 14:42:47.357022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.895 qpair failed and we were unable to recover it. 00:29:06.895 [2024-10-14 14:42:47.357316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.895 [2024-10-14 14:42:47.357327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.895 qpair failed and we were unable to recover it. 
00:29:06.895 [2024-10-14 14:42:47.357634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.895 [2024-10-14 14:42:47.357644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.895 qpair failed and we were unable to recover it. 00:29:06.895 [2024-10-14 14:42:47.357930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.895 [2024-10-14 14:42:47.357940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.895 qpair failed and we were unable to recover it. 00:29:06.895 [2024-10-14 14:42:47.358240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.895 [2024-10-14 14:42:47.358250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.895 qpair failed and we were unable to recover it. 00:29:06.895 [2024-10-14 14:42:47.358569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.895 [2024-10-14 14:42:47.358580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.895 qpair failed and we were unable to recover it. 00:29:06.895 [2024-10-14 14:42:47.358891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.895 [2024-10-14 14:42:47.358901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.895 qpair failed and we were unable to recover it. 
00:29:06.895 [2024-10-14 14:42:47.359191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.895 [2024-10-14 14:42:47.359202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.895 qpair failed and we were unable to recover it. 00:29:06.895 [2024-10-14 14:42:47.359491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.895 [2024-10-14 14:42:47.359501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.895 qpair failed and we were unable to recover it. 00:29:06.895 [2024-10-14 14:42:47.359837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.895 [2024-10-14 14:42:47.359847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.895 qpair failed and we were unable to recover it. 00:29:06.895 [2024-10-14 14:42:47.360155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.895 [2024-10-14 14:42:47.360165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.895 qpair failed and we were unable to recover it. 00:29:06.895 [2024-10-14 14:42:47.360503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.895 [2024-10-14 14:42:47.360515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.895 qpair failed and we were unable to recover it. 
00:29:06.895 [2024-10-14 14:42:47.360816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.895 [2024-10-14 14:42:47.360827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.895 qpair failed and we were unable to recover it. 00:29:06.895 [2024-10-14 14:42:47.361139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.895 [2024-10-14 14:42:47.361149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.895 qpair failed and we were unable to recover it. 00:29:06.895 [2024-10-14 14:42:47.361470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.895 [2024-10-14 14:42:47.361479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.895 qpair failed and we were unable to recover it. 00:29:06.895 [2024-10-14 14:42:47.361774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.895 [2024-10-14 14:42:47.361783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.895 qpair failed and we were unable to recover it. 00:29:06.895 [2024-10-14 14:42:47.362105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.895 [2024-10-14 14:42:47.362115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.895 qpair failed and we were unable to recover it. 
00:29:06.895 [2024-10-14 14:42:47.362406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.895 [2024-10-14 14:42:47.362416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.895 qpair failed and we were unable to recover it. 00:29:06.895 [2024-10-14 14:42:47.362619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.895 [2024-10-14 14:42:47.362629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.895 qpair failed and we were unable to recover it. 00:29:06.895 [2024-10-14 14:42:47.363004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.895 [2024-10-14 14:42:47.363015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.895 qpair failed and we were unable to recover it. 00:29:06.895 [2024-10-14 14:42:47.363201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.895 [2024-10-14 14:42:47.363211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.895 qpair failed and we were unable to recover it. 00:29:06.895 [2024-10-14 14:42:47.363506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.895 [2024-10-14 14:42:47.363516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.895 qpair failed and we were unable to recover it. 
00:29:06.895 [2024-10-14 14:42:47.363827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.895 [2024-10-14 14:42:47.363836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.895 qpair failed and we were unable to recover it. 00:29:06.895 [2024-10-14 14:42:47.364127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.895 [2024-10-14 14:42:47.364137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.895 qpair failed and we were unable to recover it. 00:29:06.895 [2024-10-14 14:42:47.364424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.895 [2024-10-14 14:42:47.364435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.895 qpair failed and we were unable to recover it. 00:29:06.895 [2024-10-14 14:42:47.364608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.895 [2024-10-14 14:42:47.364618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.895 qpair failed and we were unable to recover it. 00:29:06.895 [2024-10-14 14:42:47.364797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.895 [2024-10-14 14:42:47.364809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.895 qpair failed and we were unable to recover it. 
00:29:06.895 [2024-10-14 14:42:47.364992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.895 [2024-10-14 14:42:47.365004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.895 qpair failed and we were unable to recover it. 00:29:06.895 [2024-10-14 14:42:47.365172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.895 [2024-10-14 14:42:47.365182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.895 qpair failed and we were unable to recover it. 00:29:06.895 [2024-10-14 14:42:47.365448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.895 [2024-10-14 14:42:47.365458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.895 qpair failed and we were unable to recover it. 00:29:06.895 [2024-10-14 14:42:47.365663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.895 [2024-10-14 14:42:47.365673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.896 qpair failed and we were unable to recover it. 00:29:06.896 [2024-10-14 14:42:47.365878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.896 [2024-10-14 14:42:47.365889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.896 qpair failed and we were unable to recover it. 
00:29:06.896 [2024-10-14 14:42:47.366219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.896 [2024-10-14 14:42:47.366230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.896 qpair failed and we were unable to recover it. 00:29:06.896 [2024-10-14 14:42:47.366469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.896 [2024-10-14 14:42:47.366479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.896 qpair failed and we were unable to recover it. 00:29:06.896 [2024-10-14 14:42:47.366781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.896 [2024-10-14 14:42:47.366790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.896 qpair failed and we were unable to recover it. 00:29:06.896 [2024-10-14 14:42:47.367141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.896 [2024-10-14 14:42:47.367160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.896 qpair failed and we were unable to recover it. 00:29:06.896 [2024-10-14 14:42:47.367556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.896 [2024-10-14 14:42:47.367567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.896 qpair failed and we were unable to recover it. 
00:29:06.896 [2024-10-14 14:42:47.367860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.896 [2024-10-14 14:42:47.367871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.896 qpair failed and we were unable to recover it. 00:29:06.896 [2024-10-14 14:42:47.368176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.896 [2024-10-14 14:42:47.368186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.896 qpair failed and we were unable to recover it. 00:29:06.896 [2024-10-14 14:42:47.368588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.896 [2024-10-14 14:42:47.368597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.896 qpair failed and we were unable to recover it. 00:29:06.896 [2024-10-14 14:42:47.368877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.896 [2024-10-14 14:42:47.368887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.896 qpair failed and we were unable to recover it. 00:29:06.896 [2024-10-14 14:42:47.369159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.896 [2024-10-14 14:42:47.369169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.896 qpair failed and we were unable to recover it. 
00:29:06.896 [2024-10-14 14:42:47.369556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.896 [2024-10-14 14:42:47.369566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.896 qpair failed and we were unable to recover it. 00:29:06.896 [2024-10-14 14:42:47.369861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.896 [2024-10-14 14:42:47.369871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.896 qpair failed and we were unable to recover it. 00:29:06.896 [2024-10-14 14:42:47.370175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.896 [2024-10-14 14:42:47.370185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.896 qpair failed and we were unable to recover it. 00:29:06.896 [2024-10-14 14:42:47.370474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.896 [2024-10-14 14:42:47.370487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.896 qpair failed and we were unable to recover it. 00:29:06.896 [2024-10-14 14:42:47.370797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.896 [2024-10-14 14:42:47.370808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.896 qpair failed and we were unable to recover it. 
00:29:06.896 [2024-10-14 14:42:47.371078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.896 [2024-10-14 14:42:47.371090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.896 qpair failed and we were unable to recover it. 00:29:06.896 [2024-10-14 14:42:47.371428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.896 [2024-10-14 14:42:47.371438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.896 qpair failed and we were unable to recover it. 00:29:06.896 [2024-10-14 14:42:47.371757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.896 [2024-10-14 14:42:47.371767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.896 qpair failed and we were unable to recover it. 00:29:06.896 [2024-10-14 14:42:47.372058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.896 [2024-10-14 14:42:47.372074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.896 qpair failed and we were unable to recover it. 00:29:06.896 [2024-10-14 14:42:47.372413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.896 [2024-10-14 14:42:47.372423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.896 qpair failed and we were unable to recover it. 
00:29:06.896 [2024-10-14 14:42:47.372625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.896 [2024-10-14 14:42:47.372636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.896 qpair failed and we were unable to recover it. 00:29:06.896 [2024-10-14 14:42:47.372803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.896 [2024-10-14 14:42:47.372813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.896 qpair failed and we were unable to recover it. 00:29:06.896 [2024-10-14 14:42:47.373028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.896 [2024-10-14 14:42:47.373039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.896 qpair failed and we were unable to recover it. 00:29:06.896 [2024-10-14 14:42:47.373334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.896 [2024-10-14 14:42:47.373344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.896 qpair failed and we were unable to recover it. 00:29:06.896 [2024-10-14 14:42:47.373639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.896 [2024-10-14 14:42:47.373649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.896 qpair failed and we were unable to recover it. 
00:29:06.896 [2024-10-14 14:42:47.373928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.896 [2024-10-14 14:42:47.373937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.896 qpair failed and we were unable to recover it.
[... the same two-line error pair (connect() failed, errno = 111 / sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it.") repeats ~115 more times between 14:42:47.374 and 14:42:47.409; identical entries elided ...]
00:29:06.899 [2024-10-14 14:42:47.409336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.899 [2024-10-14 14:42:47.409346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.899 qpair failed and we were unable to recover it. 00:29:06.899 [2024-10-14 14:42:47.409525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.899 [2024-10-14 14:42:47.409535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.899 qpair failed and we were unable to recover it. 00:29:06.899 [2024-10-14 14:42:47.409855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.899 [2024-10-14 14:42:47.409866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.899 qpair failed and we were unable to recover it. 00:29:06.899 [2024-10-14 14:42:47.410151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.899 [2024-10-14 14:42:47.410162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.899 qpair failed and we were unable to recover it. 00:29:06.899 [2024-10-14 14:42:47.410431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.899 [2024-10-14 14:42:47.410440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.899 qpair failed and we were unable to recover it. 
00:29:06.899 [2024-10-14 14:42:47.410764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.899 [2024-10-14 14:42:47.410773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.899 qpair failed and we were unable to recover it. 00:29:06.899 [2024-10-14 14:42:47.410950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.899 [2024-10-14 14:42:47.410961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.899 qpair failed and we were unable to recover it. 00:29:06.899 [2024-10-14 14:42:47.411301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.899 [2024-10-14 14:42:47.411311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.899 qpair failed and we were unable to recover it. 00:29:06.899 [2024-10-14 14:42:47.411603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.899 [2024-10-14 14:42:47.411613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.899 qpair failed and we were unable to recover it. 00:29:06.899 [2024-10-14 14:42:47.411923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.899 [2024-10-14 14:42:47.411934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.899 qpair failed and we were unable to recover it. 
00:29:06.899 [2024-10-14 14:42:47.412146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.899 [2024-10-14 14:42:47.412163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.899 qpair failed and we were unable to recover it. 00:29:06.899 [2024-10-14 14:42:47.412494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.899 [2024-10-14 14:42:47.412504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.899 qpair failed and we were unable to recover it. 00:29:06.899 [2024-10-14 14:42:47.412787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.899 [2024-10-14 14:42:47.412797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.899 qpair failed and we were unable to recover it. 00:29:06.900 [2024-10-14 14:42:47.412934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.900 [2024-10-14 14:42:47.412944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.900 qpair failed and we were unable to recover it. 00:29:06.900 [2024-10-14 14:42:47.413289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.900 [2024-10-14 14:42:47.413300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.900 qpair failed and we were unable to recover it. 
00:29:06.900 [2024-10-14 14:42:47.413580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.900 [2024-10-14 14:42:47.413590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.900 qpair failed and we were unable to recover it. 00:29:06.900 [2024-10-14 14:42:47.413904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.900 [2024-10-14 14:42:47.413914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.900 qpair failed and we were unable to recover it. 00:29:06.900 [2024-10-14 14:42:47.414209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.900 [2024-10-14 14:42:47.414219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.900 qpair failed and we were unable to recover it. 00:29:06.900 [2024-10-14 14:42:47.414402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.900 [2024-10-14 14:42:47.414412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.900 qpair failed and we were unable to recover it. 00:29:06.900 [2024-10-14 14:42:47.414693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.900 [2024-10-14 14:42:47.414704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.900 qpair failed and we were unable to recover it. 
00:29:06.900 [2024-10-14 14:42:47.415013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.900 [2024-10-14 14:42:47.415023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.900 qpair failed and we were unable to recover it. 00:29:06.900 [2024-10-14 14:42:47.415304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.900 [2024-10-14 14:42:47.415314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.900 qpair failed and we were unable to recover it. 00:29:06.900 [2024-10-14 14:42:47.415625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.900 [2024-10-14 14:42:47.415635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.900 qpair failed and we were unable to recover it. 00:29:06.900 [2024-10-14 14:42:47.415923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.900 [2024-10-14 14:42:47.415933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.900 qpair failed and we were unable to recover it. 00:29:06.900 [2024-10-14 14:42:47.416122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.900 [2024-10-14 14:42:47.416132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.900 qpair failed and we were unable to recover it. 
00:29:06.900 [2024-10-14 14:42:47.416435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.900 [2024-10-14 14:42:47.416446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.900 qpair failed and we were unable to recover it. 00:29:06.900 [2024-10-14 14:42:47.416753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.900 [2024-10-14 14:42:47.416763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.900 qpair failed and we were unable to recover it. 00:29:06.900 [2024-10-14 14:42:47.417084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.900 [2024-10-14 14:42:47.417095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.900 qpair failed and we were unable to recover it. 00:29:06.900 [2024-10-14 14:42:47.417393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.900 [2024-10-14 14:42:47.417403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.900 qpair failed and we were unable to recover it. 00:29:06.900 [2024-10-14 14:42:47.417692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.900 [2024-10-14 14:42:47.417702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.900 qpair failed and we were unable to recover it. 
00:29:06.900 [2024-10-14 14:42:47.418006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.900 [2024-10-14 14:42:47.418016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.900 qpair failed and we were unable to recover it. 00:29:06.900 [2024-10-14 14:42:47.418304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.900 [2024-10-14 14:42:47.418315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.900 qpair failed and we were unable to recover it. 00:29:06.900 [2024-10-14 14:42:47.418619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.900 [2024-10-14 14:42:47.418629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.900 qpair failed and we were unable to recover it. 00:29:06.900 [2024-10-14 14:42:47.418935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.900 [2024-10-14 14:42:47.418945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.900 qpair failed and we were unable to recover it. 00:29:06.900 [2024-10-14 14:42:47.419254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.900 [2024-10-14 14:42:47.419263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.900 qpair failed and we were unable to recover it. 
00:29:06.900 [2024-10-14 14:42:47.419565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.900 [2024-10-14 14:42:47.419575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.900 qpair failed and we were unable to recover it. 00:29:06.900 [2024-10-14 14:42:47.419908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.900 [2024-10-14 14:42:47.419918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.900 qpair failed and we were unable to recover it. 00:29:06.900 [2024-10-14 14:42:47.420215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.900 [2024-10-14 14:42:47.420226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.900 qpair failed and we were unable to recover it. 00:29:06.900 [2024-10-14 14:42:47.420527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.900 [2024-10-14 14:42:47.420538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.900 qpair failed and we were unable to recover it. 00:29:06.900 [2024-10-14 14:42:47.420828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.900 [2024-10-14 14:42:47.420839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.900 qpair failed and we were unable to recover it. 
00:29:06.900 [2024-10-14 14:42:47.421136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.900 [2024-10-14 14:42:47.421147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.900 qpair failed and we were unable to recover it. 00:29:06.900 [2024-10-14 14:42:47.421441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.900 [2024-10-14 14:42:47.421451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.900 qpair failed and we were unable to recover it. 00:29:06.900 [2024-10-14 14:42:47.421760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.900 [2024-10-14 14:42:47.421770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.900 qpair failed and we were unable to recover it. 00:29:06.900 [2024-10-14 14:42:47.422094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.900 [2024-10-14 14:42:47.422104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.900 qpair failed and we were unable to recover it. 00:29:06.900 [2024-10-14 14:42:47.422389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.900 [2024-10-14 14:42:47.422399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.900 qpair failed and we were unable to recover it. 
00:29:06.900 [2024-10-14 14:42:47.422722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.900 [2024-10-14 14:42:47.422731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.900 qpair failed and we were unable to recover it. 00:29:06.900 [2024-10-14 14:42:47.423024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.900 [2024-10-14 14:42:47.423034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.900 qpair failed and we were unable to recover it. 00:29:06.900 [2024-10-14 14:42:47.423340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.900 [2024-10-14 14:42:47.423351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.900 qpair failed and we were unable to recover it. 00:29:06.900 [2024-10-14 14:42:47.423656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.900 [2024-10-14 14:42:47.423667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.900 qpair failed and we were unable to recover it. 00:29:06.900 [2024-10-14 14:42:47.423960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.900 [2024-10-14 14:42:47.423970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.900 qpair failed and we were unable to recover it. 
00:29:06.900 [2024-10-14 14:42:47.424276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.900 [2024-10-14 14:42:47.424289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.900 qpair failed and we were unable to recover it. 00:29:06.900 [2024-10-14 14:42:47.424579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.901 [2024-10-14 14:42:47.424589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.901 qpair failed and we were unable to recover it. 00:29:06.901 [2024-10-14 14:42:47.424899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.901 [2024-10-14 14:42:47.424909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.901 qpair failed and we were unable to recover it. 00:29:06.901 [2024-10-14 14:42:47.425223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.901 [2024-10-14 14:42:47.425234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.901 qpair failed and we were unable to recover it. 00:29:06.901 [2024-10-14 14:42:47.425495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.901 [2024-10-14 14:42:47.425504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.901 qpair failed and we were unable to recover it. 
00:29:06.901 [2024-10-14 14:42:47.425840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.901 [2024-10-14 14:42:47.425850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.901 qpair failed and we were unable to recover it. 00:29:06.901 [2024-10-14 14:42:47.426131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.901 [2024-10-14 14:42:47.426141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.901 qpair failed and we were unable to recover it. 00:29:06.901 [2024-10-14 14:42:47.426441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.901 [2024-10-14 14:42:47.426451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.901 qpair failed and we were unable to recover it. 00:29:06.901 [2024-10-14 14:42:47.426717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.901 [2024-10-14 14:42:47.426728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.901 qpair failed and we were unable to recover it. 00:29:06.901 [2024-10-14 14:42:47.427048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.901 [2024-10-14 14:42:47.427058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.901 qpair failed and we were unable to recover it. 
00:29:06.901 [2024-10-14 14:42:47.427396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.901 [2024-10-14 14:42:47.427407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.901 qpair failed and we were unable to recover it. 00:29:06.901 [2024-10-14 14:42:47.427594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.901 [2024-10-14 14:42:47.427604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.901 qpair failed and we were unable to recover it. 00:29:06.901 [2024-10-14 14:42:47.427877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.901 [2024-10-14 14:42:47.427887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.901 qpair failed and we were unable to recover it. 00:29:06.901 [2024-10-14 14:42:47.428089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.901 [2024-10-14 14:42:47.428099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.901 qpair failed and we were unable to recover it. 00:29:06.901 [2024-10-14 14:42:47.428472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.901 [2024-10-14 14:42:47.428482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.901 qpair failed and we were unable to recover it. 
00:29:06.901 [2024-10-14 14:42:47.428781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.901 [2024-10-14 14:42:47.428790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.901 qpair failed and we were unable to recover it. 00:29:06.901 [2024-10-14 14:42:47.429120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.901 [2024-10-14 14:42:47.429131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.901 qpair failed and we were unable to recover it. 00:29:06.901 [2024-10-14 14:42:47.429449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.901 [2024-10-14 14:42:47.429459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.901 qpair failed and we were unable to recover it. 00:29:06.901 [2024-10-14 14:42:47.429738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.901 [2024-10-14 14:42:47.429748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.901 qpair failed and we were unable to recover it. 00:29:06.901 [2024-10-14 14:42:47.430038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.901 [2024-10-14 14:42:47.430048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.901 qpair failed and we were unable to recover it. 
00:29:06.901 [2024-10-14 14:42:47.430264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.901 [2024-10-14 14:42:47.430274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.901 qpair failed and we were unable to recover it. 00:29:06.901 [2024-10-14 14:42:47.430582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.901 [2024-10-14 14:42:47.430591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.901 qpair failed and we were unable to recover it. 00:29:06.901 [2024-10-14 14:42:47.430917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.901 [2024-10-14 14:42:47.430927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.901 qpair failed and we were unable to recover it. 00:29:06.901 [2024-10-14 14:42:47.431116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.901 [2024-10-14 14:42:47.431127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.901 qpair failed and we were unable to recover it. 00:29:06.901 [2024-10-14 14:42:47.431433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.901 [2024-10-14 14:42:47.431443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.901 qpair failed and we were unable to recover it. 
00:29:06.901 [2024-10-14 14:42:47.431721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.901 [2024-10-14 14:42:47.431731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.901 qpair failed and we were unable to recover it.
[the same error triplet — posix_sock_create connect() failed (errno = 111), nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it." — repeats continuously from 14:42:47.431 through 14:42:47.465]
00:29:06.904 [2024-10-14 14:42:47.465549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.904 [2024-10-14 14:42:47.465559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.904 qpair failed and we were unable to recover it. 00:29:06.904 [2024-10-14 14:42:47.465761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.904 [2024-10-14 14:42:47.465771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.904 qpair failed and we were unable to recover it. 00:29:06.904 [2024-10-14 14:42:47.466097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.904 [2024-10-14 14:42:47.466107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.904 qpair failed and we were unable to recover it. 00:29:06.904 [2024-10-14 14:42:47.466415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.904 [2024-10-14 14:42:47.466425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.904 qpair failed and we were unable to recover it. 00:29:06.904 [2024-10-14 14:42:47.466736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.904 [2024-10-14 14:42:47.466746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.904 qpair failed and we were unable to recover it. 
00:29:06.904 [2024-10-14 14:42:47.467016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.904 [2024-10-14 14:42:47.467026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.904 qpair failed and we were unable to recover it. 00:29:06.904 [2024-10-14 14:42:47.467451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.904 [2024-10-14 14:42:47.467461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.904 qpair failed and we were unable to recover it. 00:29:06.904 [2024-10-14 14:42:47.467742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.904 [2024-10-14 14:42:47.467752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.904 qpair failed and we were unable to recover it. 00:29:06.904 [2024-10-14 14:42:47.468033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.904 [2024-10-14 14:42:47.468043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.904 qpair failed and we were unable to recover it. 00:29:06.904 [2024-10-14 14:42:47.468437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.904 [2024-10-14 14:42:47.468448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.904 qpair failed and we were unable to recover it. 
00:29:06.904 [2024-10-14 14:42:47.468734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.904 [2024-10-14 14:42:47.468744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.904 qpair failed and we were unable to recover it. 00:29:06.904 [2024-10-14 14:42:47.469050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.904 [2024-10-14 14:42:47.469060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.904 qpair failed and we were unable to recover it. 00:29:06.904 [2024-10-14 14:42:47.469377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.904 [2024-10-14 14:42:47.469387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.904 qpair failed and we were unable to recover it. 00:29:06.904 [2024-10-14 14:42:47.469703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.904 [2024-10-14 14:42:47.469713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.904 qpair failed and we were unable to recover it. 00:29:06.904 [2024-10-14 14:42:47.469985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.904 [2024-10-14 14:42:47.469997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.904 qpair failed and we were unable to recover it. 
00:29:06.904 [2024-10-14 14:42:47.470315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.904 [2024-10-14 14:42:47.470325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.904 qpair failed and we were unable to recover it. 00:29:06.904 [2024-10-14 14:42:47.470604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.904 [2024-10-14 14:42:47.470614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.904 qpair failed and we were unable to recover it. 00:29:06.904 [2024-10-14 14:42:47.470923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.904 [2024-10-14 14:42:47.470933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.904 qpair failed and we were unable to recover it. 00:29:06.904 [2024-10-14 14:42:47.471332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.905 [2024-10-14 14:42:47.471342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.905 qpair failed and we were unable to recover it. 00:29:06.905 [2024-10-14 14:42:47.471628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.905 [2024-10-14 14:42:47.471637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.905 qpair failed and we were unable to recover it. 
00:29:06.905 [2024-10-14 14:42:47.471764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.905 [2024-10-14 14:42:47.471773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.905 qpair failed and we were unable to recover it. 00:29:06.905 [2024-10-14 14:42:47.472035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.905 [2024-10-14 14:42:47.472045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.905 qpair failed and we were unable to recover it. 00:29:06.905 [2024-10-14 14:42:47.472222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.905 [2024-10-14 14:42:47.472232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.905 qpair failed and we were unable to recover it. 00:29:06.905 [2024-10-14 14:42:47.472558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.905 [2024-10-14 14:42:47.472569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.905 qpair failed and we were unable to recover it. 00:29:06.905 [2024-10-14 14:42:47.472946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.905 [2024-10-14 14:42:47.472956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.905 qpair failed and we were unable to recover it. 
00:29:06.905 [2024-10-14 14:42:47.473122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.905 [2024-10-14 14:42:47.473132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.905 qpair failed and we were unable to recover it. 00:29:06.905 [2024-10-14 14:42:47.473294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.905 [2024-10-14 14:42:47.473303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.905 qpair failed and we were unable to recover it. 00:29:06.905 [2024-10-14 14:42:47.473586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.905 [2024-10-14 14:42:47.473596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.905 qpair failed and we were unable to recover it. 00:29:06.905 [2024-10-14 14:42:47.473904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.905 [2024-10-14 14:42:47.473915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.905 qpair failed and we were unable to recover it. 00:29:06.905 [2024-10-14 14:42:47.474216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.905 [2024-10-14 14:42:47.474226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.905 qpair failed and we were unable to recover it. 
00:29:06.905 [2024-10-14 14:42:47.474518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.905 [2024-10-14 14:42:47.474528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.905 qpair failed and we were unable to recover it. 00:29:06.905 [2024-10-14 14:42:47.474827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.905 [2024-10-14 14:42:47.474837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.905 qpair failed and we were unable to recover it. 00:29:06.905 [2024-10-14 14:42:47.475027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.905 [2024-10-14 14:42:47.475038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.905 qpair failed and we were unable to recover it. 00:29:06.905 [2024-10-14 14:42:47.475379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.905 [2024-10-14 14:42:47.475389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.905 qpair failed and we were unable to recover it. 00:29:06.905 [2024-10-14 14:42:47.475670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.905 [2024-10-14 14:42:47.475681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.905 qpair failed and we were unable to recover it. 
00:29:06.905 [2024-10-14 14:42:47.475984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.905 [2024-10-14 14:42:47.475995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.905 qpair failed and we were unable to recover it. 00:29:06.905 [2024-10-14 14:42:47.476298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.905 [2024-10-14 14:42:47.476308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.905 qpair failed and we were unable to recover it. 00:29:06.905 [2024-10-14 14:42:47.476613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.905 [2024-10-14 14:42:47.476625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.905 qpair failed and we were unable to recover it. 00:29:06.905 [2024-10-14 14:42:47.476819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.905 [2024-10-14 14:42:47.476829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.905 qpair failed and we were unable to recover it. 00:29:06.905 [2024-10-14 14:42:47.477160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.905 [2024-10-14 14:42:47.477170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.905 qpair failed and we were unable to recover it. 
00:29:06.905 [2024-10-14 14:42:47.477372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.905 [2024-10-14 14:42:47.477382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.905 qpair failed and we were unable to recover it. 00:29:06.905 [2024-10-14 14:42:47.477657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.905 [2024-10-14 14:42:47.477666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.905 qpair failed and we were unable to recover it. 00:29:06.905 [2024-10-14 14:42:47.477982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.905 [2024-10-14 14:42:47.477992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.905 qpair failed and we were unable to recover it. 00:29:06.905 [2024-10-14 14:42:47.478288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.905 [2024-10-14 14:42:47.478299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.905 qpair failed and we were unable to recover it. 00:29:06.905 [2024-10-14 14:42:47.478588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.905 [2024-10-14 14:42:47.478598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.905 qpair failed and we were unable to recover it. 
00:29:06.905 [2024-10-14 14:42:47.478907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.905 [2024-10-14 14:42:47.478916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.905 qpair failed and we were unable to recover it. 00:29:06.905 [2024-10-14 14:42:47.479308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.905 [2024-10-14 14:42:47.479319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.905 qpair failed and we were unable to recover it. 00:29:06.905 [2024-10-14 14:42:47.479623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.905 [2024-10-14 14:42:47.479633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.905 qpair failed and we were unable to recover it. 00:29:06.905 [2024-10-14 14:42:47.479965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.905 [2024-10-14 14:42:47.479976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.905 qpair failed and we were unable to recover it. 00:29:06.905 [2024-10-14 14:42:47.480298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.905 [2024-10-14 14:42:47.480309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.905 qpair failed and we were unable to recover it. 
00:29:06.905 [2024-10-14 14:42:47.480517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.905 [2024-10-14 14:42:47.480527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.905 qpair failed and we were unable to recover it. 00:29:06.905 [2024-10-14 14:42:47.480829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.905 [2024-10-14 14:42:47.480839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.905 qpair failed and we were unable to recover it. 00:29:06.905 [2024-10-14 14:42:47.481125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.905 [2024-10-14 14:42:47.481135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.905 qpair failed and we were unable to recover it. 00:29:06.905 [2024-10-14 14:42:47.481430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.905 [2024-10-14 14:42:47.481440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.905 qpair failed and we were unable to recover it. 00:29:06.905 [2024-10-14 14:42:47.481819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.905 [2024-10-14 14:42:47.481829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.905 qpair failed and we were unable to recover it. 
00:29:06.905 [2024-10-14 14:42:47.482135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.905 [2024-10-14 14:42:47.482146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.905 qpair failed and we were unable to recover it. 00:29:06.905 [2024-10-14 14:42:47.482392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.905 [2024-10-14 14:42:47.482402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.905 qpair failed and we were unable to recover it. 00:29:06.905 [2024-10-14 14:42:47.482608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.905 [2024-10-14 14:42:47.482618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.905 qpair failed and we were unable to recover it. 00:29:06.905 [2024-10-14 14:42:47.482940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.905 [2024-10-14 14:42:47.482950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.905 qpair failed and we were unable to recover it. 00:29:06.905 [2024-10-14 14:42:47.483229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.906 [2024-10-14 14:42:47.483239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.906 qpair failed and we were unable to recover it. 
00:29:06.906 [2024-10-14 14:42:47.483532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.906 [2024-10-14 14:42:47.483541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.906 qpair failed and we were unable to recover it. 00:29:06.906 [2024-10-14 14:42:47.483939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.906 [2024-10-14 14:42:47.483949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.906 qpair failed and we were unable to recover it. 00:29:06.906 [2024-10-14 14:42:47.484144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.906 [2024-10-14 14:42:47.484154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.906 qpair failed and we were unable to recover it. 00:29:06.906 [2024-10-14 14:42:47.484318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.906 [2024-10-14 14:42:47.484329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.906 qpair failed and we were unable to recover it. 00:29:06.906 [2024-10-14 14:42:47.484601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.906 [2024-10-14 14:42:47.484612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.906 qpair failed and we were unable to recover it. 
00:29:06.906 [2024-10-14 14:42:47.484928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.906 [2024-10-14 14:42:47.484938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.906 qpair failed and we were unable to recover it. 00:29:06.906 [2024-10-14 14:42:47.485226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.906 [2024-10-14 14:42:47.485236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.906 qpair failed and we were unable to recover it. 00:29:06.906 [2024-10-14 14:42:47.485627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.906 [2024-10-14 14:42:47.485637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.906 qpair failed and we were unable to recover it. 00:29:06.906 [2024-10-14 14:42:47.485928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.906 [2024-10-14 14:42:47.485938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.906 qpair failed and we were unable to recover it. 00:29:06.906 [2024-10-14 14:42:47.486228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.906 [2024-10-14 14:42:47.486239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.906 qpair failed and we were unable to recover it. 
00:29:06.906 [2024-10-14 14:42:47.486520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.906 [2024-10-14 14:42:47.486530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.906 qpair failed and we were unable to recover it. 00:29:06.906 [2024-10-14 14:42:47.486932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.906 [2024-10-14 14:42:47.486942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.906 qpair failed and we were unable to recover it. 00:29:06.906 [2024-10-14 14:42:47.487230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.906 [2024-10-14 14:42:47.487241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.906 qpair failed and we were unable to recover it. 00:29:06.906 [2024-10-14 14:42:47.487556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.906 [2024-10-14 14:42:47.487566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.906 qpair failed and we were unable to recover it. 00:29:06.906 [2024-10-14 14:42:47.487902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.906 [2024-10-14 14:42:47.487912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.906 qpair failed and we were unable to recover it. 
00:29:06.906 [2024-10-14 14:42:47.488215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.906 [2024-10-14 14:42:47.488226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.906 qpair failed and we were unable to recover it. 00:29:06.906 [2024-10-14 14:42:47.488515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.906 [2024-10-14 14:42:47.488525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.906 qpair failed and we were unable to recover it. 00:29:06.906 [2024-10-14 14:42:47.488830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.906 [2024-10-14 14:42:47.488840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.906 qpair failed and we were unable to recover it. 00:29:06.906 [2024-10-14 14:42:47.489145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.906 [2024-10-14 14:42:47.489155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.906 qpair failed and we were unable to recover it. 00:29:06.906 [2024-10-14 14:42:47.489424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.906 [2024-10-14 14:42:47.489434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.906 qpair failed and we were unable to recover it. 
00:29:06.906 [2024-10-14 14:42:47.489613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.906 [2024-10-14 14:42:47.489623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.906 qpair failed and we were unable to recover it. 00:29:06.906 [2024-10-14 14:42:47.489866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.906 [2024-10-14 14:42:47.489877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.906 qpair failed and we were unable to recover it. 00:29:06.906 [2024-10-14 14:42:47.490051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.906 [2024-10-14 14:42:47.490067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.906 qpair failed and we were unable to recover it. 00:29:06.906 [2024-10-14 14:42:47.490121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.906 [2024-10-14 14:42:47.490133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.906 qpair failed and we were unable to recover it. 00:29:06.906 [2024-10-14 14:42:47.490427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.906 [2024-10-14 14:42:47.490437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.906 qpair failed and we were unable to recover it. 
00:29:06.906 [2024-10-14 14:42:47.490717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.906 [2024-10-14 14:42:47.490727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.906 qpair failed and we were unable to recover it.
00:29:06.906 [2024-10-14 14:42:47.490939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.906 [2024-10-14 14:42:47.490949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.906 qpair failed and we were unable to recover it.
00:29:06.906 [2024-10-14 14:42:47.491241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.906 [2024-10-14 14:42:47.491252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.906 qpair failed and we were unable to recover it.
00:29:06.906 [2024-10-14 14:42:47.491564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.906 [2024-10-14 14:42:47.491574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.906 qpair failed and we were unable to recover it.
00:29:06.906 [2024-10-14 14:42:47.491718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.906 [2024-10-14 14:42:47.491728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.906 qpair failed and we were unable to recover it.
00:29:06.906 [2024-10-14 14:42:47.492005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.906 [2024-10-14 14:42:47.492015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.906 qpair failed and we were unable to recover it.
00:29:06.906 [2024-10-14 14:42:47.492306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.906 [2024-10-14 14:42:47.492316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.906 qpair failed and we were unable to recover it.
00:29:06.906 [2024-10-14 14:42:47.492621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.906 [2024-10-14 14:42:47.492631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.906 qpair failed and we were unable to recover it.
00:29:06.906 [2024-10-14 14:42:47.492804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.906 [2024-10-14 14:42:47.492814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.906 qpair failed and we were unable to recover it.
00:29:06.906 [2024-10-14 14:42:47.492933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.906 [2024-10-14 14:42:47.492942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.906 qpair failed and we were unable to recover it.
00:29:06.906 [2024-10-14 14:42:47.493208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.906 [2024-10-14 14:42:47.493218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.906 qpair failed and we were unable to recover it.
00:29:06.906 [2024-10-14 14:42:47.493497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.906 [2024-10-14 14:42:47.493508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.906 qpair failed and we were unable to recover it.
00:29:06.906 [2024-10-14 14:42:47.493812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.906 [2024-10-14 14:42:47.493822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.906 qpair failed and we were unable to recover it.
00:29:06.906 [2024-10-14 14:42:47.494102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.906 [2024-10-14 14:42:47.494112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.906 qpair failed and we were unable to recover it.
00:29:06.907 [2024-10-14 14:42:47.494294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.907 [2024-10-14 14:42:47.494304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.907 qpair failed and we were unable to recover it.
00:29:06.907 [2024-10-14 14:42:47.494664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.907 [2024-10-14 14:42:47.494674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.907 qpair failed and we were unable to recover it.
00:29:06.907 [2024-10-14 14:42:47.494960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.907 [2024-10-14 14:42:47.494969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.907 qpair failed and we were unable to recover it.
00:29:06.907 [2024-10-14 14:42:47.495300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.907 [2024-10-14 14:42:47.495310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.907 qpair failed and we were unable to recover it.
00:29:06.907 [2024-10-14 14:42:47.495644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.907 [2024-10-14 14:42:47.495654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.907 qpair failed and we were unable to recover it.
00:29:06.907 [2024-10-14 14:42:47.495825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.907 [2024-10-14 14:42:47.495835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.907 qpair failed and we were unable to recover it.
00:29:06.907 [2024-10-14 14:42:47.496008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.907 [2024-10-14 14:42:47.496022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.907 qpair failed and we were unable to recover it.
00:29:06.907 [2024-10-14 14:42:47.496285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.907 [2024-10-14 14:42:47.496295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.907 qpair failed and we were unable to recover it.
00:29:06.907 [2024-10-14 14:42:47.496607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.907 [2024-10-14 14:42:47.496618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.907 qpair failed and we were unable to recover it.
00:29:06.907 [2024-10-14 14:42:47.496932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.907 [2024-10-14 14:42:47.496943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.907 qpair failed and we were unable to recover it.
00:29:06.907 [2024-10-14 14:42:47.497116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.907 [2024-10-14 14:42:47.497126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.907 qpair failed and we were unable to recover it.
00:29:06.907 [2024-10-14 14:42:47.497453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.907 [2024-10-14 14:42:47.497463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.907 qpair failed and we were unable to recover it.
00:29:06.907 [2024-10-14 14:42:47.497773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.907 [2024-10-14 14:42:47.497783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.907 qpair failed and we were unable to recover it.
00:29:06.907 [2024-10-14 14:42:47.498109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.907 [2024-10-14 14:42:47.498119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.907 qpair failed and we were unable to recover it.
00:29:06.907 [2024-10-14 14:42:47.498437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.907 [2024-10-14 14:42:47.498447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.907 qpair failed and we were unable to recover it.
00:29:06.907 [2024-10-14 14:42:47.498781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.907 [2024-10-14 14:42:47.498792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.907 qpair failed and we were unable to recover it.
00:29:06.907 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3577575 Killed "${NVMF_APP[@]}" "$@"
00:29:06.907 [2024-10-14 14:42:47.499130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.907 [2024-10-14 14:42:47.499141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.907 qpair failed and we were unable to recover it.
00:29:06.907 [2024-10-14 14:42:47.499474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.907 [2024-10-14 14:42:47.499485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.907 qpair failed and we were unable to recover it.
00:29:06.907 14:42:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:29:06.907 [2024-10-14 14:42:47.499817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.907 [2024-10-14 14:42:47.499828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.907 qpair failed and we were unable to recover it.
00:29:06.907 [2024-10-14 14:42:47.500051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.907 [2024-10-14 14:42:47.500061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.907 qpair failed and we were unable to recover it.
00:29:06.907 14:42:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:29:06.907 14:42:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:29:06.907 [2024-10-14 14:42:47.500431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.907 [2024-10-14 14:42:47.500441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.907 qpair failed and we were unable to recover it.
00:29:06.907 14:42:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:29:06.907 [2024-10-14 14:42:47.500747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.907 [2024-10-14 14:42:47.500757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.907 qpair failed and we were unable to recover it.
00:29:06.907 14:42:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:06.907 [2024-10-14 14:42:47.501043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.907 [2024-10-14 14:42:47.501053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.907 qpair failed and we were unable to recover it.
00:29:06.907 [2024-10-14 14:42:47.501358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.907 [2024-10-14 14:42:47.501368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.907 qpair failed and we were unable to recover it.
00:29:06.907 [2024-10-14 14:42:47.501690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.907 [2024-10-14 14:42:47.501701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.907 qpair failed and we were unable to recover it.
00:29:06.907 [2024-10-14 14:42:47.502014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.907 [2024-10-14 14:42:47.502025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.907 qpair failed and we were unable to recover it.
00:29:06.907 [2024-10-14 14:42:47.502363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.907 [2024-10-14 14:42:47.502374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.907 qpair failed and we were unable to recover it.
00:29:06.907 [2024-10-14 14:42:47.502679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.907 [2024-10-14 14:42:47.502690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.907 qpair failed and we were unable to recover it.
00:29:06.907 [2024-10-14 14:42:47.502974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.907 [2024-10-14 14:42:47.502985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.907 qpair failed and we were unable to recover it.
00:29:06.907 [2024-10-14 14:42:47.503288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.907 [2024-10-14 14:42:47.503299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.907 qpair failed and we were unable to recover it.
00:29:06.907 [2024-10-14 14:42:47.503579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.907 [2024-10-14 14:42:47.503589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.907 qpair failed and we were unable to recover it.
00:29:06.907 [2024-10-14 14:42:47.503878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.907 [2024-10-14 14:42:47.503889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.907 qpair failed and we were unable to recover it.
00:29:06.907 [2024-10-14 14:42:47.504189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.907 [2024-10-14 14:42:47.504199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.907 qpair failed and we were unable to recover it.
00:29:06.907 [2024-10-14 14:42:47.504513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.907 [2024-10-14 14:42:47.504523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.907 qpair failed and we were unable to recover it.
00:29:06.907 [2024-10-14 14:42:47.504851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.907 [2024-10-14 14:42:47.504862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.907 qpair failed and we were unable to recover it.
00:29:06.907 [2024-10-14 14:42:47.505170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.908 [2024-10-14 14:42:47.505181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.908 qpair failed and we were unable to recover it.
00:29:06.908 [2024-10-14 14:42:47.505492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.908 [2024-10-14 14:42:47.505503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.908 qpair failed and we were unable to recover it.
00:29:06.908 [2024-10-14 14:42:47.505803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.908 [2024-10-14 14:42:47.505814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.908 qpair failed and we were unable to recover it.
00:29:06.908 [2024-10-14 14:42:47.505989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.908 [2024-10-14 14:42:47.506001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.908 qpair failed and we were unable to recover it.
00:29:06.908 [2024-10-14 14:42:47.506188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.908 [2024-10-14 14:42:47.506200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.908 qpair failed and we were unable to recover it.
00:29:06.908 [2024-10-14 14:42:47.506557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.908 [2024-10-14 14:42:47.506567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.908 qpair failed and we were unable to recover it.
00:29:06.908 [2024-10-14 14:42:47.506878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.908 [2024-10-14 14:42:47.506889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.908 qpair failed and we were unable to recover it.
00:29:06.908 [2024-10-14 14:42:47.507209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.908 [2024-10-14 14:42:47.507220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.908 qpair failed and we were unable to recover it.
00:29:06.908 [2024-10-14 14:42:47.507528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.908 [2024-10-14 14:42:47.507539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.908 qpair failed and we were unable to recover it.
00:29:06.908 [2024-10-14 14:42:47.507847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.908 [2024-10-14 14:42:47.507859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.908 qpair failed and we were unable to recover it.
00:29:06.908 [2024-10-14 14:42:47.508173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.908 [2024-10-14 14:42:47.508184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.908 qpair failed and we were unable to recover it.
00:29:06.908 14:42:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # nvmfpid=3578479
00:29:06.908 [2024-10-14 14:42:47.508497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.908 [2024-10-14 14:42:47.508510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.908 qpair failed and we were unable to recover it.
00:29:06.908 14:42:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # waitforlisten 3578479
00:29:06.908 [2024-10-14 14:42:47.508698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.908 [2024-10-14 14:42:47.508710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.908 qpair failed and we were unable to recover it.
00:29:06.908 14:42:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:29:06.908 [2024-10-14 14:42:47.509003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.908 [2024-10-14 14:42:47.509015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.908 qpair failed and we were unable to recover it.
00:29:06.908 14:42:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 3578479 ']'
00:29:06.908 [2024-10-14 14:42:47.509312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.908 [2024-10-14 14:42:47.509325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.908 14:42:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:06.908 qpair failed and we were unable to recover it.
00:29:06.908 14:42:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100
00:29:06.908 [2024-10-14 14:42:47.509613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.908 [2024-10-14 14:42:47.509625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.908 qpair failed and we were unable to recover it.
00:29:06.908 14:42:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:06.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:06.908 [2024-10-14 14:42:47.509944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.908 [2024-10-14 14:42:47.509956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.908 qpair failed and we were unable to recover it.
00:29:06.908 14:42:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable
00:29:06.908 [2024-10-14 14:42:47.510168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.908 [2024-10-14 14:42:47.510180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.908 qpair failed and we were unable to recover it.
00:29:06.908 14:42:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:06.908 [2024-10-14 14:42:47.510413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.908 [2024-10-14 14:42:47.510425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.908 qpair failed and we were unable to recover it.
00:29:06.908 [2024-10-14 14:42:47.510705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.908 [2024-10-14 14:42:47.510716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.908 qpair failed and we were unable to recover it.
00:29:06.908 [2024-10-14 14:42:47.511022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.908 [2024-10-14 14:42:47.511033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.908 qpair failed and we were unable to recover it.
00:29:06.908 [2024-10-14 14:42:47.511352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.908 [2024-10-14 14:42:47.511363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.908 qpair failed and we were unable to recover it.
00:29:06.908 [2024-10-14 14:42:47.511678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.908 [2024-10-14 14:42:47.511689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.908 qpair failed and we were unable to recover it.
00:29:06.908 [2024-10-14 14:42:47.512018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.908 [2024-10-14 14:42:47.512029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.908 qpair failed and we were unable to recover it.
00:29:06.908 [2024-10-14 14:42:47.512332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.908 [2024-10-14 14:42:47.512344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.908 qpair failed and we were unable to recover it.
00:29:06.908 [2024-10-14 14:42:47.512679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.908 [2024-10-14 14:42:47.512690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.908 qpair failed and we were unable to recover it.
00:29:06.908 [2024-10-14 14:42:47.512876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.908 [2024-10-14 14:42:47.512887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.908 qpair failed and we were unable to recover it.
00:29:06.908 [2024-10-14 14:42:47.513139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.908 [2024-10-14 14:42:47.513151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.908 qpair failed and we were unable to recover it.
00:29:06.908 [2024-10-14 14:42:47.513470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.908 [2024-10-14 14:42:47.513482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.908 qpair failed and we were unable to recover it.
00:29:06.908 [2024-10-14 14:42:47.513805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.908 [2024-10-14 14:42:47.513816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.908 qpair failed and we were unable to recover it.
00:29:06.908 [2024-10-14 14:42:47.513997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.908 [2024-10-14 14:42:47.514009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.908 qpair failed and we were unable to recover it.
00:29:06.908 [2024-10-14 14:42:47.514354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.908 [2024-10-14 14:42:47.514366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.908 qpair failed and we were unable to recover it.
00:29:06.909 [2024-10-14 14:42:47.514681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.909 [2024-10-14 14:42:47.514694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.909 qpair failed and we were unable to recover it.
00:29:06.909 [2024-10-14 14:42:47.515018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.909 [2024-10-14 14:42:47.515029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.909 qpair failed and we were unable to recover it.
00:29:06.909 [2024-10-14 14:42:47.515272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.909 [2024-10-14 14:42:47.515284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.909 qpair failed and we were unable to recover it.
00:29:06.909 [2024-10-14 14:42:47.515587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.909 [2024-10-14 14:42:47.515599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.909 qpair failed and we were unable to recover it.
00:29:06.909 [2024-10-14 14:42:47.515906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.909 [2024-10-14 14:42:47.515919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.909 qpair failed and we were unable to recover it.
00:29:06.909 [2024-10-14 14:42:47.516306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.909 [2024-10-14 14:42:47.516318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.909 qpair failed and we were unable to recover it.
00:29:06.909 [2024-10-14 14:42:47.516628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.909 [2024-10-14 14:42:47.516639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.909 qpair failed and we were unable to recover it.
00:29:06.909 [2024-10-14 14:42:47.516808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.909 [2024-10-14 14:42:47.516819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.909 qpair failed and we were unable to recover it.
00:29:06.909 [2024-10-14 14:42:47.516982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.909 [2024-10-14 14:42:47.516993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.909 qpair failed and we were unable to recover it. 00:29:06.909 [2024-10-14 14:42:47.517330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.909 [2024-10-14 14:42:47.517342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.909 qpair failed and we were unable to recover it. 00:29:06.909 [2024-10-14 14:42:47.517645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.909 [2024-10-14 14:42:47.517656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.909 qpair failed and we were unable to recover it. 00:29:06.909 [2024-10-14 14:42:47.517828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.909 [2024-10-14 14:42:47.517841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.909 qpair failed and we were unable to recover it. 00:29:06.909 [2024-10-14 14:42:47.518165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.909 [2024-10-14 14:42:47.518176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.909 qpair failed and we were unable to recover it. 
00:29:06.909 [2024-10-14 14:42:47.518521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.909 [2024-10-14 14:42:47.518534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.909 qpair failed and we were unable to recover it. 00:29:06.909 [2024-10-14 14:42:47.518829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.909 [2024-10-14 14:42:47.518842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.909 qpair failed and we were unable to recover it. 00:29:06.909 [2024-10-14 14:42:47.519190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.909 [2024-10-14 14:42:47.519202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.909 qpair failed and we were unable to recover it. 00:29:06.909 [2024-10-14 14:42:47.519385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.909 [2024-10-14 14:42:47.519395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.909 qpair failed and we were unable to recover it. 00:29:06.909 [2024-10-14 14:42:47.519699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.909 [2024-10-14 14:42:47.519710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.909 qpair failed and we were unable to recover it. 
00:29:06.909 [2024-10-14 14:42:47.520017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.909 [2024-10-14 14:42:47.520029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.909 qpair failed and we were unable to recover it. 00:29:06.909 [2024-10-14 14:42:47.520146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.909 [2024-10-14 14:42:47.520156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.909 qpair failed and we were unable to recover it. 00:29:06.909 [2024-10-14 14:42:47.520482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.909 [2024-10-14 14:42:47.520493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.909 qpair failed and we were unable to recover it. 00:29:06.909 [2024-10-14 14:42:47.520657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.909 [2024-10-14 14:42:47.520669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.909 qpair failed and we were unable to recover it. 00:29:06.909 [2024-10-14 14:42:47.520951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.909 [2024-10-14 14:42:47.520962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.909 qpair failed and we were unable to recover it. 
00:29:06.909 [2024-10-14 14:42:47.521253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.909 [2024-10-14 14:42:47.521264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.909 qpair failed and we were unable to recover it. 00:29:06.909 [2024-10-14 14:42:47.521584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.909 [2024-10-14 14:42:47.521594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.909 qpair failed and we were unable to recover it. 00:29:06.909 [2024-10-14 14:42:47.521790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.909 [2024-10-14 14:42:47.521801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.909 qpair failed and we were unable to recover it. 00:29:06.909 [2024-10-14 14:42:47.522104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.909 [2024-10-14 14:42:47.522116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.909 qpair failed and we were unable to recover it. 00:29:06.909 [2024-10-14 14:42:47.522581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.909 [2024-10-14 14:42:47.522592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.909 qpair failed and we were unable to recover it. 
00:29:06.909 [2024-10-14 14:42:47.522875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.909 [2024-10-14 14:42:47.522886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.909 qpair failed and we were unable to recover it. 00:29:06.909 [2024-10-14 14:42:47.523194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.909 [2024-10-14 14:42:47.523205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.909 qpair failed and we were unable to recover it. 00:29:06.909 [2024-10-14 14:42:47.523473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.909 [2024-10-14 14:42:47.523483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.909 qpair failed and we were unable to recover it. 00:29:06.909 [2024-10-14 14:42:47.523830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.909 [2024-10-14 14:42:47.523841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.909 qpair failed and we were unable to recover it. 00:29:06.909 [2024-10-14 14:42:47.524146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.909 [2024-10-14 14:42:47.524157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.909 qpair failed and we were unable to recover it. 
00:29:06.909 [2024-10-14 14:42:47.524469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.909 [2024-10-14 14:42:47.524479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.909 qpair failed and we were unable to recover it. 00:29:06.909 [2024-10-14 14:42:47.524794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.909 [2024-10-14 14:42:47.524804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.909 qpair failed and we were unable to recover it. 00:29:06.909 [2024-10-14 14:42:47.524983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.909 [2024-10-14 14:42:47.524994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.909 qpair failed and we were unable to recover it. 00:29:06.909 [2024-10-14 14:42:47.525313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.909 [2024-10-14 14:42:47.525324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.909 qpair failed and we were unable to recover it. 00:29:06.909 [2024-10-14 14:42:47.525689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.909 [2024-10-14 14:42:47.525700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.909 qpair failed and we were unable to recover it. 
00:29:06.909 [2024-10-14 14:42:47.526002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.909 [2024-10-14 14:42:47.526013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.909 qpair failed and we were unable to recover it. 00:29:06.910 [2024-10-14 14:42:47.526313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.910 [2024-10-14 14:42:47.526325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.910 qpair failed and we were unable to recover it. 00:29:06.910 [2024-10-14 14:42:47.526637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.910 [2024-10-14 14:42:47.526648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.910 qpair failed and we were unable to recover it. 00:29:06.910 [2024-10-14 14:42:47.526958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.910 [2024-10-14 14:42:47.526969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.910 qpair failed and we were unable to recover it. 00:29:06.910 [2024-10-14 14:42:47.527313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.910 [2024-10-14 14:42:47.527324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.910 qpair failed and we were unable to recover it. 
00:29:06.910 [2024-10-14 14:42:47.527649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.910 [2024-10-14 14:42:47.527659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.910 qpair failed and we were unable to recover it. 00:29:06.910 [2024-10-14 14:42:47.527943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.910 [2024-10-14 14:42:47.527953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.910 qpair failed and we were unable to recover it. 00:29:06.910 [2024-10-14 14:42:47.528145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.910 [2024-10-14 14:42:47.528162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.910 qpair failed and we were unable to recover it. 00:29:06.910 [2024-10-14 14:42:47.528473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.910 [2024-10-14 14:42:47.528484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.910 qpair failed and we were unable to recover it. 00:29:06.910 [2024-10-14 14:42:47.528772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.910 [2024-10-14 14:42:47.528781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.910 qpair failed and we were unable to recover it. 
00:29:06.910 [2024-10-14 14:42:47.529053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.910 [2024-10-14 14:42:47.529068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.910 qpair failed and we were unable to recover it. 00:29:06.910 [2024-10-14 14:42:47.529370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.910 [2024-10-14 14:42:47.529380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.910 qpair failed and we were unable to recover it. 00:29:06.910 [2024-10-14 14:42:47.529697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.910 [2024-10-14 14:42:47.529707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.910 qpair failed and we were unable to recover it. 00:29:06.910 [2024-10-14 14:42:47.530040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.910 [2024-10-14 14:42:47.530051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.910 qpair failed and we were unable to recover it. 00:29:06.910 [2024-10-14 14:42:47.530343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.910 [2024-10-14 14:42:47.530355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.910 qpair failed and we were unable to recover it. 
00:29:06.910 [2024-10-14 14:42:47.530528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.910 [2024-10-14 14:42:47.530540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.910 qpair failed and we were unable to recover it. 00:29:06.910 [2024-10-14 14:42:47.530689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.910 [2024-10-14 14:42:47.530700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.910 qpair failed and we were unable to recover it. 00:29:06.910 [2024-10-14 14:42:47.530835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.910 [2024-10-14 14:42:47.530846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.910 qpair failed and we were unable to recover it. 00:29:06.910 [2024-10-14 14:42:47.531017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.910 [2024-10-14 14:42:47.531027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.910 qpair failed and we were unable to recover it. 00:29:06.910 [2024-10-14 14:42:47.531329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.910 [2024-10-14 14:42:47.531340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.910 qpair failed and we were unable to recover it. 
00:29:06.910 [2024-10-14 14:42:47.531726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.910 [2024-10-14 14:42:47.531736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.910 qpair failed and we were unable to recover it. 00:29:06.910 [2024-10-14 14:42:47.532025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.910 [2024-10-14 14:42:47.532035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.910 qpair failed and we were unable to recover it. 00:29:06.910 [2024-10-14 14:42:47.532320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.910 [2024-10-14 14:42:47.532330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.910 qpair failed and we were unable to recover it. 00:29:06.910 [2024-10-14 14:42:47.532619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.910 [2024-10-14 14:42:47.532629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.910 qpair failed and we were unable to recover it. 00:29:06.910 [2024-10-14 14:42:47.532911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.910 [2024-10-14 14:42:47.532922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.910 qpair failed and we were unable to recover it. 
00:29:06.910 [2024-10-14 14:42:47.533112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.910 [2024-10-14 14:42:47.533123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.910 qpair failed and we were unable to recover it. 00:29:06.910 [2024-10-14 14:42:47.533414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.910 [2024-10-14 14:42:47.533425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.910 qpair failed and we were unable to recover it. 00:29:06.910 [2024-10-14 14:42:47.533726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.910 [2024-10-14 14:42:47.533737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.910 qpair failed and we were unable to recover it. 00:29:06.910 [2024-10-14 14:42:47.534039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.910 [2024-10-14 14:42:47.534050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.910 qpair failed and we were unable to recover it. 00:29:06.910 [2024-10-14 14:42:47.534286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.910 [2024-10-14 14:42:47.534297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.910 qpair failed and we were unable to recover it. 
00:29:06.910 [2024-10-14 14:42:47.534570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.910 [2024-10-14 14:42:47.534580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.910 qpair failed and we were unable to recover it. 00:29:06.910 [2024-10-14 14:42:47.534881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.910 [2024-10-14 14:42:47.534891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.910 qpair failed and we were unable to recover it. 00:29:06.910 [2024-10-14 14:42:47.535192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.910 [2024-10-14 14:42:47.535203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.910 qpair failed and we were unable to recover it. 00:29:06.910 [2024-10-14 14:42:47.535387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.910 [2024-10-14 14:42:47.535397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.910 qpair failed and we were unable to recover it. 00:29:06.910 [2024-10-14 14:42:47.535758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.910 [2024-10-14 14:42:47.535768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.910 qpair failed and we were unable to recover it. 
00:29:06.910 [2024-10-14 14:42:47.536067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.910 [2024-10-14 14:42:47.536078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.910 qpair failed and we were unable to recover it. 00:29:06.910 [2024-10-14 14:42:47.536404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.910 [2024-10-14 14:42:47.536414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.910 qpair failed and we were unable to recover it. 00:29:06.910 [2024-10-14 14:42:47.536713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.910 [2024-10-14 14:42:47.536723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.910 qpair failed and we were unable to recover it. 00:29:06.910 [2024-10-14 14:42:47.537025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.910 [2024-10-14 14:42:47.537035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.910 qpair failed and we were unable to recover it. 00:29:06.910 [2024-10-14 14:42:47.537448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.910 [2024-10-14 14:42:47.537458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.910 qpair failed and we were unable to recover it. 
00:29:06.910 [2024-10-14 14:42:47.537769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.910 [2024-10-14 14:42:47.537780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.910 qpair failed and we were unable to recover it. 00:29:06.910 [2024-10-14 14:42:47.538106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.910 [2024-10-14 14:42:47.538117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.910 qpair failed and we were unable to recover it. 00:29:06.911 [2024-10-14 14:42:47.538429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.911 [2024-10-14 14:42:47.538440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.911 qpair failed and we were unable to recover it. 00:29:06.911 [2024-10-14 14:42:47.538764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.911 [2024-10-14 14:42:47.538776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.911 qpair failed and we were unable to recover it. 00:29:06.911 [2024-10-14 14:42:47.539185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.911 [2024-10-14 14:42:47.539196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.911 qpair failed and we were unable to recover it. 
00:29:06.911 [2024-10-14 14:42:47.539535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.911 [2024-10-14 14:42:47.539546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.911 qpair failed and we were unable to recover it. 00:29:06.911 [2024-10-14 14:42:47.539857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.911 [2024-10-14 14:42:47.539867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.911 qpair failed and we were unable to recover it. 00:29:06.911 [2024-10-14 14:42:47.540177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.911 [2024-10-14 14:42:47.540187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.911 qpair failed and we were unable to recover it. 00:29:06.911 [2024-10-14 14:42:47.540480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.911 [2024-10-14 14:42:47.540490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.911 qpair failed and we were unable to recover it. 00:29:06.911 [2024-10-14 14:42:47.540784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.911 [2024-10-14 14:42:47.540794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.911 qpair failed and we were unable to recover it. 
00:29:06.911 [2024-10-14 14:42:47.541002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.911 [2024-10-14 14:42:47.541012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.911 qpair failed and we were unable to recover it. 00:29:06.911 [2024-10-14 14:42:47.541363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.911 [2024-10-14 14:42:47.541373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.911 qpair failed and we were unable to recover it. 00:29:06.911 [2024-10-14 14:42:47.541685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.911 [2024-10-14 14:42:47.541695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.911 qpair failed and we were unable to recover it. 00:29:06.911 [2024-10-14 14:42:47.541984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.911 [2024-10-14 14:42:47.541995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.911 qpair failed and we were unable to recover it. 00:29:06.911 [2024-10-14 14:42:47.542278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.911 [2024-10-14 14:42:47.542289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.911 qpair failed and we were unable to recover it. 
00:29:06.911 [2024-10-14 14:42:47.542599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.911 [2024-10-14 14:42:47.542609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:06.911 qpair failed and we were unable to recover it.
[... the same posix_sock_create / nvme_tcp_qpair_connect_sock error pair for tqpair=0x8de550 (addr=10.0.0.2, port=4420), each ending in "qpair failed and we were unable to recover it.", repeats continuously from 14:42:47.542907 through 14:42:47.561610 ...]
00:29:06.913 [2024-10-14 14:42:47.561684] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization...
00:29:06.913 [2024-10-14 14:42:47.561736] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
[... the identical connect() failed, errno = 111 / sock connection error pair for tqpair=0x8de550 (addr=10.0.0.2, port=4420) continues repeating from 14:42:47.561953 through 14:42:47.576744, each qpair failed and unrecovered ...]
00:29:06.914 [2024-10-14 14:42:47.576908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.914 [2024-10-14 14:42:47.576919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.914 qpair failed and we were unable to recover it. 00:29:06.914 [2024-10-14 14:42:47.577097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.914 [2024-10-14 14:42:47.577108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.914 qpair failed and we were unable to recover it. 00:29:06.914 [2024-10-14 14:42:47.577407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.914 [2024-10-14 14:42:47.577419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.914 qpair failed and we were unable to recover it. 00:29:06.914 [2024-10-14 14:42:47.577586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.914 [2024-10-14 14:42:47.577598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.914 qpair failed and we were unable to recover it. 00:29:06.914 [2024-10-14 14:42:47.577926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.914 [2024-10-14 14:42:47.577938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.914 qpair failed and we were unable to recover it. 
00:29:06.914 [2024-10-14 14:42:47.578271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.914 [2024-10-14 14:42:47.578283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.914 qpair failed and we were unable to recover it. 00:29:06.914 [2024-10-14 14:42:47.578586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.914 [2024-10-14 14:42:47.578597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.914 qpair failed and we were unable to recover it. 00:29:06.914 [2024-10-14 14:42:47.578906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.914 [2024-10-14 14:42:47.578917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.914 qpair failed and we were unable to recover it. 00:29:06.914 [2024-10-14 14:42:47.579224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.914 [2024-10-14 14:42:47.579235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.914 qpair failed and we were unable to recover it. 00:29:06.914 [2024-10-14 14:42:47.579532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.914 [2024-10-14 14:42:47.579544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.914 qpair failed and we were unable to recover it. 
00:29:06.914 [2024-10-14 14:42:47.579851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.914 [2024-10-14 14:42:47.579863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.914 qpair failed and we were unable to recover it. 00:29:06.914 [2024-10-14 14:42:47.580141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.914 [2024-10-14 14:42:47.580153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.914 qpair failed and we were unable to recover it. 00:29:06.914 [2024-10-14 14:42:47.580458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.914 [2024-10-14 14:42:47.580469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.914 qpair failed and we were unable to recover it. 00:29:06.914 [2024-10-14 14:42:47.580800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.914 [2024-10-14 14:42:47.580812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.914 qpair failed and we were unable to recover it. 00:29:06.914 [2024-10-14 14:42:47.581126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.914 [2024-10-14 14:42:47.581137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.914 qpair failed and we were unable to recover it. 
00:29:06.914 [2024-10-14 14:42:47.581330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.914 [2024-10-14 14:42:47.581341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.914 qpair failed and we were unable to recover it. 00:29:06.914 [2024-10-14 14:42:47.581667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.914 [2024-10-14 14:42:47.581678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.914 qpair failed and we were unable to recover it. 00:29:06.914 [2024-10-14 14:42:47.581987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.914 [2024-10-14 14:42:47.581998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.914 qpair failed and we were unable to recover it. 00:29:06.914 [2024-10-14 14:42:47.582275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.914 [2024-10-14 14:42:47.582287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.914 qpair failed and we were unable to recover it. 00:29:06.914 [2024-10-14 14:42:47.582621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.914 [2024-10-14 14:42:47.582632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.914 qpair failed and we were unable to recover it. 
00:29:06.914 [2024-10-14 14:42:47.582941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.914 [2024-10-14 14:42:47.582952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.914 qpair failed and we were unable to recover it. 00:29:06.914 [2024-10-14 14:42:47.583270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.914 [2024-10-14 14:42:47.583281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.914 qpair failed and we were unable to recover it. 00:29:06.914 [2024-10-14 14:42:47.583584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.914 [2024-10-14 14:42:47.583594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.914 qpair failed and we were unable to recover it. 00:29:06.914 [2024-10-14 14:42:47.583881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.914 [2024-10-14 14:42:47.583891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.914 qpair failed and we were unable to recover it. 00:29:06.914 [2024-10-14 14:42:47.584205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.914 [2024-10-14 14:42:47.584216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.914 qpair failed and we were unable to recover it. 
00:29:06.914 [2024-10-14 14:42:47.584509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.914 [2024-10-14 14:42:47.584519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.914 qpair failed and we were unable to recover it. 00:29:06.914 [2024-10-14 14:42:47.584839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.914 [2024-10-14 14:42:47.584849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.915 qpair failed and we were unable to recover it. 00:29:06.915 [2024-10-14 14:42:47.585225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.915 [2024-10-14 14:42:47.585236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.915 qpair failed and we were unable to recover it. 00:29:06.915 [2024-10-14 14:42:47.585566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.915 [2024-10-14 14:42:47.585576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.915 qpair failed and we were unable to recover it. 00:29:06.915 [2024-10-14 14:42:47.585904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.915 [2024-10-14 14:42:47.585915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.915 qpair failed and we were unable to recover it. 
00:29:06.915 [2024-10-14 14:42:47.586219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.915 [2024-10-14 14:42:47.586230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.915 qpair failed and we were unable to recover it. 00:29:06.915 [2024-10-14 14:42:47.586394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.915 [2024-10-14 14:42:47.586406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.915 qpair failed and we were unable to recover it. 00:29:06.915 [2024-10-14 14:42:47.586790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.915 [2024-10-14 14:42:47.586801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.915 qpair failed and we were unable to recover it. 00:29:06.915 [2024-10-14 14:42:47.587104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.915 [2024-10-14 14:42:47.587116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.915 qpair failed and we were unable to recover it. 00:29:06.915 [2024-10-14 14:42:47.587403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.915 [2024-10-14 14:42:47.587413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.915 qpair failed and we were unable to recover it. 
00:29:06.915 [2024-10-14 14:42:47.587726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.915 [2024-10-14 14:42:47.587736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.915 qpair failed and we were unable to recover it. 00:29:06.915 [2024-10-14 14:42:47.588075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.915 [2024-10-14 14:42:47.588088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.915 qpair failed and we were unable to recover it. 00:29:06.915 [2024-10-14 14:42:47.588400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.915 [2024-10-14 14:42:47.588411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.915 qpair failed and we were unable to recover it. 00:29:06.915 [2024-10-14 14:42:47.588733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.915 [2024-10-14 14:42:47.588743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.915 qpair failed and we were unable to recover it. 00:29:06.915 [2024-10-14 14:42:47.588896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.915 [2024-10-14 14:42:47.588906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.915 qpair failed and we were unable to recover it. 
00:29:06.915 [2024-10-14 14:42:47.589076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.915 [2024-10-14 14:42:47.589088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.915 qpair failed and we were unable to recover it. 00:29:06.915 [2024-10-14 14:42:47.589268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.915 [2024-10-14 14:42:47.589278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.915 qpair failed and we were unable to recover it. 00:29:06.915 [2024-10-14 14:42:47.589566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.915 [2024-10-14 14:42:47.589577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.915 qpair failed and we were unable to recover it. 00:29:06.915 [2024-10-14 14:42:47.589873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.915 [2024-10-14 14:42:47.589884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.915 qpair failed and we were unable to recover it. 00:29:06.915 [2024-10-14 14:42:47.590200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.915 [2024-10-14 14:42:47.590211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.915 qpair failed and we were unable to recover it. 
00:29:06.915 [2024-10-14 14:42:47.590499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.915 [2024-10-14 14:42:47.590509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.915 qpair failed and we were unable to recover it. 00:29:06.915 [2024-10-14 14:42:47.590814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.915 [2024-10-14 14:42:47.590824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.915 qpair failed and we were unable to recover it. 00:29:06.915 [2024-10-14 14:42:47.591139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.915 [2024-10-14 14:42:47.591150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.915 qpair failed and we were unable to recover it. 00:29:06.915 [2024-10-14 14:42:47.591492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.915 [2024-10-14 14:42:47.591502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.915 qpair failed and we were unable to recover it. 00:29:06.915 [2024-10-14 14:42:47.591800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.915 [2024-10-14 14:42:47.591811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.915 qpair failed and we were unable to recover it. 
00:29:06.915 [2024-10-14 14:42:47.592123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.915 [2024-10-14 14:42:47.592134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.915 qpair failed and we were unable to recover it. 00:29:06.915 [2024-10-14 14:42:47.592298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.915 [2024-10-14 14:42:47.592308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.915 qpair failed and we were unable to recover it. 00:29:06.915 [2024-10-14 14:42:47.592592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.915 [2024-10-14 14:42:47.592603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.915 qpair failed and we were unable to recover it. 00:29:06.915 [2024-10-14 14:42:47.592941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.915 [2024-10-14 14:42:47.592951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.915 qpair failed and we were unable to recover it. 00:29:06.915 [2024-10-14 14:42:47.593281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.915 [2024-10-14 14:42:47.593293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.915 qpair failed and we were unable to recover it. 
00:29:06.915 [2024-10-14 14:42:47.593587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.915 [2024-10-14 14:42:47.593598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.915 qpair failed and we were unable to recover it. 00:29:06.915 [2024-10-14 14:42:47.593898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.915 [2024-10-14 14:42:47.593908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.915 qpair failed and we were unable to recover it. 00:29:06.915 [2024-10-14 14:42:47.594083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.915 [2024-10-14 14:42:47.594093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.915 qpair failed and we were unable to recover it. 00:29:06.915 [2024-10-14 14:42:47.594265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.915 [2024-10-14 14:42:47.594276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.915 qpair failed and we were unable to recover it. 00:29:06.915 [2024-10-14 14:42:47.594607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.915 [2024-10-14 14:42:47.594618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.915 qpair failed and we were unable to recover it. 
00:29:06.915 [2024-10-14 14:42:47.594937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.915 [2024-10-14 14:42:47.594949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.915 qpair failed and we were unable to recover it. 00:29:06.915 [2024-10-14 14:42:47.595296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.915 [2024-10-14 14:42:47.595307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.915 qpair failed and we were unable to recover it. 00:29:06.915 [2024-10-14 14:42:47.595628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.915 [2024-10-14 14:42:47.595639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.915 qpair failed and we were unable to recover it. 00:29:06.915 [2024-10-14 14:42:47.595934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.915 [2024-10-14 14:42:47.595944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.915 qpair failed and we were unable to recover it. 00:29:06.915 [2024-10-14 14:42:47.596169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.915 [2024-10-14 14:42:47.596180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.915 qpair failed and we were unable to recover it. 
00:29:06.915 [2024-10-14 14:42:47.596483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.915 [2024-10-14 14:42:47.596493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.915 qpair failed and we were unable to recover it. 00:29:06.915 [2024-10-14 14:42:47.596621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.915 [2024-10-14 14:42:47.596630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.915 qpair failed and we were unable to recover it. 00:29:06.915 [2024-10-14 14:42:47.597125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.916 [2024-10-14 14:42:47.597214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe638000b90 with addr=10.0.0.2, port=4420 00:29:06.916 qpair failed and we were unable to recover it. 00:29:06.916 [2024-10-14 14:42:47.597611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.916 [2024-10-14 14:42:47.597649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe638000b90 with addr=10.0.0.2, port=4420 00:29:06.916 qpair failed and we were unable to recover it. 00:29:06.916 [2024-10-14 14:42:47.598075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.916 [2024-10-14 14:42:47.598107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe638000b90 with addr=10.0.0.2, port=4420 00:29:06.916 qpair failed and we were unable to recover it. 
00:29:06.916 [2024-10-14 14:42:47.598404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.916 [2024-10-14 14:42:47.598416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.916 qpair failed and we were unable to recover it. 00:29:06.916 [2024-10-14 14:42:47.598730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.916 [2024-10-14 14:42:47.598740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.916 qpair failed and we were unable to recover it. 00:29:06.916 [2024-10-14 14:42:47.599072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.916 [2024-10-14 14:42:47.599085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.916 qpair failed and we were unable to recover it. 00:29:06.916 [2024-10-14 14:42:47.599394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.916 [2024-10-14 14:42:47.599404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.916 qpair failed and we were unable to recover it. 00:29:06.916 [2024-10-14 14:42:47.599806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.916 [2024-10-14 14:42:47.599816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.916 qpair failed and we were unable to recover it. 
00:29:06.916 [2024-10-14 14:42:47.600122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.916 [2024-10-14 14:42:47.600133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.916 qpair failed and we were unable to recover it. 00:29:06.916 [2024-10-14 14:42:47.600457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.916 [2024-10-14 14:42:47.600468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.916 qpair failed and we were unable to recover it. 00:29:06.916 [2024-10-14 14:42:47.600756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.916 [2024-10-14 14:42:47.600769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.916 qpair failed and we were unable to recover it. 00:29:06.916 [2024-10-14 14:42:47.601077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.916 [2024-10-14 14:42:47.601088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.916 qpair failed and we were unable to recover it. 00:29:06.916 [2024-10-14 14:42:47.601279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.916 [2024-10-14 14:42:47.601289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:06.916 qpair failed and we were unable to recover it. 
00:29:07.198 [2024-10-14 14:42:47.632891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.198 [2024-10-14 14:42:47.632902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.198 qpair failed and we were unable to recover it.
00:29:07.198 [2024-10-14 14:42:47.632924] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8dc0f0 (9): Bad file descriptor
00:29:07.198 [2024-10-14 14:42:47.633558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.198 [2024-10-14 14:42:47.633648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe638000b90 with addr=10.0.0.2, port=4420
00:29:07.198 qpair failed and we were unable to recover it.
00:29:07.198 Read completed with error (sct=0, sc=8)
00:29:07.198 starting I/O failed
00:29:07.198 Read completed with error (sct=0, sc=8)
00:29:07.198 starting I/O failed
00:29:07.198 Read completed with error (sct=0, sc=8)
00:29:07.198 starting I/O failed
00:29:07.198 Read completed with error (sct=0, sc=8)
00:29:07.198 starting I/O failed
00:29:07.198 Read completed with error (sct=0, sc=8)
00:29:07.198 starting I/O failed
00:29:07.198 Read completed with error (sct=0, sc=8)
00:29:07.198 starting I/O failed
00:29:07.198 Read completed with error (sct=0, sc=8)
00:29:07.198 starting I/O failed
00:29:07.198 Read completed with error (sct=0, sc=8)
00:29:07.198 starting I/O failed
00:29:07.198 Read completed with error (sct=0, sc=8)
00:29:07.198 starting I/O failed
00:29:07.198 Read completed with error (sct=0, sc=8)
00:29:07.198 starting I/O failed
00:29:07.198 Read completed with error (sct=0, sc=8)
00:29:07.198 starting I/O failed
00:29:07.198 Read completed with error (sct=0, sc=8)
00:29:07.198 starting I/O failed
00:29:07.198 Read completed with error (sct=0, sc=8)
00:29:07.198 starting I/O failed
00:29:07.198 Read completed with error (sct=0, sc=8)
00:29:07.198 starting I/O failed
00:29:07.198 Read completed with error (sct=0, sc=8)
00:29:07.198 starting I/O failed
00:29:07.198 Write completed with error (sct=0, sc=8)
00:29:07.198 starting I/O failed
00:29:07.198 Read completed with error (sct=0, sc=8)
00:29:07.198 starting I/O failed
00:29:07.198 Write completed with error (sct=0, sc=8)
00:29:07.198 starting I/O failed
00:29:07.198 Read completed with error (sct=0, sc=8)
00:29:07.198 starting I/O failed
00:29:07.198 Write completed with error (sct=0, sc=8)
00:29:07.198 starting I/O failed
00:29:07.198 Read completed with error (sct=0, sc=8)
00:29:07.198 starting I/O failed
00:29:07.198 Write completed with error (sct=0, sc=8)
00:29:07.198 starting I/O failed
00:29:07.198 Read completed with error (sct=0, sc=8)
00:29:07.198 starting I/O failed
00:29:07.198 Write completed with error (sct=0, sc=8)
00:29:07.198 starting I/O failed
00:29:07.198 Read completed with error (sct=0, sc=8)
00:29:07.198 starting I/O failed
00:29:07.198 Write completed with error (sct=0, sc=8)
00:29:07.198 starting I/O failed
00:29:07.198 Read completed with error (sct=0, sc=8)
00:29:07.198 starting I/O failed
00:29:07.198 Write completed with error (sct=0, sc=8)
00:29:07.198 starting I/O failed
00:29:07.198 Write completed with error (sct=0, sc=8)
00:29:07.198 starting I/O failed
00:29:07.198 Write completed with error (sct=0, sc=8)
00:29:07.198 starting I/O failed
00:29:07.198 Write completed with error (sct=0, sc=8)
00:29:07.198 starting I/O failed
00:29:07.198 Read completed with error (sct=0, sc=8)
00:29:07.198 starting I/O failed
00:29:07.198 [2024-10-14 14:42:47.633892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:07.198 [2024-10-14 14:42:47.634323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.198 [2024-10-14 14:42:47.634356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.198 qpair failed and we were unable to recover it.
00:29:07.198 [2024-10-14 14:42:47.634661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.198 [2024-10-14 14:42:47.634670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.198 qpair failed and we were unable to recover it.
00:29:07.198 [2024-10-14 14:42:47.634988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.198 [2024-10-14 14:42:47.634996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.198 qpair failed and we were unable to recover it.
00:29:07.198 [2024-10-14 14:42:47.635304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.198 [2024-10-14 14:42:47.635313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.198 qpair failed and we were unable to recover it.
00:29:07.198 [2024-10-14 14:42:47.635649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.198 [2024-10-14 14:42:47.635656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.198 qpair failed and we were unable to recover it.
00:29:07.198 [2024-10-14 14:42:47.635831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.198 [2024-10-14 14:42:47.635839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.198 qpair failed and we were unable to recover it.
00:29:07.198 [2024-10-14 14:42:47.636098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.198 [2024-10-14 14:42:47.636106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.198 qpair failed and we were unable to recover it. 00:29:07.198 [2024-10-14 14:42:47.636376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.198 [2024-10-14 14:42:47.636384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.198 qpair failed and we were unable to recover it. 00:29:07.198 [2024-10-14 14:42:47.636694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.198 [2024-10-14 14:42:47.636702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.198 qpair failed and we were unable to recover it. 00:29:07.198 [2024-10-14 14:42:47.636916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.198 [2024-10-14 14:42:47.636924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.198 qpair failed and we were unable to recover it. 00:29:07.198 [2024-10-14 14:42:47.637250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.199 [2024-10-14 14:42:47.637258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.199 qpair failed and we were unable to recover it. 
00:29:07.199 [2024-10-14 14:42:47.637465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.199 [2024-10-14 14:42:47.637473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.199 qpair failed and we were unable to recover it. 00:29:07.199 [2024-10-14 14:42:47.637737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.199 [2024-10-14 14:42:47.637744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.199 qpair failed and we were unable to recover it. 00:29:07.199 [2024-10-14 14:42:47.637916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.199 [2024-10-14 14:42:47.637924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.199 qpair failed and we were unable to recover it. 00:29:07.199 [2024-10-14 14:42:47.638303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.199 [2024-10-14 14:42:47.638310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.199 qpair failed and we were unable to recover it. 00:29:07.199 [2024-10-14 14:42:47.638481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.199 [2024-10-14 14:42:47.638490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.199 qpair failed and we were unable to recover it. 
00:29:07.199 [2024-10-14 14:42:47.638767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.199 [2024-10-14 14:42:47.638775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.199 qpair failed and we were unable to recover it. 00:29:07.199 [2024-10-14 14:42:47.638832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.199 [2024-10-14 14:42:47.638839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.199 qpair failed and we were unable to recover it. 00:29:07.199 [2024-10-14 14:42:47.639029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.199 [2024-10-14 14:42:47.639039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.199 qpair failed and we were unable to recover it. 00:29:07.199 [2024-10-14 14:42:47.639341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.199 [2024-10-14 14:42:47.639349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.199 qpair failed and we were unable to recover it. 00:29:07.199 [2024-10-14 14:42:47.639672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.199 [2024-10-14 14:42:47.639679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.199 qpair failed and we were unable to recover it. 
00:29:07.199 [2024-10-14 14:42:47.640010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.199 [2024-10-14 14:42:47.640017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.199 qpair failed and we were unable to recover it. 00:29:07.199 [2024-10-14 14:42:47.640200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.199 [2024-10-14 14:42:47.640208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.199 qpair failed and we were unable to recover it. 00:29:07.199 [2024-10-14 14:42:47.640573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.199 [2024-10-14 14:42:47.640581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.199 qpair failed and we were unable to recover it. 00:29:07.199 [2024-10-14 14:42:47.640875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.199 [2024-10-14 14:42:47.640882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.199 qpair failed and we were unable to recover it. 00:29:07.199 [2024-10-14 14:42:47.641209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.199 [2024-10-14 14:42:47.641217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.199 qpair failed and we were unable to recover it. 
00:29:07.199 [2024-10-14 14:42:47.641597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.199 [2024-10-14 14:42:47.641605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.199 qpair failed and we were unable to recover it. 00:29:07.199 [2024-10-14 14:42:47.641766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.199 [2024-10-14 14:42:47.641774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.199 qpair failed and we were unable to recover it. 00:29:07.199 [2024-10-14 14:42:47.642087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.199 [2024-10-14 14:42:47.642095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.199 qpair failed and we were unable to recover it. 00:29:07.199 [2024-10-14 14:42:47.642274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.199 [2024-10-14 14:42:47.642282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.199 qpair failed and we were unable to recover it. 00:29:07.199 [2024-10-14 14:42:47.642608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.199 [2024-10-14 14:42:47.642616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.199 qpair failed and we were unable to recover it. 
00:29:07.199 [2024-10-14 14:42:47.642806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.199 [2024-10-14 14:42:47.642813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.199 qpair failed and we were unable to recover it. 00:29:07.199 [2024-10-14 14:42:47.643043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.199 [2024-10-14 14:42:47.643051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.199 qpair failed and we were unable to recover it. 00:29:07.199 [2024-10-14 14:42:47.643261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.199 [2024-10-14 14:42:47.643270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.199 qpair failed and we were unable to recover it. 00:29:07.199 [2024-10-14 14:42:47.643663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.199 [2024-10-14 14:42:47.643672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.199 qpair failed and we were unable to recover it. 00:29:07.199 [2024-10-14 14:42:47.643981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.199 [2024-10-14 14:42:47.643988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.199 qpair failed and we were unable to recover it. 
00:29:07.199 [2024-10-14 14:42:47.644284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.199 [2024-10-14 14:42:47.644292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.199 qpair failed and we were unable to recover it. 00:29:07.199 [2024-10-14 14:42:47.644639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.199 [2024-10-14 14:42:47.644645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.199 qpair failed and we were unable to recover it. 00:29:07.199 [2024-10-14 14:42:47.644958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.199 [2024-10-14 14:42:47.644965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.199 qpair failed and we were unable to recover it. 00:29:07.199 [2024-10-14 14:42:47.645281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.199 [2024-10-14 14:42:47.645290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.199 qpair failed and we were unable to recover it. 00:29:07.199 [2024-10-14 14:42:47.645622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.199 [2024-10-14 14:42:47.645630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.199 qpair failed and we were unable to recover it. 
00:29:07.199 [2024-10-14 14:42:47.645807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.199 [2024-10-14 14:42:47.645815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.199 qpair failed and we were unable to recover it. 00:29:07.199 [2024-10-14 14:42:47.646007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.199 [2024-10-14 14:42:47.646015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.199 qpair failed and we were unable to recover it. 00:29:07.199 [2024-10-14 14:42:47.646327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.199 [2024-10-14 14:42:47.646335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.199 qpair failed and we were unable to recover it. 00:29:07.199 [2024-10-14 14:42:47.646376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.199 [2024-10-14 14:42:47.646383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.199 qpair failed and we were unable to recover it. 00:29:07.199 [2024-10-14 14:42:47.646569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.199 [2024-10-14 14:42:47.646576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.199 qpair failed and we were unable to recover it. 
00:29:07.199 [2024-10-14 14:42:47.646939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.199 [2024-10-14 14:42:47.646946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.199 qpair failed and we were unable to recover it. 00:29:07.199 [2024-10-14 14:42:47.647117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.199 [2024-10-14 14:42:47.647126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.199 qpair failed and we were unable to recover it. 00:29:07.199 [2024-10-14 14:42:47.647371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.199 [2024-10-14 14:42:47.647378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.199 qpair failed and we were unable to recover it. 00:29:07.199 [2024-10-14 14:42:47.647703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.199 [2024-10-14 14:42:47.647711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.199 qpair failed and we were unable to recover it. 00:29:07.199 [2024-10-14 14:42:47.648066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.199 [2024-10-14 14:42:47.648073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.199 qpair failed and we were unable to recover it. 
00:29:07.200 [2024-10-14 14:42:47.648282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.200 [2024-10-14 14:42:47.648289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.200 qpair failed and we were unable to recover it. 00:29:07.200 [2024-10-14 14:42:47.648460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.200 [2024-10-14 14:42:47.648466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.200 qpair failed and we were unable to recover it. 00:29:07.200 [2024-10-14 14:42:47.648629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.200 [2024-10-14 14:42:47.648637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.200 qpair failed and we were unable to recover it. 00:29:07.200 [2024-10-14 14:42:47.648864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.200 [2024-10-14 14:42:47.648871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.200 qpair failed and we were unable to recover it. 00:29:07.200 [2024-10-14 14:42:47.649195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.200 [2024-10-14 14:42:47.649203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.200 qpair failed and we were unable to recover it. 
00:29:07.200 [2024-10-14 14:42:47.649459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.200 [2024-10-14 14:42:47.649468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.200 qpair failed and we were unable to recover it.
00:29:07.200 [2024-10-14 14:42:47.649805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.200 [2024-10-14 14:42:47.649813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.200 qpair failed and we were unable to recover it.
00:29:07.200 [2024-10-14 14:42:47.650114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.200 [2024-10-14 14:42:47.650124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.200 qpair failed and we were unable to recover it.
00:29:07.200 [2024-10-14 14:42:47.650304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.200 [2024-10-14 14:42:47.650311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.200 qpair failed and we were unable to recover it.
00:29:07.200 [2024-10-14 14:42:47.650542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:29:07.200 [2024-10-14 14:42:47.650624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.200 [2024-10-14 14:42:47.650631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.200 qpair failed and we were unable to recover it.
00:29:07.200 [2024-10-14 14:42:47.650963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.200 [2024-10-14 14:42:47.650970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.200 qpair failed and we were unable to recover it. 00:29:07.200 [2024-10-14 14:42:47.651191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.200 [2024-10-14 14:42:47.651199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.200 qpair failed and we were unable to recover it. 00:29:07.200 [2024-10-14 14:42:47.651518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.200 [2024-10-14 14:42:47.651525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.200 qpair failed and we were unable to recover it. 00:29:07.200 [2024-10-14 14:42:47.651845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.200 [2024-10-14 14:42:47.651853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.200 qpair failed and we were unable to recover it. 00:29:07.200 [2024-10-14 14:42:47.652135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.200 [2024-10-14 14:42:47.652143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.200 qpair failed and we were unable to recover it. 
00:29:07.200 [2024-10-14 14:42:47.652461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.200 [2024-10-14 14:42:47.652468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.200 qpair failed and we were unable to recover it. 00:29:07.200 [2024-10-14 14:42:47.652765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.200 [2024-10-14 14:42:47.652772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.200 qpair failed and we were unable to recover it. 00:29:07.200 [2024-10-14 14:42:47.653079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.200 [2024-10-14 14:42:47.653086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.200 qpair failed and we were unable to recover it. 00:29:07.200 [2024-10-14 14:42:47.653392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.200 [2024-10-14 14:42:47.653399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.200 qpair failed and we were unable to recover it. 00:29:07.200 [2024-10-14 14:42:47.653711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.200 [2024-10-14 14:42:47.653718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.200 qpair failed and we were unable to recover it. 
00:29:07.200 [2024-10-14 14:42:47.654024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.200 [2024-10-14 14:42:47.654032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.200 qpair failed and we were unable to recover it. 00:29:07.200 [2024-10-14 14:42:47.654339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.200 [2024-10-14 14:42:47.654348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.200 qpair failed and we were unable to recover it. 00:29:07.200 [2024-10-14 14:42:47.654656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.200 [2024-10-14 14:42:47.654665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.200 qpair failed and we were unable to recover it. 00:29:07.200 [2024-10-14 14:42:47.654976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.200 [2024-10-14 14:42:47.654984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.200 qpair failed and we were unable to recover it. 00:29:07.200 [2024-10-14 14:42:47.655182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.200 [2024-10-14 14:42:47.655190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.200 qpair failed and we were unable to recover it. 
00:29:07.200 [2024-10-14 14:42:47.655511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.200 [2024-10-14 14:42:47.655518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.200 qpair failed and we were unable to recover it. 00:29:07.200 [2024-10-14 14:42:47.655856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.200 [2024-10-14 14:42:47.655863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.200 qpair failed and we were unable to recover it. 00:29:07.200 [2024-10-14 14:42:47.656162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.200 [2024-10-14 14:42:47.656170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.200 qpair failed and we were unable to recover it. 00:29:07.200 [2024-10-14 14:42:47.656481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.200 [2024-10-14 14:42:47.656489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.200 qpair failed and we were unable to recover it. 00:29:07.200 [2024-10-14 14:42:47.656784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.200 [2024-10-14 14:42:47.656793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.200 qpair failed and we were unable to recover it. 
00:29:07.200 [2024-10-14 14:42:47.657100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.200 [2024-10-14 14:42:47.657108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.200 qpair failed and we were unable to recover it.
00:29:07.200 [2024-10-14 14:42:47.657320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.200 [2024-10-14 14:42:47.657328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.200 qpair failed and we were unable to recover it.
00:29:07.200 [2024-10-14 14:42:47.657629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.200 [2024-10-14 14:42:47.657637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.200 qpair failed and we were unable to recover it.
00:29:07.201 [2024-10-14 14:42:47.657957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.201 [2024-10-14 14:42:47.657964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.201 qpair failed and we were unable to recover it.
00:29:07.201 [2024-10-14 14:42:47.658278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.201 [2024-10-14 14:42:47.658286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.201 qpair failed and we were unable to recover it.
00:29:07.201 [2024-10-14 14:42:47.658590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.201 [2024-10-14 14:42:47.658598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.201 qpair failed and we were unable to recover it.
00:29:07.201 [2024-10-14 14:42:47.658920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.201 [2024-10-14 14:42:47.658927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.201 qpair failed and we were unable to recover it.
00:29:07.201 [2024-10-14 14:42:47.659222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.201 [2024-10-14 14:42:47.659229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.201 qpair failed and we were unable to recover it.
00:29:07.201 [2024-10-14 14:42:47.659532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.201 [2024-10-14 14:42:47.659539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.201 qpair failed and we were unable to recover it.
00:29:07.201 [2024-10-14 14:42:47.659868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.201 [2024-10-14 14:42:47.659876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.201 qpair failed and we were unable to recover it.
00:29:07.201 [2024-10-14 14:42:47.660185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.201 [2024-10-14 14:42:47.660193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.201 qpair failed and we were unable to recover it.
00:29:07.201 [2024-10-14 14:42:47.660377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.201 [2024-10-14 14:42:47.660385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.201 qpair failed and we were unable to recover it.
00:29:07.201 [2024-10-14 14:42:47.660705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.201 [2024-10-14 14:42:47.660714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.201 qpair failed and we were unable to recover it.
00:29:07.201 [2024-10-14 14:42:47.661000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.201 [2024-10-14 14:42:47.661009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.201 qpair failed and we were unable to recover it.
00:29:07.201 [2024-10-14 14:42:47.661311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.201 [2024-10-14 14:42:47.661319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.201 qpair failed and we were unable to recover it.
00:29:07.201 [2024-10-14 14:42:47.661631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.201 [2024-10-14 14:42:47.661638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.201 qpair failed and we were unable to recover it.
00:29:07.201 [2024-10-14 14:42:47.661839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.201 [2024-10-14 14:42:47.661847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.201 qpair failed and we were unable to recover it.
00:29:07.201 [2024-10-14 14:42:47.662010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.201 [2024-10-14 14:42:47.662019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.201 qpair failed and we were unable to recover it.
00:29:07.201 [2024-10-14 14:42:47.662306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.201 [2024-10-14 14:42:47.662313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.201 qpair failed and we were unable to recover it.
00:29:07.201 [2024-10-14 14:42:47.662633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.201 [2024-10-14 14:42:47.662641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.201 qpair failed and we were unable to recover it.
00:29:07.201 [2024-10-14 14:42:47.662943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.201 [2024-10-14 14:42:47.662951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.201 qpair failed and we were unable to recover it.
00:29:07.201 [2024-10-14 14:42:47.663238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.201 [2024-10-14 14:42:47.663246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.201 qpair failed and we were unable to recover it.
00:29:07.201 [2024-10-14 14:42:47.663413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.201 [2024-10-14 14:42:47.663421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.201 qpair failed and we were unable to recover it.
00:29:07.201 [2024-10-14 14:42:47.663732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.201 [2024-10-14 14:42:47.663739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.201 qpair failed and we were unable to recover it.
00:29:07.201 [2024-10-14 14:42:47.663929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.201 [2024-10-14 14:42:47.663936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.201 qpair failed and we were unable to recover it.
00:29:07.201 [2024-10-14 14:42:47.664203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.201 [2024-10-14 14:42:47.664211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.201 qpair failed and we were unable to recover it.
00:29:07.201 [2024-10-14 14:42:47.664547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.201 [2024-10-14 14:42:47.664553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.201 qpair failed and we were unable to recover it.
00:29:07.201 [2024-10-14 14:42:47.664776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.201 [2024-10-14 14:42:47.664783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.201 qpair failed and we were unable to recover it.
00:29:07.201 [2024-10-14 14:42:47.665093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.201 [2024-10-14 14:42:47.665100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.201 qpair failed and we were unable to recover it.
00:29:07.201 [2024-10-14 14:42:47.665439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.201 [2024-10-14 14:42:47.665446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.201 qpair failed and we were unable to recover it.
00:29:07.201 [2024-10-14 14:42:47.665610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.201 [2024-10-14 14:42:47.665617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.201 qpair failed and we were unable to recover it.
00:29:07.201 [2024-10-14 14:42:47.665900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.201 [2024-10-14 14:42:47.665907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.201 qpair failed and we were unable to recover it.
00:29:07.201 [2024-10-14 14:42:47.666083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.201 [2024-10-14 14:42:47.666090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.201 qpair failed and we were unable to recover it.
00:29:07.201 [2024-10-14 14:42:47.666242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.201 [2024-10-14 14:42:47.666248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.201 qpair failed and we were unable to recover it.
00:29:07.201 [2024-10-14 14:42:47.666536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.201 [2024-10-14 14:42:47.666543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.201 qpair failed and we were unable to recover it.
00:29:07.201 [2024-10-14 14:42:47.666730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.201 [2024-10-14 14:42:47.666738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.201 qpair failed and we were unable to recover it.
00:29:07.201 [2024-10-14 14:42:47.667019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.201 [2024-10-14 14:42:47.667027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.201 qpair failed and we were unable to recover it.
00:29:07.201 [2024-10-14 14:42:47.667314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.201 [2024-10-14 14:42:47.667321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.201 qpair failed and we were unable to recover it.
00:29:07.201 [2024-10-14 14:42:47.667515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.201 [2024-10-14 14:42:47.667523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.201 qpair failed and we were unable to recover it.
00:29:07.201 [2024-10-14 14:42:47.667824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.201 [2024-10-14 14:42:47.667831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.201 qpair failed and we were unable to recover it.
00:29:07.201 [2024-10-14 14:42:47.668092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.201 [2024-10-14 14:42:47.668100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.201 qpair failed and we were unable to recover it.
00:29:07.201 [2024-10-14 14:42:47.668290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.202 [2024-10-14 14:42:47.668297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.202 qpair failed and we were unable to recover it.
00:29:07.202 [2024-10-14 14:42:47.668619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.202 [2024-10-14 14:42:47.668626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.202 qpair failed and we were unable to recover it.
00:29:07.202 [2024-10-14 14:42:47.668797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.202 [2024-10-14 14:42:47.668804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.202 qpair failed and we were unable to recover it.
00:29:07.202 [2024-10-14 14:42:47.669081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.202 [2024-10-14 14:42:47.669088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.202 qpair failed and we were unable to recover it.
00:29:07.202 [2024-10-14 14:42:47.669275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.202 [2024-10-14 14:42:47.669289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.202 qpair failed and we were unable to recover it.
00:29:07.202 [2024-10-14 14:42:47.669560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.202 [2024-10-14 14:42:47.669567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.202 qpair failed and we were unable to recover it.
00:29:07.202 [2024-10-14 14:42:47.669933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.202 [2024-10-14 14:42:47.669940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.202 qpair failed and we were unable to recover it.
00:29:07.202 [2024-10-14 14:42:47.670235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.202 [2024-10-14 14:42:47.670243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.202 qpair failed and we were unable to recover it.
00:29:07.202 [2024-10-14 14:42:47.670561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.202 [2024-10-14 14:42:47.670568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.202 qpair failed and we were unable to recover it.
00:29:07.202 [2024-10-14 14:42:47.670883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.202 [2024-10-14 14:42:47.670889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.202 qpair failed and we were unable to recover it.
00:29:07.202 [2024-10-14 14:42:47.671216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.202 [2024-10-14 14:42:47.671224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.202 qpair failed and we were unable to recover it.
00:29:07.202 [2024-10-14 14:42:47.671546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.202 [2024-10-14 14:42:47.671555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.202 qpair failed and we were unable to recover it.
00:29:07.202 [2024-10-14 14:42:47.671859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.202 [2024-10-14 14:42:47.671867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.202 qpair failed and we were unable to recover it.
00:29:07.202 [2024-10-14 14:42:47.672180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.202 [2024-10-14 14:42:47.672188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.202 qpair failed and we were unable to recover it.
00:29:07.202 [2024-10-14 14:42:47.672498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.202 [2024-10-14 14:42:47.672506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.202 qpair failed and we were unable to recover it.
00:29:07.202 [2024-10-14 14:42:47.672822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.202 [2024-10-14 14:42:47.672830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.202 qpair failed and we were unable to recover it.
00:29:07.202 [2024-10-14 14:42:47.673156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.202 [2024-10-14 14:42:47.673165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.202 qpair failed and we were unable to recover it.
00:29:07.202 [2024-10-14 14:42:47.673483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.202 [2024-10-14 14:42:47.673491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.202 qpair failed and we were unable to recover it.
00:29:07.202 [2024-10-14 14:42:47.673785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.202 [2024-10-14 14:42:47.673792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.202 qpair failed and we were unable to recover it.
00:29:07.202 [2024-10-14 14:42:47.674107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.202 [2024-10-14 14:42:47.674115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.202 qpair failed and we were unable to recover it.
00:29:07.202 [2024-10-14 14:42:47.674438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.202 [2024-10-14 14:42:47.674444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.202 qpair failed and we were unable to recover it.
00:29:07.202 [2024-10-14 14:42:47.674611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.202 [2024-10-14 14:42:47.674619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.202 qpair failed and we were unable to recover it.
00:29:07.202 [2024-10-14 14:42:47.675008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.202 [2024-10-14 14:42:47.675015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.202 qpair failed and we were unable to recover it.
00:29:07.202 [2024-10-14 14:42:47.675419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.202 [2024-10-14 14:42:47.675426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.202 qpair failed and we were unable to recover it.
00:29:07.202 [2024-10-14 14:42:47.675588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.202 [2024-10-14 14:42:47.675596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.202 qpair failed and we were unable to recover it.
00:29:07.202 [2024-10-14 14:42:47.675782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.202 [2024-10-14 14:42:47.675789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.202 qpair failed and we were unable to recover it.
00:29:07.202 [2024-10-14 14:42:47.676104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.202 [2024-10-14 14:42:47.676112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.202 qpair failed and we were unable to recover it.
00:29:07.202 [2024-10-14 14:42:47.676499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.202 [2024-10-14 14:42:47.676506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.202 qpair failed and we were unable to recover it.
00:29:07.202 [2024-10-14 14:42:47.676694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.202 [2024-10-14 14:42:47.676701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.202 qpair failed and we were unable to recover it.
00:29:07.202 [2024-10-14 14:42:47.676977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.202 [2024-10-14 14:42:47.676984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.202 qpair failed and we were unable to recover it.
00:29:07.202 [2024-10-14 14:42:47.677292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.202 [2024-10-14 14:42:47.677300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.202 qpair failed and we were unable to recover it.
00:29:07.202 [2024-10-14 14:42:47.677591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.202 [2024-10-14 14:42:47.677597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.202 qpair failed and we were unable to recover it.
00:29:07.202 [2024-10-14 14:42:47.677911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.202 [2024-10-14 14:42:47.677918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.202 qpair failed and we were unable to recover it.
00:29:07.202 [2024-10-14 14:42:47.678092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.202 [2024-10-14 14:42:47.678100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.202 qpair failed and we were unable to recover it.
00:29:07.202 [2024-10-14 14:42:47.678398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.202 [2024-10-14 14:42:47.678405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.202 qpair failed and we were unable to recover it.
00:29:07.202 [2024-10-14 14:42:47.678684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.203 [2024-10-14 14:42:47.678691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.203 qpair failed and we were unable to recover it.
00:29:07.203 [2024-10-14 14:42:47.679000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.203 [2024-10-14 14:42:47.679007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.203 qpair failed and we were unable to recover it.
00:29:07.203 [2024-10-14 14:42:47.679425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.203 [2024-10-14 14:42:47.679434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.203 qpair failed and we were unable to recover it.
00:29:07.203 [2024-10-14 14:42:47.679607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.203 [2024-10-14 14:42:47.679614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.203 qpair failed and we were unable to recover it.
00:29:07.203 [2024-10-14 14:42:47.679898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.203 [2024-10-14 14:42:47.679906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.203 qpair failed and we were unable to recover it.
00:29:07.203 [2024-10-14 14:42:47.680209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.203 [2024-10-14 14:42:47.680217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.203 qpair failed and we were unable to recover it.
00:29:07.203 [2024-10-14 14:42:47.680395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.203 [2024-10-14 14:42:47.680402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.203 qpair failed and we were unable to recover it.
00:29:07.203 [2024-10-14 14:42:47.680788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.203 [2024-10-14 14:42:47.680795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.203 qpair failed and we were unable to recover it.
00:29:07.203 [2024-10-14 14:42:47.681012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.203 [2024-10-14 14:42:47.681020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.203 qpair failed and we were unable to recover it.
00:29:07.203 [2024-10-14 14:42:47.681361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.203 [2024-10-14 14:42:47.681368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.203 qpair failed and we were unable to recover it.
00:29:07.203 [2024-10-14 14:42:47.681667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.203 [2024-10-14 14:42:47.681675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.203 qpair failed and we were unable to recover it.
00:29:07.203 [2024-10-14 14:42:47.681984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.203 [2024-10-14 14:42:47.681991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.203 qpair failed and we were unable to recover it.
00:29:07.203 [2024-10-14 14:42:47.682245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.203 [2024-10-14 14:42:47.682252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.203 qpair failed and we were unable to recover it.
00:29:07.203 [2024-10-14 14:42:47.682559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.203 [2024-10-14 14:42:47.682566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.203 qpair failed and we were unable to recover it.
00:29:07.203 [2024-10-14 14:42:47.682762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.203 [2024-10-14 14:42:47.682778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.203 qpair failed and we were unable to recover it.
00:29:07.203 [2024-10-14 14:42:47.682823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.203 [2024-10-14 14:42:47.682830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.203 qpair failed and we were unable to recover it.
00:29:07.203 [2024-10-14 14:42:47.682974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.203 [2024-10-14 14:42:47.682980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.203 qpair failed and we were unable to recover it.
00:29:07.203 [2024-10-14 14:42:47.683287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.203 [2024-10-14 14:42:47.683294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.203 qpair failed and we were unable to recover it.
00:29:07.203 [2024-10-14 14:42:47.683677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.203 [2024-10-14 14:42:47.683685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.203 qpair failed and we were unable to recover it.
00:29:07.203 [2024-10-14 14:42:47.684027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.203 [2024-10-14 14:42:47.684034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.203 qpair failed and we were unable to recover it. 00:29:07.203 [2024-10-14 14:42:47.684332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.203 [2024-10-14 14:42:47.684339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.203 qpair failed and we were unable to recover it. 00:29:07.203 [2024-10-14 14:42:47.684520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.203 [2024-10-14 14:42:47.684530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.203 qpair failed and we were unable to recover it. 00:29:07.203 [2024-10-14 14:42:47.684807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.203 [2024-10-14 14:42:47.684814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.203 qpair failed and we were unable to recover it. 00:29:07.203 [2024-10-14 14:42:47.685111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.203 [2024-10-14 14:42:47.685119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.203 qpair failed and we were unable to recover it. 
00:29:07.203 [2024-10-14 14:42:47.685473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.203 [2024-10-14 14:42:47.685481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.203 qpair failed and we were unable to recover it. 00:29:07.203 [2024-10-14 14:42:47.685790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.203 [2024-10-14 14:42:47.685798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.203 qpair failed and we were unable to recover it. 00:29:07.203 [2024-10-14 14:42:47.685991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.203 [2024-10-14 14:42:47.685999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.203 qpair failed and we were unable to recover it. 00:29:07.203 [2024-10-14 14:42:47.686228] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:07.203 [2024-10-14 14:42:47.686256] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:07.203 [2024-10-14 14:42:47.686263] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:07.203 [2024-10-14 14:42:47.686269] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:07.203 [2024-10-14 14:42:47.686275] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:07.203 [2024-10-14 14:42:47.686315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.203 [2024-10-14 14:42:47.686323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.203 qpair failed and we were unable to recover it. 00:29:07.203 [2024-10-14 14:42:47.686628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.203 [2024-10-14 14:42:47.686635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.203 qpair failed and we were unable to recover it. 00:29:07.203 [2024-10-14 14:42:47.686950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.203 [2024-10-14 14:42:47.686957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.203 qpair failed and we were unable to recover it. 00:29:07.203 [2024-10-14 14:42:47.687276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.203 [2024-10-14 14:42:47.687284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.203 qpair failed and we were unable to recover it. 00:29:07.203 [2024-10-14 14:42:47.687456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.203 [2024-10-14 14:42:47.687463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.203 qpair failed and we were unable to recover it. 
00:29:07.203 [2024-10-14 14:42:47.687743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.203 [2024-10-14 14:42:47.687750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.203 qpair failed and we were unable to recover it. 00:29:07.203 [2024-10-14 14:42:47.688097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.203 [2024-10-14 14:42:47.688105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.203 qpair failed and we were unable to recover it. 00:29:07.203 [2024-10-14 14:42:47.688402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.203 [2024-10-14 14:42:47.688410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.203 qpair failed and we were unable to recover it. 00:29:07.203 [2024-10-14 14:42:47.688731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.203 [2024-10-14 14:42:47.688738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.203 qpair failed and we were unable to recover it. 00:29:07.203 [2024-10-14 14:42:47.689068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.203 [2024-10-14 14:42:47.689076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.203 qpair failed and we were unable to recover it. 
00:29:07.203 [2024-10-14 14:42:47.689408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.203 [2024-10-14 14:42:47.689415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.203 qpair failed and we were unable to recover it. 00:29:07.203 [2024-10-14 14:42:47.689730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.204 [2024-10-14 14:42:47.689737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.204 qpair failed and we were unable to recover it. 00:29:07.204 [2024-10-14 14:42:47.690066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.204 [2024-10-14 14:42:47.690074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.204 qpair failed and we were unable to recover it. 00:29:07.204 [2024-10-14 14:42:47.690373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.204 [2024-10-14 14:42:47.690381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.204 qpair failed and we were unable to recover it. 00:29:07.204 [2024-10-14 14:42:47.690701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.204 [2024-10-14 14:42:47.690709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.204 qpair failed and we were unable to recover it. 
00:29:07.204 [2024-10-14 14:42:47.690895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.204 [2024-10-14 14:42:47.690903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.204 qpair failed and we were unable to recover it. 00:29:07.204 [2024-10-14 14:42:47.691084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:29:07.204 [2024-10-14 14:42:47.691185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.204 [2024-10-14 14:42:47.691192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.204 qpair failed and we were unable to recover it. 00:29:07.204 [2024-10-14 14:42:47.691424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:29:07.204 [2024-10-14 14:42:47.691521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.204 [2024-10-14 14:42:47.691533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.204 qpair failed and we were unable to recover it. 00:29:07.204 [2024-10-14 14:42:47.691557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:29:07.204 [2024-10-14 14:42:47.691558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:07.204 [2024-10-14 14:42:47.691856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.204 [2024-10-14 14:42:47.691863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.204 qpair failed and we were unable to recover it. 
00:29:07.204 [2024-10-14 14:42:47.692023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.204 [2024-10-14 14:42:47.692030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.204 qpair failed and we were unable to recover it. 00:29:07.204 [2024-10-14 14:42:47.692326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.204 [2024-10-14 14:42:47.692334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.204 qpair failed and we were unable to recover it. 00:29:07.204 [2024-10-14 14:42:47.692647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.204 [2024-10-14 14:42:47.692655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.204 qpair failed and we were unable to recover it. 00:29:07.204 [2024-10-14 14:42:47.692985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.204 [2024-10-14 14:42:47.692992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.204 qpair failed and we were unable to recover it. 00:29:07.204 [2024-10-14 14:42:47.693300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.204 [2024-10-14 14:42:47.693307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.204 qpair failed and we were unable to recover it. 
00:29:07.204 [2024-10-14 14:42:47.693525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.204 [2024-10-14 14:42:47.693531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.204 qpair failed and we were unable to recover it. 00:29:07.204 [2024-10-14 14:42:47.693863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.204 [2024-10-14 14:42:47.693870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.204 qpair failed and we were unable to recover it. 00:29:07.204 [2024-10-14 14:42:47.694182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.204 [2024-10-14 14:42:47.694190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.204 qpair failed and we were unable to recover it. 00:29:07.204 [2024-10-14 14:42:47.694530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.204 [2024-10-14 14:42:47.694538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.204 qpair failed and we were unable to recover it. 00:29:07.204 [2024-10-14 14:42:47.694846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.204 [2024-10-14 14:42:47.694854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.204 qpair failed and we were unable to recover it. 
00:29:07.204 [2024-10-14 14:42:47.695178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.204 [2024-10-14 14:42:47.695186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.204 qpair failed and we were unable to recover it. 00:29:07.204 [2024-10-14 14:42:47.695535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.204 [2024-10-14 14:42:47.695543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.204 qpair failed and we were unable to recover it. 00:29:07.204 [2024-10-14 14:42:47.695722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.204 [2024-10-14 14:42:47.695730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.204 qpair failed and we were unable to recover it. 00:29:07.204 [2024-10-14 14:42:47.696031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.204 [2024-10-14 14:42:47.696038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.204 qpair failed and we were unable to recover it. 00:29:07.204 [2024-10-14 14:42:47.696356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.204 [2024-10-14 14:42:47.696362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.204 qpair failed and we were unable to recover it. 
00:29:07.204 [2024-10-14 14:42:47.696565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.204 [2024-10-14 14:42:47.696572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.204 qpair failed and we were unable to recover it. 00:29:07.204 [2024-10-14 14:42:47.696787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.204 [2024-10-14 14:42:47.696794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.204 qpair failed and we were unable to recover it. 00:29:07.204 [2024-10-14 14:42:47.697153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.204 [2024-10-14 14:42:47.697160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.204 qpair failed and we were unable to recover it. 00:29:07.204 [2024-10-14 14:42:47.697446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.204 [2024-10-14 14:42:47.697455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.204 qpair failed and we were unable to recover it. 00:29:07.204 [2024-10-14 14:42:47.697769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.204 [2024-10-14 14:42:47.697777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.204 qpair failed and we were unable to recover it. 
00:29:07.204 [2024-10-14 14:42:47.697931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.204 [2024-10-14 14:42:47.697940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.204 qpair failed and we were unable to recover it. 00:29:07.204 [2024-10-14 14:42:47.698211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.204 [2024-10-14 14:42:47.698219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.204 qpair failed and we were unable to recover it. 00:29:07.204 [2024-10-14 14:42:47.698533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.204 [2024-10-14 14:42:47.698541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.204 qpair failed and we were unable to recover it. 00:29:07.204 [2024-10-14 14:42:47.698856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.204 [2024-10-14 14:42:47.698863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.204 qpair failed and we were unable to recover it. 00:29:07.204 [2024-10-14 14:42:47.699154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.204 [2024-10-14 14:42:47.699161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.204 qpair failed and we were unable to recover it. 
00:29:07.204 [2024-10-14 14:42:47.699360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.204 [2024-10-14 14:42:47.699370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.204 qpair failed and we were unable to recover it. 00:29:07.204 [2024-10-14 14:42:47.699541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.205 [2024-10-14 14:42:47.699548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.205 qpair failed and we were unable to recover it. 00:29:07.205 [2024-10-14 14:42:47.699731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.205 [2024-10-14 14:42:47.699738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.205 qpair failed and we were unable to recover it. 00:29:07.205 [2024-10-14 14:42:47.700049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.205 [2024-10-14 14:42:47.700057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.205 qpair failed and we were unable to recover it. 00:29:07.205 [2024-10-14 14:42:47.700302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.205 [2024-10-14 14:42:47.700309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.205 qpair failed and we were unable to recover it. 
00:29:07.205 [2024-10-14 14:42:47.700517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.205 [2024-10-14 14:42:47.700524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.205 qpair failed and we were unable to recover it. 00:29:07.205 [2024-10-14 14:42:47.700688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.205 [2024-10-14 14:42:47.700696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.205 qpair failed and we were unable to recover it. 00:29:07.205 [2024-10-14 14:42:47.701009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.205 [2024-10-14 14:42:47.701016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.205 qpair failed and we were unable to recover it. 00:29:07.205 [2024-10-14 14:42:47.701302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.205 [2024-10-14 14:42:47.701310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.205 qpair failed and we were unable to recover it. 00:29:07.205 [2024-10-14 14:42:47.701481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.205 [2024-10-14 14:42:47.701489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.205 qpair failed and we were unable to recover it. 
00:29:07.205 [2024-10-14 14:42:47.701802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.205 [2024-10-14 14:42:47.701809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.205 qpair failed and we were unable to recover it. 00:29:07.205 [2024-10-14 14:42:47.701871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.205 [2024-10-14 14:42:47.701878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.205 qpair failed and we were unable to recover it. 00:29:07.205 [2024-10-14 14:42:47.702087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.205 [2024-10-14 14:42:47.702094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.205 qpair failed and we were unable to recover it. 00:29:07.205 [2024-10-14 14:42:47.702548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.205 [2024-10-14 14:42:47.702555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.205 qpair failed and we were unable to recover it. 00:29:07.205 [2024-10-14 14:42:47.702859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.205 [2024-10-14 14:42:47.702866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.205 qpair failed and we were unable to recover it. 
00:29:07.205 [2024-10-14 14:42:47.703179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.205 [2024-10-14 14:42:47.703186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.205 qpair failed and we were unable to recover it. 00:29:07.205 [2024-10-14 14:42:47.703488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.205 [2024-10-14 14:42:47.703496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.205 qpair failed and we were unable to recover it. 00:29:07.205 [2024-10-14 14:42:47.703791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.205 [2024-10-14 14:42:47.703798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.205 qpair failed and we were unable to recover it. 00:29:07.205 [2024-10-14 14:42:47.704187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.205 [2024-10-14 14:42:47.704195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.205 qpair failed and we were unable to recover it. 00:29:07.205 [2024-10-14 14:42:47.704506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.205 [2024-10-14 14:42:47.704514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.205 qpair failed and we were unable to recover it. 
00:29:07.205 [2024-10-14 14:42:47.704805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.205 [2024-10-14 14:42:47.704812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.205 qpair failed and we were unable to recover it. 00:29:07.205 [2024-10-14 14:42:47.705109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.205 [2024-10-14 14:42:47.705117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.205 qpair failed and we were unable to recover it. 00:29:07.205 [2024-10-14 14:42:47.705424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.205 [2024-10-14 14:42:47.705431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.205 qpair failed and we were unable to recover it. 00:29:07.205 [2024-10-14 14:42:47.705764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.205 [2024-10-14 14:42:47.705771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.205 qpair failed and we were unable to recover it. 00:29:07.205 [2024-10-14 14:42:47.706170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.205 [2024-10-14 14:42:47.706178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.205 qpair failed and we were unable to recover it. 
00:29:07.205 [2024-10-14 14:42:47.706526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.205 [2024-10-14 14:42:47.706533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.205 qpair failed and we were unable to recover it. 00:29:07.205 [2024-10-14 14:42:47.706738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.205 [2024-10-14 14:42:47.706745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.205 qpair failed and we were unable to recover it. 00:29:07.205 [2024-10-14 14:42:47.706918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.205 [2024-10-14 14:42:47.706925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.205 qpair failed and we were unable to recover it. 00:29:07.205 [2024-10-14 14:42:47.707215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.205 [2024-10-14 14:42:47.707222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.205 qpair failed and we were unable to recover it. 00:29:07.205 [2024-10-14 14:42:47.707378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.205 [2024-10-14 14:42:47.707386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.205 qpair failed and we were unable to recover it. 
00:29:07.205 [2024-10-14 14:42:47.707561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.205 [2024-10-14 14:42:47.707568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.205 qpair failed and we were unable to recover it.
00:29:07.205 [... identical error sequence repeated ~115 times between 14:42:47.707713 and 14:42:47.739494: connect() failed with errno = 111 (ECONNREFUSED) in posix.c:1055:posix_sock_create, followed by nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock reporting a sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it."; duplicate entries omitted ...]
00:29:07.208 [2024-10-14 14:42:47.739772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.208 [2024-10-14 14:42:47.739779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.208 qpair failed and we were unable to recover it. 00:29:07.208 [2024-10-14 14:42:47.740074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.208 [2024-10-14 14:42:47.740081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.208 qpair failed and we were unable to recover it. 00:29:07.208 [2024-10-14 14:42:47.740362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.208 [2024-10-14 14:42:47.740371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.208 qpair failed and we were unable to recover it. 00:29:07.208 [2024-10-14 14:42:47.740573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.208 [2024-10-14 14:42:47.740581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.208 qpair failed and we were unable to recover it. 00:29:07.208 [2024-10-14 14:42:47.740968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.208 [2024-10-14 14:42:47.740975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.208 qpair failed and we were unable to recover it. 
00:29:07.208 [2024-10-14 14:42:47.741273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.208 [2024-10-14 14:42:47.741280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.208 qpair failed and we were unable to recover it. 00:29:07.208 [2024-10-14 14:42:47.741569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.208 [2024-10-14 14:42:47.741575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.208 qpair failed and we were unable to recover it. 00:29:07.208 [2024-10-14 14:42:47.741907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.208 [2024-10-14 14:42:47.741913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.208 qpair failed and we were unable to recover it. 00:29:07.208 [2024-10-14 14:42:47.742212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.208 [2024-10-14 14:42:47.742218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.208 qpair failed and we were unable to recover it. 00:29:07.208 [2024-10-14 14:42:47.742523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.208 [2024-10-14 14:42:47.742530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.208 qpair failed and we were unable to recover it. 
00:29:07.208 [2024-10-14 14:42:47.742840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.208 [2024-10-14 14:42:47.742850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.208 qpair failed and we were unable to recover it. 00:29:07.208 [2024-10-14 14:42:47.743174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.208 [2024-10-14 14:42:47.743182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.208 qpair failed and we were unable to recover it. 00:29:07.208 [2024-10-14 14:42:47.743488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.208 [2024-10-14 14:42:47.743495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.208 qpair failed and we were unable to recover it. 00:29:07.208 [2024-10-14 14:42:47.743853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.208 [2024-10-14 14:42:47.743860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.208 qpair failed and we were unable to recover it. 00:29:07.209 [2024-10-14 14:42:47.744139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.209 [2024-10-14 14:42:47.744146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.209 qpair failed and we were unable to recover it. 
00:29:07.209 [2024-10-14 14:42:47.744460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.209 [2024-10-14 14:42:47.744467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.209 qpair failed and we were unable to recover it. 00:29:07.209 [2024-10-14 14:42:47.744801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.209 [2024-10-14 14:42:47.744809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.209 qpair failed and we were unable to recover it. 00:29:07.209 [2024-10-14 14:42:47.745141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.209 [2024-10-14 14:42:47.745149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.209 qpair failed and we were unable to recover it. 00:29:07.209 [2024-10-14 14:42:47.745443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.209 [2024-10-14 14:42:47.745449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.209 qpair failed and we were unable to recover it. 00:29:07.209 [2024-10-14 14:42:47.745766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.209 [2024-10-14 14:42:47.745774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.209 qpair failed and we were unable to recover it. 
00:29:07.209 [2024-10-14 14:42:47.746090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.209 [2024-10-14 14:42:47.746097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.209 qpair failed and we were unable to recover it. 00:29:07.209 [2024-10-14 14:42:47.746399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.209 [2024-10-14 14:42:47.746405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.209 qpair failed and we were unable to recover it. 00:29:07.209 [2024-10-14 14:42:47.746695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.209 [2024-10-14 14:42:47.746702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.209 qpair failed and we were unable to recover it. 00:29:07.209 [2024-10-14 14:42:47.747016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.209 [2024-10-14 14:42:47.747023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.209 qpair failed and we were unable to recover it. 00:29:07.209 [2024-10-14 14:42:47.747341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.209 [2024-10-14 14:42:47.747349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.209 qpair failed and we were unable to recover it. 
00:29:07.209 [2024-10-14 14:42:47.747653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.209 [2024-10-14 14:42:47.747660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.209 qpair failed and we were unable to recover it. 00:29:07.209 [2024-10-14 14:42:47.747960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.209 [2024-10-14 14:42:47.747967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.209 qpair failed and we were unable to recover it. 00:29:07.209 [2024-10-14 14:42:47.748380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.209 [2024-10-14 14:42:47.748387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.209 qpair failed and we were unable to recover it. 00:29:07.209 [2024-10-14 14:42:47.748685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.209 [2024-10-14 14:42:47.748691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.209 qpair failed and we were unable to recover it. 00:29:07.209 [2024-10-14 14:42:47.749011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.209 [2024-10-14 14:42:47.749018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.209 qpair failed and we were unable to recover it. 
00:29:07.209 [2024-10-14 14:42:47.749342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.209 [2024-10-14 14:42:47.749350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.209 qpair failed and we were unable to recover it. 00:29:07.209 [2024-10-14 14:42:47.749683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.209 [2024-10-14 14:42:47.749691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.209 qpair failed and we were unable to recover it. 00:29:07.209 [2024-10-14 14:42:47.750012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.209 [2024-10-14 14:42:47.750018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.209 qpair failed and we were unable to recover it. 00:29:07.209 [2024-10-14 14:42:47.750153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.209 [2024-10-14 14:42:47.750159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.209 qpair failed and we were unable to recover it. 00:29:07.209 [2024-10-14 14:42:47.750351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.209 [2024-10-14 14:42:47.750358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.209 qpair failed and we were unable to recover it. 
00:29:07.209 [2024-10-14 14:42:47.750574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.209 [2024-10-14 14:42:47.750581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.209 qpair failed and we were unable to recover it. 00:29:07.209 [2024-10-14 14:42:47.750869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.209 [2024-10-14 14:42:47.750877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.209 qpair failed and we were unable to recover it. 00:29:07.209 [2024-10-14 14:42:47.751057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.209 [2024-10-14 14:42:47.751067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.209 qpair failed and we were unable to recover it. 00:29:07.209 [2024-10-14 14:42:47.751223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.209 [2024-10-14 14:42:47.751230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.209 qpair failed and we were unable to recover it. 00:29:07.209 [2024-10-14 14:42:47.751462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.209 [2024-10-14 14:42:47.751470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.209 qpair failed and we were unable to recover it. 
00:29:07.209 [2024-10-14 14:42:47.751547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.209 [2024-10-14 14:42:47.751553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.209 qpair failed and we were unable to recover it. 00:29:07.209 [2024-10-14 14:42:47.751857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.209 [2024-10-14 14:42:47.751864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.209 qpair failed and we were unable to recover it. 00:29:07.209 [2024-10-14 14:42:47.752174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.209 [2024-10-14 14:42:47.752183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.209 qpair failed and we were unable to recover it. 00:29:07.209 [2024-10-14 14:42:47.752359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.209 [2024-10-14 14:42:47.752366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.209 qpair failed and we were unable to recover it. 00:29:07.209 [2024-10-14 14:42:47.752521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.209 [2024-10-14 14:42:47.752529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.209 qpair failed and we were unable to recover it. 
00:29:07.209 [2024-10-14 14:42:47.752708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.209 [2024-10-14 14:42:47.752716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.209 qpair failed and we were unable to recover it. 00:29:07.209 [2024-10-14 14:42:47.753039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.209 [2024-10-14 14:42:47.753046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.209 qpair failed and we were unable to recover it. 00:29:07.209 [2024-10-14 14:42:47.753260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.209 [2024-10-14 14:42:47.753268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.209 qpair failed and we were unable to recover it. 00:29:07.209 [2024-10-14 14:42:47.753567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.209 [2024-10-14 14:42:47.753575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.209 qpair failed and we were unable to recover it. 00:29:07.209 [2024-10-14 14:42:47.753732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.210 [2024-10-14 14:42:47.753740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.210 qpair failed and we were unable to recover it. 
00:29:07.210 [2024-10-14 14:42:47.754092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.210 [2024-10-14 14:42:47.754100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.210 qpair failed and we were unable to recover it. 00:29:07.210 [2024-10-14 14:42:47.754279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.210 [2024-10-14 14:42:47.754286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.210 qpair failed and we were unable to recover it. 00:29:07.210 [2024-10-14 14:42:47.754573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.210 [2024-10-14 14:42:47.754580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.210 qpair failed and we were unable to recover it. 00:29:07.210 [2024-10-14 14:42:47.754881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.210 [2024-10-14 14:42:47.754887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.210 qpair failed and we were unable to recover it. 00:29:07.210 [2024-10-14 14:42:47.755106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.210 [2024-10-14 14:42:47.755113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.210 qpair failed and we were unable to recover it. 
00:29:07.210 [2024-10-14 14:42:47.755262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.210 [2024-10-14 14:42:47.755269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.210 qpair failed and we were unable to recover it. 00:29:07.210 [2024-10-14 14:42:47.755644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.210 [2024-10-14 14:42:47.755651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.210 qpair failed and we were unable to recover it. 00:29:07.210 [2024-10-14 14:42:47.755979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.210 [2024-10-14 14:42:47.755986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.210 qpair failed and we were unable to recover it. 00:29:07.210 [2024-10-14 14:42:47.756155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.210 [2024-10-14 14:42:47.756164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.210 qpair failed and we were unable to recover it. 00:29:07.210 [2024-10-14 14:42:47.756326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.210 [2024-10-14 14:42:47.756333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.210 qpair failed and we were unable to recover it. 
00:29:07.210 [2024-10-14 14:42:47.756625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.210 [2024-10-14 14:42:47.756632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.210 qpair failed and we were unable to recover it. 00:29:07.210 [2024-10-14 14:42:47.756816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.210 [2024-10-14 14:42:47.756823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.210 qpair failed and we were unable to recover it. 00:29:07.210 [2024-10-14 14:42:47.757173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.210 [2024-10-14 14:42:47.757180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.210 qpair failed and we were unable to recover it. 00:29:07.210 [2024-10-14 14:42:47.757461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.210 [2024-10-14 14:42:47.757468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.210 qpair failed and we were unable to recover it. 00:29:07.210 [2024-10-14 14:42:47.757652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.210 [2024-10-14 14:42:47.757659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.210 qpair failed and we were unable to recover it. 
00:29:07.210 [2024-10-14 14:42:47.758011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.210 [2024-10-14 14:42:47.758018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.210 qpair failed and we were unable to recover it. 00:29:07.210 [2024-10-14 14:42:47.758170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.210 [2024-10-14 14:42:47.758178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.210 qpair failed and we were unable to recover it. 00:29:07.210 [2024-10-14 14:42:47.758463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.210 [2024-10-14 14:42:47.758471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.210 qpair failed and we were unable to recover it. 00:29:07.210 [2024-10-14 14:42:47.758796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.210 [2024-10-14 14:42:47.758803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.210 qpair failed and we were unable to recover it. 00:29:07.210 [2024-10-14 14:42:47.759117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.210 [2024-10-14 14:42:47.759124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.210 qpair failed and we were unable to recover it. 
00:29:07.210 [2024-10-14 14:42:47.759425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.210 [2024-10-14 14:42:47.759431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.210 qpair failed and we were unable to recover it. 00:29:07.210 [2024-10-14 14:42:47.759715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.210 [2024-10-14 14:42:47.759721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.210 qpair failed and we were unable to recover it. 00:29:07.210 [2024-10-14 14:42:47.760009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.210 [2024-10-14 14:42:47.760016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.210 qpair failed and we were unable to recover it. 00:29:07.210 [2024-10-14 14:42:47.760328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.210 [2024-10-14 14:42:47.760336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.210 qpair failed and we were unable to recover it. 00:29:07.210 [2024-10-14 14:42:47.760664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.210 [2024-10-14 14:42:47.760672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.210 qpair failed and we were unable to recover it. 
00:29:07.210 [2024-10-14 14:42:47.760973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.210 [2024-10-14 14:42:47.760981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.210 qpair failed and we were unable to recover it. 00:29:07.210 [2024-10-14 14:42:47.761136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.210 [2024-10-14 14:42:47.761142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.210 qpair failed and we were unable to recover it. 00:29:07.210 [2024-10-14 14:42:47.761395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.210 [2024-10-14 14:42:47.761403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.210 qpair failed and we were unable to recover it. 00:29:07.210 [2024-10-14 14:42:47.761715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.210 [2024-10-14 14:42:47.761723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.210 qpair failed and we were unable to recover it. 00:29:07.210 [2024-10-14 14:42:47.762035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.210 [2024-10-14 14:42:47.762042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.210 qpair failed and we were unable to recover it. 
00:29:07.213 [2024-10-14 14:42:47.791363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.213 [2024-10-14 14:42:47.791370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.213 qpair failed and we were unable to recover it. 00:29:07.213 [2024-10-14 14:42:47.791540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.213 [2024-10-14 14:42:47.791547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.213 qpair failed and we were unable to recover it. 00:29:07.213 [2024-10-14 14:42:47.791893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.213 [2024-10-14 14:42:47.791900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.213 qpair failed and we were unable to recover it. 00:29:07.213 [2024-10-14 14:42:47.792201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.213 [2024-10-14 14:42:47.792208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.213 qpair failed and we were unable to recover it. 00:29:07.213 [2024-10-14 14:42:47.792505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.213 [2024-10-14 14:42:47.792512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.213 qpair failed and we were unable to recover it. 
00:29:07.213 [2024-10-14 14:42:47.792842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.213 [2024-10-14 14:42:47.792848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.213 qpair failed and we were unable to recover it. 00:29:07.213 [2024-10-14 14:42:47.793156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.213 [2024-10-14 14:42:47.793163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.213 qpair failed and we were unable to recover it. 00:29:07.213 [2024-10-14 14:42:47.793491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.213 [2024-10-14 14:42:47.793499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.213 qpair failed and we were unable to recover it. 00:29:07.213 [2024-10-14 14:42:47.793731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.213 [2024-10-14 14:42:47.793738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.213 qpair failed and we were unable to recover it. 00:29:07.213 [2024-10-14 14:42:47.794070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.213 [2024-10-14 14:42:47.794077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.213 qpair failed and we were unable to recover it. 
00:29:07.213 [2024-10-14 14:42:47.794264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.213 [2024-10-14 14:42:47.794271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.213 qpair failed and we were unable to recover it. 00:29:07.213 [2024-10-14 14:42:47.794446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.213 [2024-10-14 14:42:47.794453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.213 qpair failed and we were unable to recover it. 00:29:07.214 [2024-10-14 14:42:47.794723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.214 [2024-10-14 14:42:47.794731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.214 qpair failed and we were unable to recover it. 00:29:07.214 [2024-10-14 14:42:47.794984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.214 [2024-10-14 14:42:47.795000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.214 qpair failed and we were unable to recover it. 00:29:07.214 [2024-10-14 14:42:47.795303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.214 [2024-10-14 14:42:47.795311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.214 qpair failed and we were unable to recover it. 
00:29:07.214 [2024-10-14 14:42:47.795600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.214 [2024-10-14 14:42:47.795607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.214 qpair failed and we were unable to recover it. 00:29:07.214 [2024-10-14 14:42:47.795925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.214 [2024-10-14 14:42:47.795932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.214 qpair failed and we were unable to recover it. 00:29:07.214 [2024-10-14 14:42:47.796056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.214 [2024-10-14 14:42:47.796064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.214 qpair failed and we were unable to recover it. 00:29:07.214 [2024-10-14 14:42:47.796341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.214 [2024-10-14 14:42:47.796348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.214 qpair failed and we were unable to recover it. 00:29:07.214 [2024-10-14 14:42:47.796711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.214 [2024-10-14 14:42:47.796718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.214 qpair failed and we were unable to recover it. 
00:29:07.214 [2024-10-14 14:42:47.796912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.214 [2024-10-14 14:42:47.796919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.214 qpair failed and we were unable to recover it. 00:29:07.214 [2024-10-14 14:42:47.797075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.214 [2024-10-14 14:42:47.797082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.214 qpair failed and we were unable to recover it. 00:29:07.214 [2024-10-14 14:42:47.797362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.214 [2024-10-14 14:42:47.797369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.214 qpair failed and we were unable to recover it. 00:29:07.214 [2024-10-14 14:42:47.797590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.214 [2024-10-14 14:42:47.797596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.214 qpair failed and we were unable to recover it. 00:29:07.214 [2024-10-14 14:42:47.797630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.214 [2024-10-14 14:42:47.797637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.214 qpair failed and we were unable to recover it. 
00:29:07.214 [2024-10-14 14:42:47.797675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.214 [2024-10-14 14:42:47.797681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.214 qpair failed and we were unable to recover it. 00:29:07.214 [2024-10-14 14:42:47.797981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.214 [2024-10-14 14:42:47.797988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.214 qpair failed and we were unable to recover it. 00:29:07.214 [2024-10-14 14:42:47.798152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.214 [2024-10-14 14:42:47.798159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.214 qpair failed and we were unable to recover it. 00:29:07.214 [2024-10-14 14:42:47.798315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.214 [2024-10-14 14:42:47.798321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.214 qpair failed and we were unable to recover it. 00:29:07.214 [2024-10-14 14:42:47.798602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.214 [2024-10-14 14:42:47.798608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.214 qpair failed and we were unable to recover it. 
00:29:07.214 [2024-10-14 14:42:47.798867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.214 [2024-10-14 14:42:47.798875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.214 qpair failed and we were unable to recover it. 00:29:07.214 [2024-10-14 14:42:47.799095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.214 [2024-10-14 14:42:47.799102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.214 qpair failed and we were unable to recover it. 00:29:07.214 [2024-10-14 14:42:47.799401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.214 [2024-10-14 14:42:47.799408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.214 qpair failed and we were unable to recover it. 00:29:07.214 [2024-10-14 14:42:47.799691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.214 [2024-10-14 14:42:47.799697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.214 qpair failed and we were unable to recover it. 00:29:07.214 [2024-10-14 14:42:47.799900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.214 [2024-10-14 14:42:47.799907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.214 qpair failed and we were unable to recover it. 
00:29:07.214 [2024-10-14 14:42:47.800126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.214 [2024-10-14 14:42:47.800133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.214 qpair failed and we were unable to recover it. 00:29:07.214 [2024-10-14 14:42:47.800308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.214 [2024-10-14 14:42:47.800315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.214 qpair failed and we were unable to recover it. 00:29:07.214 [2024-10-14 14:42:47.800643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.214 [2024-10-14 14:42:47.800650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.214 qpair failed and we were unable to recover it. 00:29:07.214 [2024-10-14 14:42:47.800933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.214 [2024-10-14 14:42:47.800940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.214 qpair failed and we were unable to recover it. 00:29:07.214 [2024-10-14 14:42:47.801101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.214 [2024-10-14 14:42:47.801108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.214 qpair failed and we were unable to recover it. 
00:29:07.214 [2024-10-14 14:42:47.801344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.214 [2024-10-14 14:42:47.801351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.214 qpair failed and we were unable to recover it. 00:29:07.214 [2024-10-14 14:42:47.801633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.214 [2024-10-14 14:42:47.801640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.214 qpair failed and we were unable to recover it. 00:29:07.214 [2024-10-14 14:42:47.801821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.214 [2024-10-14 14:42:47.801828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.214 qpair failed and we were unable to recover it. 00:29:07.214 [2024-10-14 14:42:47.801995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.214 [2024-10-14 14:42:47.802001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.214 qpair failed and we were unable to recover it. 00:29:07.214 [2024-10-14 14:42:47.802295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.214 [2024-10-14 14:42:47.802303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.214 qpair failed and we were unable to recover it. 
00:29:07.214 [2024-10-14 14:42:47.802597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.214 [2024-10-14 14:42:47.802604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.214 qpair failed and we were unable to recover it. 00:29:07.214 [2024-10-14 14:42:47.802904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.214 [2024-10-14 14:42:47.802912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.214 qpair failed and we were unable to recover it. 00:29:07.214 [2024-10-14 14:42:47.803185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.214 [2024-10-14 14:42:47.803192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.214 qpair failed and we were unable to recover it. 00:29:07.214 [2024-10-14 14:42:47.803447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.214 [2024-10-14 14:42:47.803454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.214 qpair failed and we were unable to recover it. 00:29:07.214 [2024-10-14 14:42:47.803764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.214 [2024-10-14 14:42:47.803771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.214 qpair failed and we were unable to recover it. 
00:29:07.214 [2024-10-14 14:42:47.803944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.214 [2024-10-14 14:42:47.803950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.214 qpair failed and we were unable to recover it. 00:29:07.214 [2024-10-14 14:42:47.804223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.214 [2024-10-14 14:42:47.804230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.214 qpair failed and we were unable to recover it. 00:29:07.214 [2024-10-14 14:42:47.804500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.215 [2024-10-14 14:42:47.804509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.215 qpair failed and we were unable to recover it. 00:29:07.215 [2024-10-14 14:42:47.804818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.215 [2024-10-14 14:42:47.804824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.215 qpair failed and we were unable to recover it. 00:29:07.215 [2024-10-14 14:42:47.805135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.215 [2024-10-14 14:42:47.805142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.215 qpair failed and we were unable to recover it. 
00:29:07.215 [2024-10-14 14:42:47.805445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.215 [2024-10-14 14:42:47.805452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.215 qpair failed and we were unable to recover it. 00:29:07.215 [2024-10-14 14:42:47.805776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.215 [2024-10-14 14:42:47.805784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.215 qpair failed and we were unable to recover it. 00:29:07.215 [2024-10-14 14:42:47.806120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.215 [2024-10-14 14:42:47.806127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.215 qpair failed and we were unable to recover it. 00:29:07.215 [2024-10-14 14:42:47.806297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.215 [2024-10-14 14:42:47.806305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.215 qpair failed and we were unable to recover it. 00:29:07.215 [2024-10-14 14:42:47.806558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.215 [2024-10-14 14:42:47.806565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.215 qpair failed and we were unable to recover it. 
00:29:07.215 [2024-10-14 14:42:47.806743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.215 [2024-10-14 14:42:47.806750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.215 qpair failed and we were unable to recover it. 00:29:07.215 [2024-10-14 14:42:47.806928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.215 [2024-10-14 14:42:47.806935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.215 qpair failed and we were unable to recover it. 00:29:07.215 [2024-10-14 14:42:47.807234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.215 [2024-10-14 14:42:47.807240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.215 qpair failed and we were unable to recover it. 00:29:07.215 [2024-10-14 14:42:47.807483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.215 [2024-10-14 14:42:47.807489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.215 qpair failed and we were unable to recover it. 00:29:07.215 [2024-10-14 14:42:47.807808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.215 [2024-10-14 14:42:47.807814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.215 qpair failed and we were unable to recover it. 
00:29:07.215 [2024-10-14 14:42:47.808055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.215 [2024-10-14 14:42:47.808072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.215 qpair failed and we were unable to recover it. 00:29:07.215 [2024-10-14 14:42:47.808396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.215 [2024-10-14 14:42:47.808404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.215 qpair failed and we were unable to recover it. 00:29:07.215 [2024-10-14 14:42:47.808691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.215 [2024-10-14 14:42:47.808699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.215 qpair failed and we were unable to recover it. 00:29:07.215 [2024-10-14 14:42:47.809013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.215 [2024-10-14 14:42:47.809021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.215 qpair failed and we were unable to recover it. 00:29:07.215 [2024-10-14 14:42:47.809376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.215 [2024-10-14 14:42:47.809383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.215 qpair failed and we were unable to recover it. 
00:29:07.215 [2024-10-14 14:42:47.809591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.215 [2024-10-14 14:42:47.809597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.215 qpair failed and we were unable to recover it. 00:29:07.215 [2024-10-14 14:42:47.809811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.215 [2024-10-14 14:42:47.809817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.215 qpair failed and we were unable to recover it. 00:29:07.215 [2024-10-14 14:42:47.810004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.215 [2024-10-14 14:42:47.810010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.215 qpair failed and we were unable to recover it. 00:29:07.215 [2024-10-14 14:42:47.810159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.215 [2024-10-14 14:42:47.810166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.215 qpair failed and we were unable to recover it. 00:29:07.215 [2024-10-14 14:42:47.810541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.215 [2024-10-14 14:42:47.810548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.215 qpair failed and we were unable to recover it. 
00:29:07.215 [2024-10-14 14:42:47.810587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.215 [2024-10-14 14:42:47.810593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.215 qpair failed and we were unable to recover it. 00:29:07.215 [2024-10-14 14:42:47.810935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.215 [2024-10-14 14:42:47.810942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.215 qpair failed and we were unable to recover it. 00:29:07.215 [2024-10-14 14:42:47.811179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.215 [2024-10-14 14:42:47.811185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.215 qpair failed and we were unable to recover it. 00:29:07.215 [2024-10-14 14:42:47.811510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.215 [2024-10-14 14:42:47.811517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.215 qpair failed and we were unable to recover it. 00:29:07.215 [2024-10-14 14:42:47.811673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.215 [2024-10-14 14:42:47.811680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.215 qpair failed and we were unable to recover it. 
00:29:07.218 [2024-10-14 14:42:47.840980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.218 [2024-10-14 14:42:47.840987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.218 qpair failed and we were unable to recover it. 00:29:07.218 [2024-10-14 14:42:47.841027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.218 [2024-10-14 14:42:47.841035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.218 qpair failed and we were unable to recover it. 00:29:07.218 [2024-10-14 14:42:47.841250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.218 [2024-10-14 14:42:47.841257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.218 qpair failed and we were unable to recover it. 00:29:07.218 [2024-10-14 14:42:47.841560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.218 [2024-10-14 14:42:47.841567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.218 qpair failed and we were unable to recover it. 00:29:07.218 [2024-10-14 14:42:47.841974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.218 [2024-10-14 14:42:47.841980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.218 qpair failed and we were unable to recover it. 
00:29:07.218 [2024-10-14 14:42:47.842144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.218 [2024-10-14 14:42:47.842151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.218 qpair failed and we were unable to recover it. 00:29:07.218 [2024-10-14 14:42:47.842478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.218 [2024-10-14 14:42:47.842485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.218 qpair failed and we were unable to recover it. 00:29:07.218 [2024-10-14 14:42:47.842774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.218 [2024-10-14 14:42:47.842781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.218 qpair failed and we were unable to recover it. 00:29:07.218 [2024-10-14 14:42:47.843131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.218 [2024-10-14 14:42:47.843138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.218 qpair failed and we were unable to recover it. 00:29:07.218 [2024-10-14 14:42:47.843454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.218 [2024-10-14 14:42:47.843461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.218 qpair failed and we were unable to recover it. 
00:29:07.218 [2024-10-14 14:42:47.843706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.218 [2024-10-14 14:42:47.843713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.218 qpair failed and we were unable to recover it. 00:29:07.218 [2024-10-14 14:42:47.844038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.218 [2024-10-14 14:42:47.844046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.218 qpair failed and we were unable to recover it. 00:29:07.218 [2024-10-14 14:42:47.844349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.218 [2024-10-14 14:42:47.844356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.218 qpair failed and we were unable to recover it. 00:29:07.218 [2024-10-14 14:42:47.844749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.218 [2024-10-14 14:42:47.844755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.218 qpair failed and we were unable to recover it. 00:29:07.218 [2024-10-14 14:42:47.844912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.218 [2024-10-14 14:42:47.844925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.218 qpair failed and we were unable to recover it. 
00:29:07.218 [2024-10-14 14:42:47.845236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.218 [2024-10-14 14:42:47.845243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.218 qpair failed and we were unable to recover it. 00:29:07.218 [2024-10-14 14:42:47.845585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.218 [2024-10-14 14:42:47.845593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.218 qpair failed and we were unable to recover it. 00:29:07.218 [2024-10-14 14:42:47.845881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.218 [2024-10-14 14:42:47.845887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.218 qpair failed and we were unable to recover it. 00:29:07.218 [2024-10-14 14:42:47.846175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.218 [2024-10-14 14:42:47.846182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.218 qpair failed and we were unable to recover it. 00:29:07.219 [2024-10-14 14:42:47.846499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.219 [2024-10-14 14:42:47.846506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.219 qpair failed and we were unable to recover it. 
00:29:07.219 [2024-10-14 14:42:47.846720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.219 [2024-10-14 14:42:47.846727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.219 qpair failed and we were unable to recover it. 00:29:07.219 [2024-10-14 14:42:47.847081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.219 [2024-10-14 14:42:47.847088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.219 qpair failed and we were unable to recover it. 00:29:07.219 [2024-10-14 14:42:47.847394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.219 [2024-10-14 14:42:47.847402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.219 qpair failed and we were unable to recover it. 00:29:07.219 [2024-10-14 14:42:47.847720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.219 [2024-10-14 14:42:47.847729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.219 qpair failed and we were unable to recover it. 00:29:07.219 [2024-10-14 14:42:47.848047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.219 [2024-10-14 14:42:47.848054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.219 qpair failed and we were unable to recover it. 
00:29:07.219 [2024-10-14 14:42:47.848212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.219 [2024-10-14 14:42:47.848219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.219 qpair failed and we were unable to recover it. 00:29:07.219 [2024-10-14 14:42:47.848375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.219 [2024-10-14 14:42:47.848381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.219 qpair failed and we were unable to recover it. 00:29:07.219 [2024-10-14 14:42:47.848692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.219 [2024-10-14 14:42:47.848699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.219 qpair failed and we were unable to recover it. 00:29:07.219 [2024-10-14 14:42:47.849012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.219 [2024-10-14 14:42:47.849019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.219 qpair failed and we were unable to recover it. 00:29:07.219 [2024-10-14 14:42:47.849346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.219 [2024-10-14 14:42:47.849353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.219 qpair failed and we were unable to recover it. 
00:29:07.219 [2024-10-14 14:42:47.849685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.219 [2024-10-14 14:42:47.849692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.219 qpair failed and we were unable to recover it. 00:29:07.219 [2024-10-14 14:42:47.850001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.219 [2024-10-14 14:42:47.850008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.219 qpair failed and we were unable to recover it. 00:29:07.219 [2024-10-14 14:42:47.850179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.219 [2024-10-14 14:42:47.850186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.219 qpair failed and we were unable to recover it. 00:29:07.219 [2024-10-14 14:42:47.850536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.219 [2024-10-14 14:42:47.850543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.219 qpair failed and we were unable to recover it. 00:29:07.219 [2024-10-14 14:42:47.850831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.219 [2024-10-14 14:42:47.850838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.219 qpair failed and we were unable to recover it. 
00:29:07.219 [2024-10-14 14:42:47.851147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.219 [2024-10-14 14:42:47.851154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.219 qpair failed and we were unable to recover it. 00:29:07.219 [2024-10-14 14:42:47.851473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.219 [2024-10-14 14:42:47.851481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.219 qpair failed and we were unable to recover it. 00:29:07.219 [2024-10-14 14:42:47.851808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.219 [2024-10-14 14:42:47.851815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.219 qpair failed and we were unable to recover it. 00:29:07.219 [2024-10-14 14:42:47.852012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.219 [2024-10-14 14:42:47.852018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.219 qpair failed and we were unable to recover it. 00:29:07.219 [2024-10-14 14:42:47.852201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.219 [2024-10-14 14:42:47.852208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.219 qpair failed and we were unable to recover it. 
00:29:07.219 [2024-10-14 14:42:47.852620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.219 [2024-10-14 14:42:47.852627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.219 qpair failed and we were unable to recover it. 00:29:07.219 [2024-10-14 14:42:47.852935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.219 [2024-10-14 14:42:47.852942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.219 qpair failed and we were unable to recover it. 00:29:07.219 [2024-10-14 14:42:47.853108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.219 [2024-10-14 14:42:47.853115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.219 qpair failed and we were unable to recover it. 00:29:07.219 [2024-10-14 14:42:47.853235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.219 [2024-10-14 14:42:47.853242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.219 qpair failed and we were unable to recover it. 00:29:07.219 [2024-10-14 14:42:47.853422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.219 [2024-10-14 14:42:47.853428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.219 qpair failed and we were unable to recover it. 
00:29:07.219 [2024-10-14 14:42:47.853596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.219 [2024-10-14 14:42:47.853603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.219 qpair failed and we were unable to recover it. 00:29:07.219 [2024-10-14 14:42:47.853818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.219 [2024-10-14 14:42:47.853825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.219 qpair failed and we were unable to recover it. 00:29:07.219 [2024-10-14 14:42:47.854050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.219 [2024-10-14 14:42:47.854057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.219 qpair failed and we were unable to recover it. 00:29:07.219 [2024-10-14 14:42:47.854413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.219 [2024-10-14 14:42:47.854421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.219 qpair failed and we were unable to recover it. 00:29:07.219 [2024-10-14 14:42:47.854745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.219 [2024-10-14 14:42:47.854752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.219 qpair failed and we were unable to recover it. 
00:29:07.219 [2024-10-14 14:42:47.854932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.219 [2024-10-14 14:42:47.854940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.219 qpair failed and we were unable to recover it. 00:29:07.219 [2024-10-14 14:42:47.855244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.219 [2024-10-14 14:42:47.855251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.219 qpair failed and we were unable to recover it. 00:29:07.219 [2024-10-14 14:42:47.855285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.219 [2024-10-14 14:42:47.855292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.219 qpair failed and we were unable to recover it. 00:29:07.219 [2024-10-14 14:42:47.855440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.219 [2024-10-14 14:42:47.855447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.219 qpair failed and we were unable to recover it. 00:29:07.219 [2024-10-14 14:42:47.855803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.219 [2024-10-14 14:42:47.855809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.219 qpair failed and we were unable to recover it. 
00:29:07.219 [2024-10-14 14:42:47.856013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.219 [2024-10-14 14:42:47.856021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.219 qpair failed and we were unable to recover it. 00:29:07.219 [2024-10-14 14:42:47.856436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.219 [2024-10-14 14:42:47.856444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.219 qpair failed and we were unable to recover it. 00:29:07.219 [2024-10-14 14:42:47.856625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.219 [2024-10-14 14:42:47.856632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.219 qpair failed and we were unable to recover it. 00:29:07.219 [2024-10-14 14:42:47.856797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.219 [2024-10-14 14:42:47.856803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.219 qpair failed and we were unable to recover it. 00:29:07.219 [2024-10-14 14:42:47.857104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.220 [2024-10-14 14:42:47.857111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.220 qpair failed and we were unable to recover it. 
00:29:07.220 [2024-10-14 14:42:47.857494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.220 [2024-10-14 14:42:47.857501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.220 qpair failed and we were unable to recover it. 00:29:07.220 [2024-10-14 14:42:47.857678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.220 [2024-10-14 14:42:47.857686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.220 qpair failed and we were unable to recover it. 00:29:07.220 [2024-10-14 14:42:47.857879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.220 [2024-10-14 14:42:47.857887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.220 qpair failed and we were unable to recover it. 00:29:07.220 [2024-10-14 14:42:47.858186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.220 [2024-10-14 14:42:47.858193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.220 qpair failed and we were unable to recover it. 00:29:07.220 [2024-10-14 14:42:47.858474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.220 [2024-10-14 14:42:47.858481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.220 qpair failed and we were unable to recover it. 
00:29:07.220 [2024-10-14 14:42:47.858831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.220 [2024-10-14 14:42:47.858838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.220 qpair failed and we were unable to recover it. 00:29:07.220 [2024-10-14 14:42:47.859066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.220 [2024-10-14 14:42:47.859074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.220 qpair failed and we were unable to recover it. 00:29:07.220 [2024-10-14 14:42:47.859397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.220 [2024-10-14 14:42:47.859404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.220 qpair failed and we were unable to recover it. 00:29:07.220 [2024-10-14 14:42:47.859698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.220 [2024-10-14 14:42:47.859705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.220 qpair failed and we were unable to recover it. 00:29:07.220 [2024-10-14 14:42:47.860013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.220 [2024-10-14 14:42:47.860020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.220 qpair failed and we were unable to recover it. 
00:29:07.220 [2024-10-14 14:42:47.860340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.220 [2024-10-14 14:42:47.860346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.220 qpair failed and we were unable to recover it. 00:29:07.220 [2024-10-14 14:42:47.860640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.220 [2024-10-14 14:42:47.860648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.220 qpair failed and we were unable to recover it. 00:29:07.220 [2024-10-14 14:42:47.861009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.220 [2024-10-14 14:42:47.861016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.220 qpair failed and we were unable to recover it. 00:29:07.220 [2024-10-14 14:42:47.861109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.220 [2024-10-14 14:42:47.861115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.220 qpair failed and we were unable to recover it. 00:29:07.220 [2024-10-14 14:42:47.861399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.220 [2024-10-14 14:42:47.861406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.220 qpair failed and we were unable to recover it. 
00:29:07.220 [2024-10-14 14:42:47.861699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.220 [2024-10-14 14:42:47.861706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.220 qpair failed and we were unable to recover it. 00:29:07.220 [2024-10-14 14:42:47.861992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.220 [2024-10-14 14:42:47.861999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.220 qpair failed and we were unable to recover it. 00:29:07.220 [2024-10-14 14:42:47.862286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.220 [2024-10-14 14:42:47.862293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.220 qpair failed and we were unable to recover it. 00:29:07.220 [2024-10-14 14:42:47.862534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.220 [2024-10-14 14:42:47.862541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.220 qpair failed and we were unable to recover it. 00:29:07.220 [2024-10-14 14:42:47.862880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.220 [2024-10-14 14:42:47.862888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.220 qpair failed and we were unable to recover it. 
00:29:07.223 [2024-10-14 14:42:47.891656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.223 [2024-10-14 14:42:47.891663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.223 qpair failed and we were unable to recover it. 00:29:07.223 [2024-10-14 14:42:47.891910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.223 [2024-10-14 14:42:47.891918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.223 qpair failed and we were unable to recover it. 00:29:07.223 [2024-10-14 14:42:47.892108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.223 [2024-10-14 14:42:47.892116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.223 qpair failed and we were unable to recover it. 00:29:07.223 [2024-10-14 14:42:47.892159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.223 [2024-10-14 14:42:47.892166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.223 qpair failed and we were unable to recover it. 00:29:07.223 [2024-10-14 14:42:47.892369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.223 [2024-10-14 14:42:47.892377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.223 qpair failed and we were unable to recover it. 
00:29:07.223 [2024-10-14 14:42:47.892673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.223 [2024-10-14 14:42:47.892680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.223 qpair failed and we were unable to recover it. 00:29:07.223 [2024-10-14 14:42:47.892983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.223 [2024-10-14 14:42:47.892990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.223 qpair failed and we were unable to recover it. 00:29:07.223 [2024-10-14 14:42:47.893061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.223 [2024-10-14 14:42:47.893072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.223 qpair failed and we were unable to recover it. 00:29:07.223 [2024-10-14 14:42:47.893235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.223 [2024-10-14 14:42:47.893242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.223 qpair failed and we were unable to recover it. 00:29:07.223 [2024-10-14 14:42:47.893402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.223 [2024-10-14 14:42:47.893410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.223 qpair failed and we were unable to recover it. 
00:29:07.223 [2024-10-14 14:42:47.893491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.223 [2024-10-14 14:42:47.893500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.223 qpair failed and we were unable to recover it. 00:29:07.223 [2024-10-14 14:42:47.893734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.223 [2024-10-14 14:42:47.893741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.223 qpair failed and we were unable to recover it. 00:29:07.223 [2024-10-14 14:42:47.893955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.223 [2024-10-14 14:42:47.893962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.223 qpair failed and we were unable to recover it. 00:29:07.223 [2024-10-14 14:42:47.894163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.223 [2024-10-14 14:42:47.894170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.223 qpair failed and we were unable to recover it. 00:29:07.223 [2024-10-14 14:42:47.894358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.223 [2024-10-14 14:42:47.894366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.223 qpair failed and we were unable to recover it. 
00:29:07.223 [2024-10-14 14:42:47.894573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.223 [2024-10-14 14:42:47.894580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.223 qpair failed and we were unable to recover it. 00:29:07.223 [2024-10-14 14:42:47.894738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.223 [2024-10-14 14:42:47.894745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.223 qpair failed and we were unable to recover it. 00:29:07.223 [2024-10-14 14:42:47.895025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.223 [2024-10-14 14:42:47.895032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.223 qpair failed and we were unable to recover it. 00:29:07.223 [2024-10-14 14:42:47.895336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.223 [2024-10-14 14:42:47.895343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.223 qpair failed and we were unable to recover it. 00:29:07.223 [2024-10-14 14:42:47.895514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.223 [2024-10-14 14:42:47.895523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.223 qpair failed and we were unable to recover it. 
00:29:07.223 [2024-10-14 14:42:47.895679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.223 [2024-10-14 14:42:47.895686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.223 qpair failed and we were unable to recover it. 00:29:07.223 [2024-10-14 14:42:47.895978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.223 [2024-10-14 14:42:47.895986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.223 qpair failed and we were unable to recover it. 00:29:07.223 [2024-10-14 14:42:47.896278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.223 [2024-10-14 14:42:47.896285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.223 qpair failed and we were unable to recover it. 00:29:07.223 [2024-10-14 14:42:47.896650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.223 [2024-10-14 14:42:47.896657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.223 qpair failed and we were unable to recover it. 00:29:07.223 [2024-10-14 14:42:47.896822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.223 [2024-10-14 14:42:47.896830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.223 qpair failed and we were unable to recover it. 
00:29:07.224 [2024-10-14 14:42:47.897114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.224 [2024-10-14 14:42:47.897122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.224 qpair failed and we were unable to recover it. 00:29:07.224 [2024-10-14 14:42:47.897421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.224 [2024-10-14 14:42:47.897428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.224 qpair failed and we were unable to recover it. 00:29:07.224 [2024-10-14 14:42:47.897714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.224 [2024-10-14 14:42:47.897721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.224 qpair failed and we were unable to recover it. 00:29:07.224 [2024-10-14 14:42:47.897889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.224 [2024-10-14 14:42:47.897897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.224 qpair failed and we were unable to recover it. 00:29:07.224 [2024-10-14 14:42:47.898178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.224 [2024-10-14 14:42:47.898186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.224 qpair failed and we were unable to recover it. 
00:29:07.224 [2024-10-14 14:42:47.898472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.224 [2024-10-14 14:42:47.898479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.224 qpair failed and we were unable to recover it. 00:29:07.224 [2024-10-14 14:42:47.898745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.224 [2024-10-14 14:42:47.898752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.224 qpair failed and we were unable to recover it. 00:29:07.224 [2024-10-14 14:42:47.899077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.224 [2024-10-14 14:42:47.899086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.224 qpair failed and we were unable to recover it. 00:29:07.224 [2024-10-14 14:42:47.899373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.224 [2024-10-14 14:42:47.899381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.224 qpair failed and we were unable to recover it. 00:29:07.224 [2024-10-14 14:42:47.899541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.224 [2024-10-14 14:42:47.899548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.224 qpair failed and we were unable to recover it. 
00:29:07.224 [2024-10-14 14:42:47.899703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.224 [2024-10-14 14:42:47.899710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.224 qpair failed and we were unable to recover it. 00:29:07.224 [2024-10-14 14:42:47.899996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.224 [2024-10-14 14:42:47.900003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.224 qpair failed and we were unable to recover it. 00:29:07.224 [2024-10-14 14:42:47.900302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.224 [2024-10-14 14:42:47.900309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.224 qpair failed and we were unable to recover it. 00:29:07.224 [2024-10-14 14:42:47.900611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.224 [2024-10-14 14:42:47.900619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.224 qpair failed and we were unable to recover it. 00:29:07.224 [2024-10-14 14:42:47.900869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.224 [2024-10-14 14:42:47.900884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.224 qpair failed and we were unable to recover it. 
00:29:07.224 [2024-10-14 14:42:47.901188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.224 [2024-10-14 14:42:47.901196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.224 qpair failed and we were unable to recover it. 00:29:07.224 [2024-10-14 14:42:47.901506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.224 [2024-10-14 14:42:47.901513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.224 qpair failed and we were unable to recover it. 00:29:07.224 [2024-10-14 14:42:47.901790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.224 [2024-10-14 14:42:47.901797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.224 qpair failed and we were unable to recover it. 00:29:07.224 [2024-10-14 14:42:47.902124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.224 [2024-10-14 14:42:47.902132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.224 qpair failed and we were unable to recover it. 00:29:07.224 [2024-10-14 14:42:47.902458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.224 [2024-10-14 14:42:47.902467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.224 qpair failed and we were unable to recover it. 
00:29:07.224 [2024-10-14 14:42:47.902790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.224 [2024-10-14 14:42:47.902797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.224 qpair failed and we were unable to recover it. 00:29:07.224 [2024-10-14 14:42:47.903109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.224 [2024-10-14 14:42:47.903117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.224 qpair failed and we were unable to recover it. 00:29:07.224 [2024-10-14 14:42:47.903439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.224 [2024-10-14 14:42:47.903445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.224 qpair failed and we were unable to recover it. 00:29:07.224 [2024-10-14 14:42:47.903733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.224 [2024-10-14 14:42:47.903740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.224 qpair failed and we were unable to recover it. 00:29:07.224 [2024-10-14 14:42:47.904034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.224 [2024-10-14 14:42:47.904042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.224 qpair failed and we were unable to recover it. 
00:29:07.224 [2024-10-14 14:42:47.904349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.224 [2024-10-14 14:42:47.904360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.224 qpair failed and we were unable to recover it. 00:29:07.224 [2024-10-14 14:42:47.904667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.224 [2024-10-14 14:42:47.904675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.224 qpair failed and we were unable to recover it. 00:29:07.224 [2024-10-14 14:42:47.904983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.224 [2024-10-14 14:42:47.904991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.224 qpair failed and we were unable to recover it. 00:29:07.224 [2024-10-14 14:42:47.905301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.224 [2024-10-14 14:42:47.905309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.224 qpair failed and we were unable to recover it. 00:29:07.224 [2024-10-14 14:42:47.905478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.224 [2024-10-14 14:42:47.905485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.224 qpair failed and we were unable to recover it. 
00:29:07.224 [2024-10-14 14:42:47.905756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.224 [2024-10-14 14:42:47.905762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.224 qpair failed and we were unable to recover it. 00:29:07.224 [2024-10-14 14:42:47.906065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.224 [2024-10-14 14:42:47.906072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.224 qpair failed and we were unable to recover it. 00:29:07.224 [2024-10-14 14:42:47.906307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.224 [2024-10-14 14:42:47.906315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.224 qpair failed and we were unable to recover it. 00:29:07.224 [2024-10-14 14:42:47.906647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.224 [2024-10-14 14:42:47.906654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.224 qpair failed and we were unable to recover it. 00:29:07.224 [2024-10-14 14:42:47.906970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.224 [2024-10-14 14:42:47.906980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.224 qpair failed and we were unable to recover it. 
00:29:07.224 [2024-10-14 14:42:47.907142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.224 [2024-10-14 14:42:47.907150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.224 qpair failed and we were unable to recover it. 00:29:07.224 [2024-10-14 14:42:47.907439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.224 [2024-10-14 14:42:47.907446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.224 qpair failed and we were unable to recover it. 00:29:07.224 [2024-10-14 14:42:47.907484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.224 [2024-10-14 14:42:47.907490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.224 qpair failed and we were unable to recover it. 00:29:07.224 [2024-10-14 14:42:47.907753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.224 [2024-10-14 14:42:47.907760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.224 qpair failed and we were unable to recover it. 00:29:07.224 [2024-10-14 14:42:47.908073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.224 [2024-10-14 14:42:47.908081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.224 qpair failed and we were unable to recover it. 
00:29:07.498 [2024-10-14 14:42:47.908410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.498 [2024-10-14 14:42:47.908419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.498 qpair failed and we were unable to recover it. 00:29:07.498 [2024-10-14 14:42:47.908725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.498 [2024-10-14 14:42:47.908733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.498 qpair failed and we were unable to recover it. 00:29:07.498 [2024-10-14 14:42:47.908932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.498 [2024-10-14 14:42:47.908939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.498 qpair failed and we were unable to recover it. 00:29:07.498 [2024-10-14 14:42:47.909219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.498 [2024-10-14 14:42:47.909227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.498 qpair failed and we were unable to recover it. 00:29:07.498 [2024-10-14 14:42:47.909575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.498 [2024-10-14 14:42:47.909583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.498 qpair failed and we were unable to recover it. 
00:29:07.498 [2024-10-14 14:42:47.909894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.498 [2024-10-14 14:42:47.909902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.498 qpair failed and we were unable to recover it. 00:29:07.498 [2024-10-14 14:42:47.910141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.498 [2024-10-14 14:42:47.910148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.498 qpair failed and we were unable to recover it. 00:29:07.498 [2024-10-14 14:42:47.910456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.498 [2024-10-14 14:42:47.910464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.498 qpair failed and we were unable to recover it. 00:29:07.498 [2024-10-14 14:42:47.910619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.498 [2024-10-14 14:42:47.910627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.498 qpair failed and we were unable to recover it. 00:29:07.498 [2024-10-14 14:42:47.910897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.498 [2024-10-14 14:42:47.910905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.498 qpair failed and we were unable to recover it. 
00:29:07.498 [2024-10-14 14:42:47.911176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.498 [2024-10-14 14:42:47.911183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.498 qpair failed and we were unable to recover it.
00:29:07.498 [2024-10-14 14:42:47.911487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.498 [2024-10-14 14:42:47.911495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.498 qpair failed and we were unable to recover it.
00:29:07.498 [2024-10-14 14:42:47.911808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.498 [2024-10-14 14:42:47.911816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.498 qpair failed and we were unable to recover it.
00:29:07.498 [2024-10-14 14:42:47.912145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.498 [2024-10-14 14:42:47.912153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.498 qpair failed and we were unable to recover it.
00:29:07.498 [2024-10-14 14:42:47.912493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.498 [2024-10-14 14:42:47.912500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.498 qpair failed and we were unable to recover it.
00:29:07.498 [2024-10-14 14:42:47.912782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.498 [2024-10-14 14:42:47.912789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.498 qpair failed and we were unable to recover it.
00:29:07.498 [2024-10-14 14:42:47.912970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.498 [2024-10-14 14:42:47.912977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.498 qpair failed and we were unable to recover it.
00:29:07.498 [2024-10-14 14:42:47.913303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.498 [2024-10-14 14:42:47.913311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.498 qpair failed and we were unable to recover it.
00:29:07.498 [2024-10-14 14:42:47.913632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.498 [2024-10-14 14:42:47.913639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.498 qpair failed and we were unable to recover it.
00:29:07.498 [2024-10-14 14:42:47.913947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.498 [2024-10-14 14:42:47.913954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.498 qpair failed and we were unable to recover it.
00:29:07.498 [2024-10-14 14:42:47.914289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.498 [2024-10-14 14:42:47.914298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.498 qpair failed and we were unable to recover it.
00:29:07.498 [2024-10-14 14:42:47.914586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.498 [2024-10-14 14:42:47.914594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.498 qpair failed and we were unable to recover it.
00:29:07.498 [2024-10-14 14:42:47.914904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.498 [2024-10-14 14:42:47.914911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.498 qpair failed and we were unable to recover it.
00:29:07.498 [2024-10-14 14:42:47.915230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.498 [2024-10-14 14:42:47.915238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.498 qpair failed and we were unable to recover it.
00:29:07.498 [2024-10-14 14:42:47.915537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.498 [2024-10-14 14:42:47.915543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.498 qpair failed and we were unable to recover it.
00:29:07.498 [2024-10-14 14:42:47.915843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.498 [2024-10-14 14:42:47.915850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.498 qpair failed and we were unable to recover it.
00:29:07.498 [2024-10-14 14:42:47.916024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.498 [2024-10-14 14:42:47.916031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.498 qpair failed and we were unable to recover it.
00:29:07.498 [2024-10-14 14:42:47.916400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.498 [2024-10-14 14:42:47.916407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.498 qpair failed and we were unable to recover it.
00:29:07.498 [2024-10-14 14:42:47.916698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.498 [2024-10-14 14:42:47.916705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.498 qpair failed and we were unable to recover it.
00:29:07.498 [2024-10-14 14:42:47.916997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.498 [2024-10-14 14:42:47.917004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.498 qpair failed and we were unable to recover it.
00:29:07.498 [2024-10-14 14:42:47.917326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.498 [2024-10-14 14:42:47.917333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.498 qpair failed and we were unable to recover it.
00:29:07.498 [2024-10-14 14:42:47.917669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.498 [2024-10-14 14:42:47.917676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.498 qpair failed and we were unable to recover it.
00:29:07.498 [2024-10-14 14:42:47.918011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.498 [2024-10-14 14:42:47.918018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.498 qpair failed and we were unable to recover it.
00:29:07.498 [2024-10-14 14:42:47.918273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.498 [2024-10-14 14:42:47.918281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.498 qpair failed and we were unable to recover it.
00:29:07.498 [2024-10-14 14:42:47.918597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.499 [2024-10-14 14:42:47.918609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.499 qpair failed and we were unable to recover it.
00:29:07.499 [2024-10-14 14:42:47.918897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.499 [2024-10-14 14:42:47.918904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.499 qpair failed and we were unable to recover it.
00:29:07.499 [2024-10-14 14:42:47.919126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.499 [2024-10-14 14:42:47.919133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.499 qpair failed and we were unable to recover it.
00:29:07.499 [2024-10-14 14:42:47.919309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.499 [2024-10-14 14:42:47.919316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.499 qpair failed and we were unable to recover it.
00:29:07.499 [2024-10-14 14:42:47.919503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.499 [2024-10-14 14:42:47.919511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.499 qpair failed and we were unable to recover it.
00:29:07.499 [2024-10-14 14:42:47.919731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.499 [2024-10-14 14:42:47.919738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.499 qpair failed and we were unable to recover it.
00:29:07.499 [2024-10-14 14:42:47.919894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.499 [2024-10-14 14:42:47.919901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.499 qpair failed and we were unable to recover it.
00:29:07.499 [2024-10-14 14:42:47.920085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.499 [2024-10-14 14:42:47.920092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.499 qpair failed and we were unable to recover it.
00:29:07.499 [2024-10-14 14:42:47.920271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.499 [2024-10-14 14:42:47.920278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.499 qpair failed and we were unable to recover it.
00:29:07.499 [2024-10-14 14:42:47.920568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.499 [2024-10-14 14:42:47.920575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.499 qpair failed and we were unable to recover it.
00:29:07.499 [2024-10-14 14:42:47.920793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.499 [2024-10-14 14:42:47.920800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.499 qpair failed and we were unable to recover it.
00:29:07.499 [2024-10-14 14:42:47.921123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.499 [2024-10-14 14:42:47.921130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.499 qpair failed and we were unable to recover it.
00:29:07.499 [2024-10-14 14:42:47.921423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.499 [2024-10-14 14:42:47.921430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.499 qpair failed and we were unable to recover it.
00:29:07.499 [2024-10-14 14:42:47.921613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.499 [2024-10-14 14:42:47.921620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.499 qpair failed and we were unable to recover it.
00:29:07.499 [2024-10-14 14:42:47.921802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.499 [2024-10-14 14:42:47.921808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.499 qpair failed and we were unable to recover it.
00:29:07.499 [2024-10-14 14:42:47.922128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.499 [2024-10-14 14:42:47.922136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.499 qpair failed and we were unable to recover it.
00:29:07.499 [2024-10-14 14:42:47.922303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.499 [2024-10-14 14:42:47.922310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.499 qpair failed and we were unable to recover it.
00:29:07.499 [2024-10-14 14:42:47.922493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.499 [2024-10-14 14:42:47.922501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.499 qpair failed and we were unable to recover it.
00:29:07.499 [2024-10-14 14:42:47.922839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.499 [2024-10-14 14:42:47.922846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.499 qpair failed and we were unable to recover it.
00:29:07.499 [2024-10-14 14:42:47.923019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.499 [2024-10-14 14:42:47.923027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.499 qpair failed and we were unable to recover it.
00:29:07.499 [2024-10-14 14:42:47.923188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.499 [2024-10-14 14:42:47.923195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.499 qpair failed and we were unable to recover it.
00:29:07.499 [2024-10-14 14:42:47.923469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.499 [2024-10-14 14:42:47.923476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.499 qpair failed and we were unable to recover it.
00:29:07.499 [2024-10-14 14:42:47.923658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.499 [2024-10-14 14:42:47.923664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.499 qpair failed and we were unable to recover it.
00:29:07.499 [2024-10-14 14:42:47.923967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.499 [2024-10-14 14:42:47.923975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.499 qpair failed and we were unable to recover it.
00:29:07.499 [2024-10-14 14:42:47.924287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.499 [2024-10-14 14:42:47.924295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.499 qpair failed and we were unable to recover it.
00:29:07.499 [2024-10-14 14:42:47.924452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.499 [2024-10-14 14:42:47.924459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.499 qpair failed and we were unable to recover it.
00:29:07.499 [2024-10-14 14:42:47.924497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.499 [2024-10-14 14:42:47.924505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.499 qpair failed and we were unable to recover it.
00:29:07.499 [2024-10-14 14:42:47.924847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.499 [2024-10-14 14:42:47.924854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.499 qpair failed and we were unable to recover it.
00:29:07.499 [2024-10-14 14:42:47.925151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.499 [2024-10-14 14:42:47.925158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.499 qpair failed and we were unable to recover it.
00:29:07.499 [2024-10-14 14:42:47.925481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.499 [2024-10-14 14:42:47.925488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.499 qpair failed and we were unable to recover it.
00:29:07.499 [2024-10-14 14:42:47.925664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.499 [2024-10-14 14:42:47.925671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.499 qpair failed and we were unable to recover it.
00:29:07.499 [2024-10-14 14:42:47.925973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.499 [2024-10-14 14:42:47.925980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.499 qpair failed and we were unable to recover it.
00:29:07.499 [2024-10-14 14:42:47.926180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.499 [2024-10-14 14:42:47.926188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.499 qpair failed and we were unable to recover it.
00:29:07.499 [2024-10-14 14:42:47.926375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.499 [2024-10-14 14:42:47.926383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.499 qpair failed and we were unable to recover it.
00:29:07.499 [2024-10-14 14:42:47.926649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.499 [2024-10-14 14:42:47.926657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.499 qpair failed and we were unable to recover it.
00:29:07.499 [2024-10-14 14:42:47.926847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.499 [2024-10-14 14:42:47.926854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.499 qpair failed and we were unable to recover it.
00:29:07.499 [2024-10-14 14:42:47.927168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.499 [2024-10-14 14:42:47.927175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.499 qpair failed and we were unable to recover it.
00:29:07.499 [2024-10-14 14:42:47.927530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.499 [2024-10-14 14:42:47.927536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.499 qpair failed and we were unable to recover it.
00:29:07.499 [2024-10-14 14:42:47.927861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.499 [2024-10-14 14:42:47.927869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.499 qpair failed and we were unable to recover it.
00:29:07.499 [2024-10-14 14:42:47.928212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.499 [2024-10-14 14:42:47.928219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.499 qpair failed and we were unable to recover it.
00:29:07.499 [2024-10-14 14:42:47.928400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.500 [2024-10-14 14:42:47.928415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.500 qpair failed and we were unable to recover it.
00:29:07.500 [2024-10-14 14:42:47.928700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.500 [2024-10-14 14:42:47.928707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.500 qpair failed and we were unable to recover it.
00:29:07.500 [2024-10-14 14:42:47.929013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.500 [2024-10-14 14:42:47.929019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.500 qpair failed and we were unable to recover it.
00:29:07.500 [2024-10-14 14:42:47.929326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.500 [2024-10-14 14:42:47.929332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.500 qpair failed and we were unable to recover it.
00:29:07.500 [2024-10-14 14:42:47.929660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.500 [2024-10-14 14:42:47.929667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.500 qpair failed and we were unable to recover it.
00:29:07.500 [2024-10-14 14:42:47.929966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.500 [2024-10-14 14:42:47.929974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.500 qpair failed and we were unable to recover it.
00:29:07.500 [2024-10-14 14:42:47.930469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.500 [2024-10-14 14:42:47.930475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.500 qpair failed and we were unable to recover it.
00:29:07.500 [2024-10-14 14:42:47.930766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.500 [2024-10-14 14:42:47.930772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.500 qpair failed and we were unable to recover it.
00:29:07.500 [2024-10-14 14:42:47.931075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.500 [2024-10-14 14:42:47.931082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.500 qpair failed and we were unable to recover it.
00:29:07.500 [2024-10-14 14:42:47.931278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.500 [2024-10-14 14:42:47.931285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.500 qpair failed and we were unable to recover it.
00:29:07.500 [2024-10-14 14:42:47.931653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.500 [2024-10-14 14:42:47.931660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.500 qpair failed and we were unable to recover it.
00:29:07.500 [2024-10-14 14:42:47.932002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.500 [2024-10-14 14:42:47.932009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.500 qpair failed and we were unable to recover it.
00:29:07.500 [2024-10-14 14:42:47.932300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.500 [2024-10-14 14:42:47.932307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.500 qpair failed and we were unable to recover it.
00:29:07.500 [2024-10-14 14:42:47.932619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.500 [2024-10-14 14:42:47.932625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.500 qpair failed and we were unable to recover it.
00:29:07.500 [2024-10-14 14:42:47.932945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.500 [2024-10-14 14:42:47.932952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.500 qpair failed and we were unable to recover it.
00:29:07.500 [2024-10-14 14:42:47.933158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.500 [2024-10-14 14:42:47.933165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.500 qpair failed and we were unable to recover it.
00:29:07.500 [2024-10-14 14:42:47.933479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.500 [2024-10-14 14:42:47.933486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.500 qpair failed and we were unable to recover it.
00:29:07.500 [2024-10-14 14:42:47.933642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.500 [2024-10-14 14:42:47.933649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.500 qpair failed and we were unable to recover it.
00:29:07.500 [2024-10-14 14:42:47.933967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.500 [2024-10-14 14:42:47.933974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.500 qpair failed and we were unable to recover it.
00:29:07.500 [2024-10-14 14:42:47.934282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.500 [2024-10-14 14:42:47.934289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.500 qpair failed and we were unable to recover it.
00:29:07.500 [2024-10-14 14:42:47.934457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.500 [2024-10-14 14:42:47.934465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.500 qpair failed and we were unable to recover it.
00:29:07.500 [2024-10-14 14:42:47.934508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.500 [2024-10-14 14:42:47.934516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.500 qpair failed and we were unable to recover it.
00:29:07.500 [2024-10-14 14:42:47.934799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.500 [2024-10-14 14:42:47.934806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.500 qpair failed and we were unable to recover it.
00:29:07.500 [2024-10-14 14:42:47.935121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.500 [2024-10-14 14:42:47.935128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.500 qpair failed and we were unable to recover it.
00:29:07.500 [2024-10-14 14:42:47.935306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.500 [2024-10-14 14:42:47.935313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.500 qpair failed and we were unable to recover it.
00:29:07.500 [2024-10-14 14:42:47.935549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.500 [2024-10-14 14:42:47.935555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.500 qpair failed and we were unable to recover it.
00:29:07.500 [2024-10-14 14:42:47.935712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.500 [2024-10-14 14:42:47.935718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.500 qpair failed and we were unable to recover it.
00:29:07.500 [2024-10-14 14:42:47.935940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.500 [2024-10-14 14:42:47.935947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.500 qpair failed and we were unable to recover it.
00:29:07.500 [2024-10-14 14:42:47.936133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.500 [2024-10-14 14:42:47.936146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.500 qpair failed and we were unable to recover it.
00:29:07.500 [2024-10-14 14:42:47.936504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.500 [2024-10-14 14:42:47.936511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.500 qpair failed and we were unable to recover it.
00:29:07.500 [2024-10-14 14:42:47.936817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.500 [2024-10-14 14:42:47.936823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.500 qpair failed and we were unable to recover it.
00:29:07.500 [2024-10-14 14:42:47.937147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.500 [2024-10-14 14:42:47.937153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.500 qpair failed and we were unable to recover it.
00:29:07.500 [2024-10-14 14:42:47.937469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.500 [2024-10-14 14:42:47.937476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.500 qpair failed and we were unable to recover it.
00:29:07.500 [2024-10-14 14:42:47.937763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.500 [2024-10-14 14:42:47.937771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.500 qpair failed and we were unable to recover it.
00:29:07.500 [2024-10-14 14:42:47.937967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.500 [2024-10-14 14:42:47.937974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.500 qpair failed and we were unable to recover it.
00:29:07.500 [2024-10-14 14:42:47.938342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.500 [2024-10-14 14:42:47.938349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.500 qpair failed and we were unable to recover it.
00:29:07.500 [2024-10-14 14:42:47.938654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.500 [2024-10-14 14:42:47.938660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.500 qpair failed and we were unable to recover it.
00:29:07.500 [2024-10-14 14:42:47.938973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.500 [2024-10-14 14:42:47.938980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.500 qpair failed and we were unable to recover it.
00:29:07.500 [2024-10-14 14:42:47.939305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.500 [2024-10-14 14:42:47.939312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.500 qpair failed and we were unable to recover it.
00:29:07.500 [2024-10-14 14:42:47.939621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.500 [2024-10-14 14:42:47.939629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.500 qpair failed and we were unable to recover it.
00:29:07.500 [2024-10-14 14:42:47.939921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.501 [2024-10-14 14:42:47.939929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.501 qpair failed and we were unable to recover it.
00:29:07.501 [2024-10-14 14:42:47.940214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.501 [2024-10-14 14:42:47.940221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.501 qpair failed and we were unable to recover it.
00:29:07.501 [2024-10-14 14:42:47.940525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.501 [2024-10-14 14:42:47.940532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.501 qpair failed and we were unable to recover it.
00:29:07.501 [2024-10-14 14:42:47.940859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.501 [2024-10-14 14:42:47.940866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.501 qpair failed and we were unable to recover it.
00:29:07.501 [2024-10-14 14:42:47.941067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.501 [2024-10-14 14:42:47.941074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.501 qpair failed and we were unable to recover it.
00:29:07.501 [2024-10-14 14:42:47.941233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.501 [2024-10-14 14:42:47.941241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.501 qpair failed and we were unable to recover it.
00:29:07.501 [2024-10-14 14:42:47.941415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.501 [2024-10-14 14:42:47.941422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.501 qpair failed and we were unable to recover it.
00:29:07.501 [2024-10-14 14:42:47.941755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.501 [2024-10-14 14:42:47.941763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.501 qpair failed and we were unable to recover it.
00:29:07.501 [2024-10-14 14:42:47.941936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.501 [2024-10-14 14:42:47.941943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.501 qpair failed and we were unable to recover it.
00:29:07.501 [2024-10-14 14:42:47.942142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.501 [2024-10-14 14:42:47.942149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.501 qpair failed and we were unable to recover it.
00:29:07.501 [2024-10-14 14:42:47.942472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.501 [2024-10-14 14:42:47.942479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.501 qpair failed and we were unable to recover it.
00:29:07.501 [2024-10-14 14:42:47.942777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.501 [2024-10-14 14:42:47.942784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.501 qpair failed and we were unable to recover it. 00:29:07.501 [2024-10-14 14:42:47.942987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.501 [2024-10-14 14:42:47.942994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.501 qpair failed and we were unable to recover it. 00:29:07.501 [2024-10-14 14:42:47.943314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.501 [2024-10-14 14:42:47.943321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.501 qpair failed and we were unable to recover it. 00:29:07.501 [2024-10-14 14:42:47.943498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.501 [2024-10-14 14:42:47.943506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.501 qpair failed and we were unable to recover it. 00:29:07.501 [2024-10-14 14:42:47.943639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.501 [2024-10-14 14:42:47.943646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.501 qpair failed and we were unable to recover it. 
00:29:07.501 [2024-10-14 14:42:47.943679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.501 [2024-10-14 14:42:47.943685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.501 qpair failed and we were unable to recover it. 00:29:07.501 [2024-10-14 14:42:47.944008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.501 [2024-10-14 14:42:47.944015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.501 qpair failed and we were unable to recover it. 00:29:07.501 [2024-10-14 14:42:47.944327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.501 [2024-10-14 14:42:47.944334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.501 qpair failed and we were unable to recover it. 00:29:07.501 [2024-10-14 14:42:47.944641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.501 [2024-10-14 14:42:47.944648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.501 qpair failed and we were unable to recover it. 00:29:07.501 [2024-10-14 14:42:47.944972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.501 [2024-10-14 14:42:47.944978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.501 qpair failed and we were unable to recover it. 
00:29:07.501 [2024-10-14 14:42:47.945285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.501 [2024-10-14 14:42:47.945292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.501 qpair failed and we were unable to recover it. 00:29:07.501 [2024-10-14 14:42:47.945467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.501 [2024-10-14 14:42:47.945474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.501 qpair failed and we were unable to recover it. 00:29:07.501 [2024-10-14 14:42:47.945741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.501 [2024-10-14 14:42:47.945748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.501 qpair failed and we were unable to recover it. 00:29:07.501 [2024-10-14 14:42:47.945919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.501 [2024-10-14 14:42:47.945927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.501 qpair failed and we were unable to recover it. 00:29:07.501 [2024-10-14 14:42:47.946080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.501 [2024-10-14 14:42:47.946087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.501 qpair failed and we were unable to recover it. 
00:29:07.501 [2024-10-14 14:42:47.946380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.501 [2024-10-14 14:42:47.946386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.501 qpair failed and we were unable to recover it. 00:29:07.501 [2024-10-14 14:42:47.946570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.501 [2024-10-14 14:42:47.946578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.501 qpair failed and we were unable to recover it. 00:29:07.501 [2024-10-14 14:42:47.946754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.501 [2024-10-14 14:42:47.946761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.501 qpair failed and we were unable to recover it. 00:29:07.501 [2024-10-14 14:42:47.946979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.501 [2024-10-14 14:42:47.946986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.501 qpair failed and we were unable to recover it. 00:29:07.501 [2024-10-14 14:42:47.947169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.501 [2024-10-14 14:42:47.947176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.501 qpair failed and we were unable to recover it. 
00:29:07.501 [2024-10-14 14:42:47.947488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.501 [2024-10-14 14:42:47.947494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.501 qpair failed and we were unable to recover it. 00:29:07.501 [2024-10-14 14:42:47.947786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.501 [2024-10-14 14:42:47.947793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.501 qpair failed and we were unable to recover it. 00:29:07.501 [2024-10-14 14:42:47.947958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.501 [2024-10-14 14:42:47.947965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.501 qpair failed and we were unable to recover it. 00:29:07.501 [2024-10-14 14:42:47.948280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.501 [2024-10-14 14:42:47.948287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.501 qpair failed and we were unable to recover it. 00:29:07.501 [2024-10-14 14:42:47.948476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.501 [2024-10-14 14:42:47.948483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.501 qpair failed and we were unable to recover it. 
00:29:07.501 [2024-10-14 14:42:47.948676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.501 [2024-10-14 14:42:47.948683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.501 qpair failed and we were unable to recover it. 00:29:07.501 [2024-10-14 14:42:47.948917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.501 [2024-10-14 14:42:47.948924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.501 qpair failed and we were unable to recover it. 00:29:07.501 [2024-10-14 14:42:47.949104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.501 [2024-10-14 14:42:47.949111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.501 qpair failed and we were unable to recover it. 00:29:07.501 [2024-10-14 14:42:47.949424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.501 [2024-10-14 14:42:47.949431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.501 qpair failed and we were unable to recover it. 00:29:07.501 [2024-10-14 14:42:47.949749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.501 [2024-10-14 14:42:47.949758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.502 qpair failed and we were unable to recover it. 
00:29:07.502 [2024-10-14 14:42:47.950075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.502 [2024-10-14 14:42:47.950083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.502 qpair failed and we were unable to recover it. 00:29:07.502 [2024-10-14 14:42:47.950370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.502 [2024-10-14 14:42:47.950376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.502 qpair failed and we were unable to recover it. 00:29:07.502 [2024-10-14 14:42:47.950541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.502 [2024-10-14 14:42:47.950548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.502 qpair failed and we were unable to recover it. 00:29:07.502 [2024-10-14 14:42:47.950876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.502 [2024-10-14 14:42:47.950882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.502 qpair failed and we were unable to recover it. 00:29:07.502 [2024-10-14 14:42:47.951183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.502 [2024-10-14 14:42:47.951190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.502 qpair failed and we were unable to recover it. 
00:29:07.502 [2024-10-14 14:42:47.951601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.502 [2024-10-14 14:42:47.951608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.502 qpair failed and we were unable to recover it. 00:29:07.502 [2024-10-14 14:42:47.951912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.502 [2024-10-14 14:42:47.951919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.502 qpair failed and we were unable to recover it. 00:29:07.502 [2024-10-14 14:42:47.952234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.502 [2024-10-14 14:42:47.952241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.502 qpair failed and we were unable to recover it. 00:29:07.502 [2024-10-14 14:42:47.952583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.502 [2024-10-14 14:42:47.952590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.502 qpair failed and we were unable to recover it. 00:29:07.502 [2024-10-14 14:42:47.952900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.502 [2024-10-14 14:42:47.952906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.502 qpair failed and we were unable to recover it. 
00:29:07.502 [2024-10-14 14:42:47.953220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.502 [2024-10-14 14:42:47.953227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.502 qpair failed and we were unable to recover it. 00:29:07.502 [2024-10-14 14:42:47.953534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.502 [2024-10-14 14:42:47.953541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.502 qpair failed and we were unable to recover it. 00:29:07.502 [2024-10-14 14:42:47.953858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.502 [2024-10-14 14:42:47.953866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.502 qpair failed and we were unable to recover it. 00:29:07.502 [2024-10-14 14:42:47.954040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.502 [2024-10-14 14:42:47.954047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.502 qpair failed and we were unable to recover it. 00:29:07.502 [2024-10-14 14:42:47.954490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.502 [2024-10-14 14:42:47.954497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.502 qpair failed and we were unable to recover it. 
00:29:07.502 [2024-10-14 14:42:47.954786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.502 [2024-10-14 14:42:47.954793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.502 qpair failed and we were unable to recover it. 00:29:07.502 [2024-10-14 14:42:47.954995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.502 [2024-10-14 14:42:47.955002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.502 qpair failed and we were unable to recover it. 00:29:07.502 [2024-10-14 14:42:47.955370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.502 [2024-10-14 14:42:47.955377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.502 qpair failed and we were unable to recover it. 00:29:07.502 [2024-10-14 14:42:47.955543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.502 [2024-10-14 14:42:47.955549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.502 qpair failed and we were unable to recover it. 00:29:07.502 [2024-10-14 14:42:47.955840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.502 [2024-10-14 14:42:47.955846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.502 qpair failed and we were unable to recover it. 
00:29:07.502 [2024-10-14 14:42:47.956153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.502 [2024-10-14 14:42:47.956159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.502 qpair failed and we were unable to recover it. 00:29:07.502 [2024-10-14 14:42:47.956457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.502 [2024-10-14 14:42:47.956463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.502 qpair failed and we were unable to recover it. 00:29:07.502 [2024-10-14 14:42:47.956776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.502 [2024-10-14 14:42:47.956784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.502 qpair failed and we were unable to recover it. 00:29:07.502 [2024-10-14 14:42:47.957014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.502 [2024-10-14 14:42:47.957022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.502 qpair failed and we were unable to recover it. 00:29:07.502 [2024-10-14 14:42:47.957333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.502 [2024-10-14 14:42:47.957340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.502 qpair failed and we were unable to recover it. 
00:29:07.502 [2024-10-14 14:42:47.957495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.502 [2024-10-14 14:42:47.957502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.502 qpair failed and we were unable to recover it. 00:29:07.502 [2024-10-14 14:42:47.957846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.502 [2024-10-14 14:42:47.957852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.502 qpair failed and we were unable to recover it. 00:29:07.502 [2024-10-14 14:42:47.958140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.502 [2024-10-14 14:42:47.958147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.502 qpair failed and we were unable to recover it. 00:29:07.502 [2024-10-14 14:42:47.958358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.502 [2024-10-14 14:42:47.958365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.502 qpair failed and we were unable to recover it. 00:29:07.502 [2024-10-14 14:42:47.958534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.502 [2024-10-14 14:42:47.958540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.502 qpair failed and we were unable to recover it. 
00:29:07.502 [2024-10-14 14:42:47.958734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.502 [2024-10-14 14:42:47.958741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.502 qpair failed and we were unable to recover it. 00:29:07.502 [2024-10-14 14:42:47.958783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.502 [2024-10-14 14:42:47.958789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.502 qpair failed and we were unable to recover it. 00:29:07.502 [2024-10-14 14:42:47.959074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.502 [2024-10-14 14:42:47.959081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.502 qpair failed and we were unable to recover it. 00:29:07.502 [2024-10-14 14:42:47.959119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.502 [2024-10-14 14:42:47.959126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.502 qpair failed and we were unable to recover it. 00:29:07.502 [2024-10-14 14:42:47.959291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.503 [2024-10-14 14:42:47.959298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.503 qpair failed and we were unable to recover it. 
00:29:07.503 [2024-10-14 14:42:47.959466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.503 [2024-10-14 14:42:47.959473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.503 qpair failed and we were unable to recover it. 00:29:07.503 [2024-10-14 14:42:47.959632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.503 [2024-10-14 14:42:47.959639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.503 qpair failed and we were unable to recover it. 00:29:07.503 [2024-10-14 14:42:47.959935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.503 [2024-10-14 14:42:47.959942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.503 qpair failed and we were unable to recover it. 00:29:07.503 [2024-10-14 14:42:47.960277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.503 [2024-10-14 14:42:47.960285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.503 qpair failed and we were unable to recover it. 00:29:07.503 [2024-10-14 14:42:47.960592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.503 [2024-10-14 14:42:47.960600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.503 qpair failed and we were unable to recover it. 
00:29:07.503 [2024-10-14 14:42:47.960891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.503 [2024-10-14 14:42:47.960898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.503 qpair failed and we were unable to recover it. 00:29:07.503 [2024-10-14 14:42:47.961197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.503 [2024-10-14 14:42:47.961204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.503 qpair failed and we were unable to recover it. 00:29:07.503 [2024-10-14 14:42:47.961242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.503 [2024-10-14 14:42:47.961248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.503 qpair failed and we were unable to recover it. 00:29:07.503 [2024-10-14 14:42:47.961402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.503 [2024-10-14 14:42:47.961410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.503 qpair failed and we were unable to recover it. 00:29:07.503 [2024-10-14 14:42:47.961755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.503 [2024-10-14 14:42:47.961762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.503 qpair failed and we were unable to recover it. 
00:29:07.503 [2024-10-14 14:42:47.962080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.503 [2024-10-14 14:42:47.962087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.503 qpair failed and we were unable to recover it.
[the three records above repeat with advancing timestamps from 14:42:47.962291 through 14:42:47.992708; every connect() attempt to 10.0.0.2:4420 for tqpair=0x7fe63c000b90 fails with errno = 111 and the qpair is not recovered]
00:29:07.506 [2024-10-14 14:42:47.993033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.506 [2024-10-14 14:42:47.993040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.506 qpair failed and we were unable to recover it. 00:29:07.506 [2024-10-14 14:42:47.993316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.506 [2024-10-14 14:42:47.993323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.506 qpair failed and we were unable to recover it. 00:29:07.506 [2024-10-14 14:42:47.993509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.506 [2024-10-14 14:42:47.993516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.506 qpair failed and we were unable to recover it. 00:29:07.506 [2024-10-14 14:42:47.993789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.506 [2024-10-14 14:42:47.993796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.506 qpair failed and we were unable to recover it. 00:29:07.506 [2024-10-14 14:42:47.993998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.506 [2024-10-14 14:42:47.994006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.506 qpair failed and we were unable to recover it. 
00:29:07.506 [2024-10-14 14:42:47.994190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.506 [2024-10-14 14:42:47.994197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.506 qpair failed and we were unable to recover it. 00:29:07.506 [2024-10-14 14:42:47.994495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.506 [2024-10-14 14:42:47.994501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.506 qpair failed and we were unable to recover it. 00:29:07.506 [2024-10-14 14:42:47.994810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.506 [2024-10-14 14:42:47.994817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.506 qpair failed and we were unable to recover it. 00:29:07.506 [2024-10-14 14:42:47.995083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.506 [2024-10-14 14:42:47.995096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.506 qpair failed and we were unable to recover it. 00:29:07.506 [2024-10-14 14:42:47.995299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.506 [2024-10-14 14:42:47.995306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.506 qpair failed and we were unable to recover it. 
00:29:07.506 [2024-10-14 14:42:47.995629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.506 [2024-10-14 14:42:47.995636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.506 qpair failed and we were unable to recover it. 00:29:07.506 [2024-10-14 14:42:47.995883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.506 [2024-10-14 14:42:47.995890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.506 qpair failed and we were unable to recover it. 00:29:07.506 [2024-10-14 14:42:47.996261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.506 [2024-10-14 14:42:47.996269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.506 qpair failed and we were unable to recover it. 00:29:07.506 [2024-10-14 14:42:47.996629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.506 [2024-10-14 14:42:47.996637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.506 qpair failed and we were unable to recover it. 00:29:07.506 [2024-10-14 14:42:47.996929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.506 [2024-10-14 14:42:47.996936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.506 qpair failed and we were unable to recover it. 
00:29:07.506 [2024-10-14 14:42:47.997110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.506 [2024-10-14 14:42:47.997117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.506 qpair failed and we were unable to recover it. 00:29:07.506 [2024-10-14 14:42:47.997494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.506 [2024-10-14 14:42:47.997501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.506 qpair failed and we were unable to recover it. 00:29:07.506 [2024-10-14 14:42:47.997787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.506 [2024-10-14 14:42:47.997795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.506 qpair failed and we were unable to recover it. 00:29:07.506 [2024-10-14 14:42:47.997868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.506 [2024-10-14 14:42:47.997876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.506 qpair failed and we were unable to recover it. 00:29:07.506 [2024-10-14 14:42:47.998195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.506 [2024-10-14 14:42:47.998202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.506 qpair failed and we were unable to recover it. 
00:29:07.506 [2024-10-14 14:42:47.998474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.506 [2024-10-14 14:42:47.998480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.506 qpair failed and we were unable to recover it. 00:29:07.506 [2024-10-14 14:42:47.998689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.506 [2024-10-14 14:42:47.998696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.506 qpair failed and we were unable to recover it. 00:29:07.506 [2024-10-14 14:42:47.999046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.506 [2024-10-14 14:42:47.999053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.506 qpair failed and we were unable to recover it. 00:29:07.506 [2024-10-14 14:42:47.999121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.506 [2024-10-14 14:42:47.999128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.506 qpair failed and we were unable to recover it. 00:29:07.506 [2024-10-14 14:42:47.999307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.506 [2024-10-14 14:42:47.999314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.506 qpair failed and we were unable to recover it. 
00:29:07.506 [2024-10-14 14:42:47.999483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.506 [2024-10-14 14:42:47.999489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.506 qpair failed and we were unable to recover it. 00:29:07.506 [2024-10-14 14:42:47.999659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.506 [2024-10-14 14:42:47.999666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.506 qpair failed and we were unable to recover it. 00:29:07.506 [2024-10-14 14:42:47.999958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.506 [2024-10-14 14:42:47.999965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.506 qpair failed and we were unable to recover it. 00:29:07.506 [2024-10-14 14:42:48.000299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.506 [2024-10-14 14:42:48.000306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.506 qpair failed and we were unable to recover it. 00:29:07.506 [2024-10-14 14:42:48.000607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.506 [2024-10-14 14:42:48.000614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.506 qpair failed and we were unable to recover it. 
00:29:07.506 [2024-10-14 14:42:48.000890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.506 [2024-10-14 14:42:48.000897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.507 qpair failed and we were unable to recover it. 00:29:07.507 [2024-10-14 14:42:48.001211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.507 [2024-10-14 14:42:48.001218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.507 qpair failed and we were unable to recover it. 00:29:07.507 [2024-10-14 14:42:48.001437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.507 [2024-10-14 14:42:48.001445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.507 qpair failed and we were unable to recover it. 00:29:07.507 [2024-10-14 14:42:48.001656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.507 [2024-10-14 14:42:48.001662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.507 qpair failed and we were unable to recover it. 00:29:07.507 [2024-10-14 14:42:48.001970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.507 [2024-10-14 14:42:48.001977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.507 qpair failed and we were unable to recover it. 
00:29:07.507 [2024-10-14 14:42:48.002261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.507 [2024-10-14 14:42:48.002268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.507 qpair failed and we were unable to recover it. 00:29:07.507 [2024-10-14 14:42:48.002591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.507 [2024-10-14 14:42:48.002598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.507 qpair failed and we were unable to recover it. 00:29:07.507 [2024-10-14 14:42:48.002778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.507 [2024-10-14 14:42:48.002785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.507 qpair failed and we were unable to recover it. 00:29:07.507 [2024-10-14 14:42:48.003120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.507 [2024-10-14 14:42:48.003127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.507 qpair failed and we were unable to recover it. 00:29:07.507 [2024-10-14 14:42:48.003425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.507 [2024-10-14 14:42:48.003431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.507 qpair failed and we were unable to recover it. 
00:29:07.507 [2024-10-14 14:42:48.003721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.507 [2024-10-14 14:42:48.003728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.507 qpair failed and we were unable to recover it. 00:29:07.507 [2024-10-14 14:42:48.003947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.507 [2024-10-14 14:42:48.003954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.507 qpair failed and we were unable to recover it. 00:29:07.507 [2024-10-14 14:42:48.004368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.507 [2024-10-14 14:42:48.004375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.507 qpair failed and we were unable to recover it. 00:29:07.507 [2024-10-14 14:42:48.004575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.507 [2024-10-14 14:42:48.004583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.507 qpair failed and we were unable to recover it. 00:29:07.507 [2024-10-14 14:42:48.004742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.507 [2024-10-14 14:42:48.004750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.507 qpair failed and we were unable to recover it. 
00:29:07.507 [2024-10-14 14:42:48.004984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.507 [2024-10-14 14:42:48.004991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.507 qpair failed and we were unable to recover it. 00:29:07.507 [2024-10-14 14:42:48.005300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.507 [2024-10-14 14:42:48.005307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.507 qpair failed and we were unable to recover it. 00:29:07.507 [2024-10-14 14:42:48.005624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.507 [2024-10-14 14:42:48.005630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.507 qpair failed and we were unable to recover it. 00:29:07.507 [2024-10-14 14:42:48.005932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.507 [2024-10-14 14:42:48.005938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.507 qpair failed and we were unable to recover it. 00:29:07.507 [2024-10-14 14:42:48.006221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.507 [2024-10-14 14:42:48.006228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.507 qpair failed and we were unable to recover it. 
00:29:07.507 [2024-10-14 14:42:48.006585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.507 [2024-10-14 14:42:48.006592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.507 qpair failed and we were unable to recover it. 00:29:07.507 [2024-10-14 14:42:48.006895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.507 [2024-10-14 14:42:48.006901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.507 qpair failed and we were unable to recover it. 00:29:07.507 [2024-10-14 14:42:48.007227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.507 [2024-10-14 14:42:48.007235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.507 qpair failed and we were unable to recover it. 00:29:07.507 [2024-10-14 14:42:48.007561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.507 [2024-10-14 14:42:48.007569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.507 qpair failed and we were unable to recover it. 00:29:07.507 [2024-10-14 14:42:48.007748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.507 [2024-10-14 14:42:48.007755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.507 qpair failed and we were unable to recover it. 
00:29:07.507 [2024-10-14 14:42:48.008056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.507 [2024-10-14 14:42:48.008070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.507 qpair failed and we were unable to recover it. 00:29:07.507 [2024-10-14 14:42:48.008371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.507 [2024-10-14 14:42:48.008378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.507 qpair failed and we were unable to recover it. 00:29:07.507 [2024-10-14 14:42:48.008557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.507 [2024-10-14 14:42:48.008564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.507 qpair failed and we were unable to recover it. 00:29:07.507 [2024-10-14 14:42:48.008885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.507 [2024-10-14 14:42:48.008892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.507 qpair failed and we were unable to recover it. 00:29:07.507 [2024-10-14 14:42:48.009179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.507 [2024-10-14 14:42:48.009186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.507 qpair failed and we were unable to recover it. 
00:29:07.507 [2024-10-14 14:42:48.009295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.507 [2024-10-14 14:42:48.009301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.507 qpair failed and we were unable to recover it. 00:29:07.507 [2024-10-14 14:42:48.009485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.507 [2024-10-14 14:42:48.009492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.507 qpair failed and we were unable to recover it. 00:29:07.507 [2024-10-14 14:42:48.009657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.507 [2024-10-14 14:42:48.009663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.507 qpair failed and we were unable to recover it. 00:29:07.507 [2024-10-14 14:42:48.009839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.507 [2024-10-14 14:42:48.009845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.507 qpair failed and we were unable to recover it. 00:29:07.507 [2024-10-14 14:42:48.010089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.507 [2024-10-14 14:42:48.010096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.507 qpair failed and we were unable to recover it. 
00:29:07.507 [2024-10-14 14:42:48.010251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.507 [2024-10-14 14:42:48.010257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.507 qpair failed and we were unable to recover it. 00:29:07.507 [2024-10-14 14:42:48.010385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.507 [2024-10-14 14:42:48.010393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.507 qpair failed and we were unable to recover it. 00:29:07.507 [2024-10-14 14:42:48.010688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.507 [2024-10-14 14:42:48.010695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.507 qpair failed and we were unable to recover it. 00:29:07.507 [2024-10-14 14:42:48.010986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.507 [2024-10-14 14:42:48.010992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.507 qpair failed and we were unable to recover it. 00:29:07.507 [2024-10-14 14:42:48.011290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.507 [2024-10-14 14:42:48.011297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.507 qpair failed and we were unable to recover it. 
00:29:07.507 [2024-10-14 14:42:48.011628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.507 [2024-10-14 14:42:48.011635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.507 qpair failed and we were unable to recover it. 00:29:07.508 [2024-10-14 14:42:48.011946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.508 [2024-10-14 14:42:48.011954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.508 qpair failed and we were unable to recover it. 00:29:07.508 [2024-10-14 14:42:48.012274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.508 [2024-10-14 14:42:48.012281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.508 qpair failed and we were unable to recover it. 00:29:07.508 [2024-10-14 14:42:48.012450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.508 [2024-10-14 14:42:48.012457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.508 qpair failed and we were unable to recover it. 00:29:07.508 [2024-10-14 14:42:48.012741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.508 [2024-10-14 14:42:48.012748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.508 qpair failed and we were unable to recover it. 
00:29:07.508 [2024-10-14 14:42:48.012924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.508 [2024-10-14 14:42:48.012930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.508 qpair failed and we were unable to recover it.
00:29:07.510 [2024-10-14 14:42:48.013106 .. 14:42:48.042375] (previous 3-line error record repeated ~114 more times for tqpair=0x7fe63c000b90, addr=10.0.0.2, port=4420, errno = 111)
00:29:07.510 [2024-10-14 14:42:48.042544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.510 [2024-10-14 14:42:48.042550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.510 qpair failed and we were unable to recover it. 00:29:07.510 [2024-10-14 14:42:48.042830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.511 [2024-10-14 14:42:48.042836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.511 qpair failed and we were unable to recover it. 00:29:07.511 [2024-10-14 14:42:48.043133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.511 [2024-10-14 14:42:48.043140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.511 qpair failed and we were unable to recover it. 00:29:07.511 [2024-10-14 14:42:48.043325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.511 [2024-10-14 14:42:48.043332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.511 qpair failed and we were unable to recover it. 00:29:07.511 [2024-10-14 14:42:48.043620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.511 [2024-10-14 14:42:48.043627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.511 qpair failed and we were unable to recover it. 
00:29:07.511 [2024-10-14 14:42:48.043827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.511 [2024-10-14 14:42:48.043834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.511 qpair failed and we were unable to recover it. 00:29:07.511 [2024-10-14 14:42:48.044162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.511 [2024-10-14 14:42:48.044169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.511 qpair failed and we were unable to recover it. 00:29:07.511 [2024-10-14 14:42:48.044453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.511 [2024-10-14 14:42:48.044459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.511 qpair failed and we were unable to recover it. 00:29:07.511 [2024-10-14 14:42:48.044645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.511 [2024-10-14 14:42:48.044652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.511 qpair failed and we were unable to recover it. 00:29:07.511 [2024-10-14 14:42:48.045058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.511 [2024-10-14 14:42:48.045067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.511 qpair failed and we were unable to recover it. 
00:29:07.511 [2024-10-14 14:42:48.045400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.511 [2024-10-14 14:42:48.045407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.511 qpair failed and we were unable to recover it. 00:29:07.511 [2024-10-14 14:42:48.045791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.511 [2024-10-14 14:42:48.045798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.511 qpair failed and we were unable to recover it. 00:29:07.511 [2024-10-14 14:42:48.046072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.511 [2024-10-14 14:42:48.046079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.511 qpair failed and we were unable to recover it. 00:29:07.511 [2024-10-14 14:42:48.046250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.511 [2024-10-14 14:42:48.046257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.511 qpair failed and we were unable to recover it. 00:29:07.511 [2024-10-14 14:42:48.046558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.511 [2024-10-14 14:42:48.046564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.511 qpair failed and we were unable to recover it. 
00:29:07.511 [2024-10-14 14:42:48.046857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.511 [2024-10-14 14:42:48.046865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.511 qpair failed and we were unable to recover it. 00:29:07.511 [2024-10-14 14:42:48.047056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.511 [2024-10-14 14:42:48.047065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.511 qpair failed and we were unable to recover it. 00:29:07.511 [2024-10-14 14:42:48.047247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.511 [2024-10-14 14:42:48.047255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.511 qpair failed and we were unable to recover it. 00:29:07.511 [2024-10-14 14:42:48.047502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.511 [2024-10-14 14:42:48.047509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.511 qpair failed and we were unable to recover it. 00:29:07.511 [2024-10-14 14:42:48.047837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.511 [2024-10-14 14:42:48.047845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.511 qpair failed and we were unable to recover it. 
00:29:07.511 [2024-10-14 14:42:48.048121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.511 [2024-10-14 14:42:48.048128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.511 qpair failed and we were unable to recover it. 00:29:07.511 [2024-10-14 14:42:48.048443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.511 [2024-10-14 14:42:48.048450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.511 qpair failed and we were unable to recover it. 00:29:07.511 [2024-10-14 14:42:48.048604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.511 [2024-10-14 14:42:48.048611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.511 qpair failed and we were unable to recover it. 00:29:07.511 [2024-10-14 14:42:48.048971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.511 [2024-10-14 14:42:48.048978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.511 qpair failed and we were unable to recover it. 00:29:07.511 [2024-10-14 14:42:48.049282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.511 [2024-10-14 14:42:48.049289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.511 qpair failed and we were unable to recover it. 
00:29:07.511 [2024-10-14 14:42:48.049575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.511 [2024-10-14 14:42:48.049582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.511 qpair failed and we were unable to recover it. 00:29:07.511 [2024-10-14 14:42:48.049900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.511 [2024-10-14 14:42:48.049906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.511 qpair failed and we were unable to recover it. 00:29:07.511 [2024-10-14 14:42:48.050231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.511 [2024-10-14 14:42:48.050238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.511 qpair failed and we were unable to recover it. 00:29:07.511 [2024-10-14 14:42:48.050573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.511 [2024-10-14 14:42:48.050580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.511 qpair failed and we were unable to recover it. 00:29:07.511 [2024-10-14 14:42:48.050986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.511 [2024-10-14 14:42:48.050993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.511 qpair failed and we were unable to recover it. 
00:29:07.511 [2024-10-14 14:42:48.051305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.511 [2024-10-14 14:42:48.051312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.511 qpair failed and we were unable to recover it. 00:29:07.511 [2024-10-14 14:42:48.051613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.511 [2024-10-14 14:42:48.051619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.511 qpair failed and we were unable to recover it. 00:29:07.511 [2024-10-14 14:42:48.051894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.511 [2024-10-14 14:42:48.051901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.511 qpair failed and we were unable to recover it. 00:29:07.511 [2024-10-14 14:42:48.052207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.511 [2024-10-14 14:42:48.052214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.511 qpair failed and we were unable to recover it. 00:29:07.511 [2024-10-14 14:42:48.052407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.511 [2024-10-14 14:42:48.052415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.511 qpair failed and we were unable to recover it. 
00:29:07.511 [2024-10-14 14:42:48.052579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.511 [2024-10-14 14:42:48.052586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.511 qpair failed and we were unable to recover it. 00:29:07.511 [2024-10-14 14:42:48.052662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.511 [2024-10-14 14:42:48.052669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.511 qpair failed and we were unable to recover it. 00:29:07.511 [2024-10-14 14:42:48.052840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.511 [2024-10-14 14:42:48.052847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.511 qpair failed and we were unable to recover it. 00:29:07.511 [2024-10-14 14:42:48.053026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.511 [2024-10-14 14:42:48.053032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.512 qpair failed and we were unable to recover it. 00:29:07.512 [2024-10-14 14:42:48.053433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.512 [2024-10-14 14:42:48.053440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.512 qpair failed and we were unable to recover it. 
00:29:07.512 [2024-10-14 14:42:48.053615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.512 [2024-10-14 14:42:48.053623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.512 qpair failed and we were unable to recover it. 00:29:07.512 [2024-10-14 14:42:48.053943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.512 [2024-10-14 14:42:48.053950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.512 qpair failed and we were unable to recover it. 00:29:07.512 [2024-10-14 14:42:48.054241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.512 [2024-10-14 14:42:48.054248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.512 qpair failed and we were unable to recover it. 00:29:07.512 [2024-10-14 14:42:48.054646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.512 [2024-10-14 14:42:48.054653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.512 qpair failed and we were unable to recover it. 00:29:07.512 [2024-10-14 14:42:48.054943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.512 [2024-10-14 14:42:48.054949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.512 qpair failed and we were unable to recover it. 
00:29:07.512 [2024-10-14 14:42:48.055356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.512 [2024-10-14 14:42:48.055363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.512 qpair failed and we were unable to recover it. 00:29:07.512 [2024-10-14 14:42:48.055648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.512 [2024-10-14 14:42:48.055656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.512 qpair failed and we were unable to recover it. 00:29:07.512 [2024-10-14 14:42:48.055953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.512 [2024-10-14 14:42:48.055961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.512 qpair failed and we were unable to recover it. 00:29:07.512 [2024-10-14 14:42:48.056337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.512 [2024-10-14 14:42:48.056344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.512 qpair failed and we were unable to recover it. 00:29:07.512 [2024-10-14 14:42:48.056630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.512 [2024-10-14 14:42:48.056637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.512 qpair failed and we were unable to recover it. 
00:29:07.512 [2024-10-14 14:42:48.056792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.512 [2024-10-14 14:42:48.056799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.512 qpair failed and we were unable to recover it. 00:29:07.512 [2024-10-14 14:42:48.057177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.512 [2024-10-14 14:42:48.057184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.512 qpair failed and we were unable to recover it. 00:29:07.512 [2024-10-14 14:42:48.057543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.512 [2024-10-14 14:42:48.057550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.512 qpair failed and we were unable to recover it. 00:29:07.512 [2024-10-14 14:42:48.057847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.512 [2024-10-14 14:42:48.057853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.512 qpair failed and we were unable to recover it. 00:29:07.512 [2024-10-14 14:42:48.058066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.512 [2024-10-14 14:42:48.058074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.512 qpair failed and we were unable to recover it. 
00:29:07.512 [2024-10-14 14:42:48.058395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.512 [2024-10-14 14:42:48.058402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.512 qpair failed and we were unable to recover it. 00:29:07.512 [2024-10-14 14:42:48.058732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.512 [2024-10-14 14:42:48.058739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.512 qpair failed and we were unable to recover it. 00:29:07.512 [2024-10-14 14:42:48.059024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.512 [2024-10-14 14:42:48.059030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.512 qpair failed and we were unable to recover it. 00:29:07.512 [2024-10-14 14:42:48.059330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.512 [2024-10-14 14:42:48.059338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.512 qpair failed and we were unable to recover it. 00:29:07.512 [2024-10-14 14:42:48.059680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.512 [2024-10-14 14:42:48.059687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.512 qpair failed and we were unable to recover it. 
00:29:07.512 [2024-10-14 14:42:48.059854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.512 [2024-10-14 14:42:48.059862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.512 qpair failed and we were unable to recover it. 00:29:07.512 [2024-10-14 14:42:48.060010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.512 [2024-10-14 14:42:48.060017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.512 qpair failed and we were unable to recover it. 00:29:07.512 [2024-10-14 14:42:48.060351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.512 [2024-10-14 14:42:48.060358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.512 qpair failed and we were unable to recover it. 00:29:07.512 [2024-10-14 14:42:48.060535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.512 [2024-10-14 14:42:48.060543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.512 qpair failed and we were unable to recover it. 00:29:07.512 [2024-10-14 14:42:48.060737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.512 [2024-10-14 14:42:48.060744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.512 qpair failed and we were unable to recover it. 
00:29:07.512 [2024-10-14 14:42:48.061067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.512 [2024-10-14 14:42:48.061074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.512 qpair failed and we were unable to recover it. 00:29:07.512 [2024-10-14 14:42:48.061374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.512 [2024-10-14 14:42:48.061380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.512 qpair failed and we were unable to recover it. 00:29:07.512 [2024-10-14 14:42:48.061678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.512 [2024-10-14 14:42:48.061684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.512 qpair failed and we were unable to recover it. 00:29:07.512 [2024-10-14 14:42:48.062018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.512 [2024-10-14 14:42:48.062025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.512 qpair failed and we were unable to recover it. 00:29:07.512 [2024-10-14 14:42:48.062243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.512 [2024-10-14 14:42:48.062250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.512 qpair failed and we were unable to recover it. 
00:29:07.512 [2024-10-14 14:42:48.062402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.512 [2024-10-14 14:42:48.062409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.512 qpair failed and we were unable to recover it. 00:29:07.512 [2024-10-14 14:42:48.062497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.512 [2024-10-14 14:42:48.062504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.512 qpair failed and we were unable to recover it. 00:29:07.512 [2024-10-14 14:42:48.062625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.512 [2024-10-14 14:42:48.062631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.512 qpair failed and we were unable to recover it. 00:29:07.512 [2024-10-14 14:42:48.062817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.512 [2024-10-14 14:42:48.062824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.512 qpair failed and we were unable to recover it. 00:29:07.512 [2024-10-14 14:42:48.062978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.512 [2024-10-14 14:42:48.062985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.512 qpair failed and we were unable to recover it. 
00:29:07.512 [2024-10-14 14:42:48.063274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.512 [2024-10-14 14:42:48.063281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.512 qpair failed and we were unable to recover it. 00:29:07.512 [2024-10-14 14:42:48.063602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.512 [2024-10-14 14:42:48.063610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.512 qpair failed and we were unable to recover it. 00:29:07.512 [2024-10-14 14:42:48.063925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.512 [2024-10-14 14:42:48.063932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.512 qpair failed and we were unable to recover it. 00:29:07.512 [2024-10-14 14:42:48.064276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.512 [2024-10-14 14:42:48.064283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.512 qpair failed and we were unable to recover it. 00:29:07.513 [2024-10-14 14:42:48.064582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.513 [2024-10-14 14:42:48.064588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.513 qpair failed and we were unable to recover it. 
00:29:07.515 [2024-10-14 14:42:48.093511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.515 [2024-10-14 14:42:48.093518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.515 qpair failed and we were unable to recover it. 00:29:07.515 [2024-10-14 14:42:48.093695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.515 [2024-10-14 14:42:48.093704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.515 qpair failed and we were unable to recover it. 00:29:07.515 [2024-10-14 14:42:48.094018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.515 [2024-10-14 14:42:48.094026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.515 qpair failed and we were unable to recover it. 00:29:07.515 [2024-10-14 14:42:48.094196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.515 [2024-10-14 14:42:48.094202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.515 qpair failed and we were unable to recover it. 00:29:07.515 [2024-10-14 14:42:48.094588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.515 [2024-10-14 14:42:48.094595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.515 qpair failed and we were unable to recover it. 
00:29:07.515 [2024-10-14 14:42:48.094975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.515 [2024-10-14 14:42:48.094981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.515 qpair failed and we were unable to recover it. 00:29:07.516 [2024-10-14 14:42:48.095264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.516 [2024-10-14 14:42:48.095272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.516 qpair failed and we were unable to recover it. 00:29:07.516 [2024-10-14 14:42:48.095573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.516 [2024-10-14 14:42:48.095580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.516 qpair failed and we were unable to recover it. 00:29:07.516 [2024-10-14 14:42:48.095897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.516 [2024-10-14 14:42:48.095903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.516 qpair failed and we were unable to recover it. 00:29:07.516 [2024-10-14 14:42:48.096213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.516 [2024-10-14 14:42:48.096220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.516 qpair failed and we were unable to recover it. 
00:29:07.516 [2024-10-14 14:42:48.096554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.516 [2024-10-14 14:42:48.096561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.516 qpair failed and we were unable to recover it. 00:29:07.516 [2024-10-14 14:42:48.096898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.516 [2024-10-14 14:42:48.096905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.516 qpair failed and we were unable to recover it. 00:29:07.516 [2024-10-14 14:42:48.097223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.516 [2024-10-14 14:42:48.097230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.516 qpair failed and we were unable to recover it. 00:29:07.516 [2024-10-14 14:42:48.097524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.516 [2024-10-14 14:42:48.097531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.516 qpair failed and we were unable to recover it. 00:29:07.516 [2024-10-14 14:42:48.097902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.516 [2024-10-14 14:42:48.097908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.516 qpair failed and we were unable to recover it. 
00:29:07.516 [2024-10-14 14:42:48.098314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.516 [2024-10-14 14:42:48.098322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.516 qpair failed and we were unable to recover it. 00:29:07.516 [2024-10-14 14:42:48.098623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.516 [2024-10-14 14:42:48.098630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.516 qpair failed and we were unable to recover it. 00:29:07.516 [2024-10-14 14:42:48.098830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.516 [2024-10-14 14:42:48.098837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.516 qpair failed and we were unable to recover it. 00:29:07.516 [2024-10-14 14:42:48.099104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.516 [2024-10-14 14:42:48.099111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.516 qpair failed and we were unable to recover it. 00:29:07.516 [2024-10-14 14:42:48.099423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.516 [2024-10-14 14:42:48.099430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.516 qpair failed and we were unable to recover it. 
00:29:07.516 [2024-10-14 14:42:48.099742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.516 [2024-10-14 14:42:48.099749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.516 qpair failed and we were unable to recover it. 00:29:07.516 [2024-10-14 14:42:48.100037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.516 [2024-10-14 14:42:48.100044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.516 qpair failed and we were unable to recover it. 00:29:07.516 [2024-10-14 14:42:48.100345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.516 [2024-10-14 14:42:48.100353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.516 qpair failed and we were unable to recover it. 00:29:07.516 [2024-10-14 14:42:48.100648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.516 [2024-10-14 14:42:48.100655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.516 qpair failed and we were unable to recover it. 00:29:07.516 [2024-10-14 14:42:48.100966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.516 [2024-10-14 14:42:48.100972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.516 qpair failed and we were unable to recover it. 
00:29:07.516 [2024-10-14 14:42:48.101275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.516 [2024-10-14 14:42:48.101283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.516 qpair failed and we were unable to recover it. 00:29:07.516 [2024-10-14 14:42:48.101647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.516 [2024-10-14 14:42:48.101655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.516 qpair failed and we were unable to recover it. 00:29:07.516 [2024-10-14 14:42:48.101836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.516 [2024-10-14 14:42:48.101843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.516 qpair failed and we were unable to recover it. 00:29:07.516 [2024-10-14 14:42:48.102147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.516 [2024-10-14 14:42:48.102153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.516 qpair failed and we were unable to recover it. 00:29:07.516 [2024-10-14 14:42:48.102339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.516 [2024-10-14 14:42:48.102346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.516 qpair failed and we were unable to recover it. 
00:29:07.516 [2024-10-14 14:42:48.102719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.516 [2024-10-14 14:42:48.102725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.516 qpair failed and we were unable to recover it. 00:29:07.516 [2024-10-14 14:42:48.103019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.516 [2024-10-14 14:42:48.103025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.516 qpair failed and we were unable to recover it. 00:29:07.516 [2024-10-14 14:42:48.103348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.516 [2024-10-14 14:42:48.103355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.516 qpair failed and we were unable to recover it. 00:29:07.516 [2024-10-14 14:42:48.103633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.516 [2024-10-14 14:42:48.103640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.516 qpair failed and we were unable to recover it. 00:29:07.516 [2024-10-14 14:42:48.104032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.516 [2024-10-14 14:42:48.104039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.516 qpair failed and we were unable to recover it. 
00:29:07.516 [2024-10-14 14:42:48.104259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.516 [2024-10-14 14:42:48.104267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.516 qpair failed and we were unable to recover it. 00:29:07.516 [2024-10-14 14:42:48.104430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.516 [2024-10-14 14:42:48.104436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.516 qpair failed and we were unable to recover it. 00:29:07.516 [2024-10-14 14:42:48.104735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.516 [2024-10-14 14:42:48.104742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.516 qpair failed and we were unable to recover it. 00:29:07.516 [2024-10-14 14:42:48.105037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.516 [2024-10-14 14:42:48.105043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.516 qpair failed and we were unable to recover it. 00:29:07.516 [2024-10-14 14:42:48.105084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.516 [2024-10-14 14:42:48.105091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.516 qpair failed and we were unable to recover it. 
00:29:07.516 [2024-10-14 14:42:48.105236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.516 [2024-10-14 14:42:48.105243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.516 qpair failed and we were unable to recover it. 00:29:07.516 [2024-10-14 14:42:48.105281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.516 [2024-10-14 14:42:48.105288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.516 qpair failed and we were unable to recover it. 00:29:07.516 [2024-10-14 14:42:48.105505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.516 [2024-10-14 14:42:48.105512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.516 qpair failed and we were unable to recover it. 00:29:07.516 [2024-10-14 14:42:48.105545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.516 [2024-10-14 14:42:48.105552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.516 qpair failed and we were unable to recover it. 00:29:07.516 [2024-10-14 14:42:48.105889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.516 [2024-10-14 14:42:48.105895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.516 qpair failed and we were unable to recover it. 
00:29:07.516 [2024-10-14 14:42:48.106057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.516 [2024-10-14 14:42:48.106068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.516 qpair failed and we were unable to recover it. 00:29:07.516 [2024-10-14 14:42:48.106105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.517 [2024-10-14 14:42:48.106114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.517 qpair failed and we were unable to recover it. 00:29:07.517 [2024-10-14 14:42:48.106433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.517 [2024-10-14 14:42:48.106440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.517 qpair failed and we were unable to recover it. 00:29:07.517 [2024-10-14 14:42:48.106769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.517 [2024-10-14 14:42:48.106775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.517 qpair failed and we were unable to recover it. 00:29:07.517 [2024-10-14 14:42:48.106942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.517 [2024-10-14 14:42:48.106950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.517 qpair failed and we were unable to recover it. 
00:29:07.517 [2024-10-14 14:42:48.107115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.517 [2024-10-14 14:42:48.107122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.517 qpair failed and we were unable to recover it. 00:29:07.517 [2024-10-14 14:42:48.107314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.517 [2024-10-14 14:42:48.107321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.517 qpair failed and we were unable to recover it. 00:29:07.517 [2024-10-14 14:42:48.107611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.517 [2024-10-14 14:42:48.107619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.517 qpair failed and we were unable to recover it. 00:29:07.517 [2024-10-14 14:42:48.107805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.517 [2024-10-14 14:42:48.107812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.517 qpair failed and we were unable to recover it. 00:29:07.517 [2024-10-14 14:42:48.107967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.517 [2024-10-14 14:42:48.107975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.517 qpair failed and we were unable to recover it. 
00:29:07.517 [2024-10-14 14:42:48.108169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.517 [2024-10-14 14:42:48.108176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.517 qpair failed and we were unable to recover it. 00:29:07.517 [2024-10-14 14:42:48.108480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.517 [2024-10-14 14:42:48.108486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.517 qpair failed and we were unable to recover it. 00:29:07.517 [2024-10-14 14:42:48.108770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.517 [2024-10-14 14:42:48.108777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.517 qpair failed and we were unable to recover it. 00:29:07.517 [2024-10-14 14:42:48.109108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.517 [2024-10-14 14:42:48.109115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.517 qpair failed and we were unable to recover it. 00:29:07.517 [2024-10-14 14:42:48.109336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.517 [2024-10-14 14:42:48.109343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.517 qpair failed and we were unable to recover it. 
00:29:07.517 [2024-10-14 14:42:48.109513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.517 [2024-10-14 14:42:48.109520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.517 qpair failed and we were unable to recover it. 00:29:07.517 [2024-10-14 14:42:48.109803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.517 [2024-10-14 14:42:48.109810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.517 qpair failed and we were unable to recover it. 00:29:07.517 [2024-10-14 14:42:48.110037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.517 [2024-10-14 14:42:48.110044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.517 qpair failed and we were unable to recover it. 00:29:07.517 [2024-10-14 14:42:48.110353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.517 [2024-10-14 14:42:48.110361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.517 qpair failed and we were unable to recover it. 00:29:07.517 [2024-10-14 14:42:48.110759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.517 [2024-10-14 14:42:48.110766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.517 qpair failed and we were unable to recover it. 
00:29:07.517 [2024-10-14 14:42:48.111084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.517 [2024-10-14 14:42:48.111091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.517 qpair failed and we were unable to recover it. 00:29:07.517 [2024-10-14 14:42:48.111408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.517 [2024-10-14 14:42:48.111414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.517 qpair failed and we were unable to recover it. 00:29:07.517 [2024-10-14 14:42:48.111785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.517 [2024-10-14 14:42:48.111793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.517 qpair failed and we were unable to recover it. 00:29:07.517 [2024-10-14 14:42:48.112148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.517 [2024-10-14 14:42:48.112155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.517 qpair failed and we were unable to recover it. 00:29:07.517 [2024-10-14 14:42:48.112471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.517 [2024-10-14 14:42:48.112478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.517 qpair failed and we were unable to recover it. 
00:29:07.517 [2024-10-14 14:42:48.112800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.517 [2024-10-14 14:42:48.112806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.517 qpair failed and we were unable to recover it. 00:29:07.517 [2024-10-14 14:42:48.113138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.517 [2024-10-14 14:42:48.113144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.517 qpair failed and we were unable to recover it. 00:29:07.517 [2024-10-14 14:42:48.113479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.517 [2024-10-14 14:42:48.113486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.517 qpair failed and we were unable to recover it. 00:29:07.517 [2024-10-14 14:42:48.113799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.517 [2024-10-14 14:42:48.113807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.517 qpair failed and we were unable to recover it. 00:29:07.517 [2024-10-14 14:42:48.114005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.517 [2024-10-14 14:42:48.114012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.517 qpair failed and we were unable to recover it. 
00:29:07.517 [2024-10-14 14:42:48.114177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.517 [2024-10-14 14:42:48.114183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.517 qpair failed and we were unable to recover it. 00:29:07.517 [2024-10-14 14:42:48.114502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.517 [2024-10-14 14:42:48.114509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.517 qpair failed and we were unable to recover it. 00:29:07.517 [2024-10-14 14:42:48.114544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.517 [2024-10-14 14:42:48.114552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.517 qpair failed and we were unable to recover it. 00:29:07.517 [2024-10-14 14:42:48.114800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.517 [2024-10-14 14:42:48.114807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.517 qpair failed and we were unable to recover it. 00:29:07.517 [2024-10-14 14:42:48.115104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.517 [2024-10-14 14:42:48.115111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.517 qpair failed and we were unable to recover it. 
00:29:07.517 [2024-10-14 14:42:48.115413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.517 [2024-10-14 14:42:48.115419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.517 qpair failed and we were unable to recover it. 00:29:07.517 [2024-10-14 14:42:48.115455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.517 [2024-10-14 14:42:48.115462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.517 qpair failed and we were unable to recover it. 00:29:07.517 [2024-10-14 14:42:48.115733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.517 [2024-10-14 14:42:48.115740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.517 qpair failed and we were unable to recover it. 00:29:07.517 [2024-10-14 14:42:48.115910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.517 [2024-10-14 14:42:48.115917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.517 qpair failed and we were unable to recover it. 00:29:07.517 [2024-10-14 14:42:48.116176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.517 [2024-10-14 14:42:48.116183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.517 qpair failed and we were unable to recover it. 
00:29:07.517 [2024-10-14 14:42:48.116391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.517 [2024-10-14 14:42:48.116398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.517 qpair failed and we were unable to recover it. 00:29:07.517 [2024-10-14 14:42:48.116611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.517 [2024-10-14 14:42:48.116619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.517 qpair failed and we were unable to recover it. 00:29:07.518 [2024-10-14 14:42:48.116981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.518 [2024-10-14 14:42:48.116988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.518 qpair failed and we were unable to recover it. 00:29:07.518 [2024-10-14 14:42:48.117149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.518 [2024-10-14 14:42:48.117156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.518 qpair failed and we were unable to recover it. 00:29:07.518 [2024-10-14 14:42:48.117306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.518 [2024-10-14 14:42:48.117312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.518 qpair failed and we were unable to recover it. 
00:29:07.518 [2024-10-14 14:42:48.117639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.518 [2024-10-14 14:42:48.117646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.518 qpair failed and we were unable to recover it. 00:29:07.518 [2024-10-14 14:42:48.117936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.518 [2024-10-14 14:42:48.117943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.518 qpair failed and we were unable to recover it. 00:29:07.518 [2024-10-14 14:42:48.118202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.518 [2024-10-14 14:42:48.118210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.518 qpair failed and we were unable to recover it. 00:29:07.518 [2024-10-14 14:42:48.118581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.518 [2024-10-14 14:42:48.118588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.518 qpair failed and we were unable to recover it. 00:29:07.518 [2024-10-14 14:42:48.118744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.518 [2024-10-14 14:42:48.118751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.518 qpair failed and we were unable to recover it. 
00:29:07.518 [2024-10-14 14:42:48.119024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.518 [2024-10-14 14:42:48.119031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.518 qpair failed and we were unable to recover it. 00:29:07.518 [2024-10-14 14:42:48.119307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.518 [2024-10-14 14:42:48.119313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.518 qpair failed and we were unable to recover it. 00:29:07.518 [2024-10-14 14:42:48.119604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.518 [2024-10-14 14:42:48.119611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.518 qpair failed and we were unable to recover it. 00:29:07.518 [2024-10-14 14:42:48.119897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.518 [2024-10-14 14:42:48.119903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.518 qpair failed and we were unable to recover it. 00:29:07.518 [2024-10-14 14:42:48.120313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.518 [2024-10-14 14:42:48.120320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.518 qpair failed and we were unable to recover it. 
00:29:07.518 [2024-10-14 14:42:48.120610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.518 [2024-10-14 14:42:48.120617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.518 qpair failed and we were unable to recover it. 00:29:07.518 [2024-10-14 14:42:48.120896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.518 [2024-10-14 14:42:48.120903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.518 qpair failed and we were unable to recover it. 00:29:07.518 [2024-10-14 14:42:48.121211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.518 [2024-10-14 14:42:48.121219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.518 qpair failed and we were unable to recover it. 00:29:07.518 [2024-10-14 14:42:48.121527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.518 [2024-10-14 14:42:48.121534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.518 qpair failed and we were unable to recover it. 00:29:07.518 [2024-10-14 14:42:48.121693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.518 [2024-10-14 14:42:48.121701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.518 qpair failed and we were unable to recover it. 
00:29:07.518 [2024-10-14 14:42:48.122009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.518 [2024-10-14 14:42:48.122016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.518 qpair failed and we were unable to recover it. 00:29:07.518 [2024-10-14 14:42:48.122213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.518 [2024-10-14 14:42:48.122220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.518 qpair failed and we were unable to recover it. 00:29:07.518 [2024-10-14 14:42:48.122474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.518 [2024-10-14 14:42:48.122481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.518 qpair failed and we were unable to recover it. 00:29:07.518 [2024-10-14 14:42:48.122806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.518 [2024-10-14 14:42:48.122813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.518 qpair failed and we were unable to recover it. 00:29:07.518 [2024-10-14 14:42:48.122983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.518 [2024-10-14 14:42:48.122989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.518 qpair failed and we were unable to recover it. 
00:29:07.518 [2024-10-14 14:42:48.123163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.518 [2024-10-14 14:42:48.123170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.518 qpair failed and we were unable to recover it. 00:29:07.518 [2024-10-14 14:42:48.123456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.518 [2024-10-14 14:42:48.123463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.518 qpair failed and we were unable to recover it. 00:29:07.518 [2024-10-14 14:42:48.123631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.518 [2024-10-14 14:42:48.123637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.518 qpair failed and we were unable to recover it. 00:29:07.518 [2024-10-14 14:42:48.123923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.518 [2024-10-14 14:42:48.123930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.518 qpair failed and we were unable to recover it. 00:29:07.518 [2024-10-14 14:42:48.124107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.518 [2024-10-14 14:42:48.124115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.518 qpair failed and we were unable to recover it. 
00:29:07.518 [2024-10-14 14:42:48.124454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.518 [2024-10-14 14:42:48.124461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.518 qpair failed and we were unable to recover it. 00:29:07.518 [2024-10-14 14:42:48.124771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.518 [2024-10-14 14:42:48.124778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.518 qpair failed and we were unable to recover it. 00:29:07.518 [2024-10-14 14:42:48.125106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.518 [2024-10-14 14:42:48.125113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.518 qpair failed and we were unable to recover it. 00:29:07.518 [2024-10-14 14:42:48.125274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.518 [2024-10-14 14:42:48.125282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.518 qpair failed and we were unable to recover it. 00:29:07.518 [2024-10-14 14:42:48.125457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.518 [2024-10-14 14:42:48.125464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.518 qpair failed and we were unable to recover it. 
00:29:07.518 [2024-10-14 14:42:48.125758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.518 [2024-10-14 14:42:48.125765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.518 qpair failed and we were unable to recover it. 00:29:07.518 [2024-10-14 14:42:48.126059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.518 [2024-10-14 14:42:48.126068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.518 qpair failed and we were unable to recover it. 00:29:07.518 [2024-10-14 14:42:48.126252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.518 [2024-10-14 14:42:48.126259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.518 qpair failed and we were unable to recover it. 00:29:07.518 [2024-10-14 14:42:48.126458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.518 [2024-10-14 14:42:48.126465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.518 qpair failed and we were unable to recover it. 00:29:07.518 [2024-10-14 14:42:48.126619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.518 [2024-10-14 14:42:48.126627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.518 qpair failed and we were unable to recover it. 
00:29:07.518 [2024-10-14 14:42:48.126815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.518 [2024-10-14 14:42:48.126823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.518 qpair failed and we were unable to recover it. 00:29:07.518 [2024-10-14 14:42:48.127014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.518 [2024-10-14 14:42:48.127023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.518 qpair failed and we were unable to recover it. 00:29:07.518 [2024-10-14 14:42:48.127181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.519 [2024-10-14 14:42:48.127189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.519 qpair failed and we were unable to recover it. 00:29:07.519 [2024-10-14 14:42:48.127490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.519 [2024-10-14 14:42:48.127497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.519 qpair failed and we were unable to recover it. 00:29:07.519 [2024-10-14 14:42:48.127811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.519 [2024-10-14 14:42:48.127818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.519 qpair failed and we were unable to recover it. 
00:29:07.519 [2024-10-14 14:42:48.127979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.519 [2024-10-14 14:42:48.127986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.519 qpair failed and we were unable to recover it. 00:29:07.519 [2024-10-14 14:42:48.128106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.519 [2024-10-14 14:42:48.128113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.519 qpair failed and we were unable to recover it. 00:29:07.519 [2024-10-14 14:42:48.128405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.519 [2024-10-14 14:42:48.128411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.519 qpair failed and we were unable to recover it. 00:29:07.519 [2024-10-14 14:42:48.128736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.519 [2024-10-14 14:42:48.128743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.519 qpair failed and we were unable to recover it. 00:29:07.519 [2024-10-14 14:42:48.129043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.519 [2024-10-14 14:42:48.129049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.519 qpair failed and we were unable to recover it. 
00:29:07.519 [2024-10-14 14:42:48.129390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.519 [2024-10-14 14:42:48.129397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.519 qpair failed and we were unable to recover it. 00:29:07.519 [2024-10-14 14:42:48.129588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.519 [2024-10-14 14:42:48.129595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.519 qpair failed and we were unable to recover it. 00:29:07.519 [2024-10-14 14:42:48.129752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.519 [2024-10-14 14:42:48.129759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.519 qpair failed and we were unable to recover it. 00:29:07.519 [2024-10-14 14:42:48.129974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.519 [2024-10-14 14:42:48.129982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.519 qpair failed and we were unable to recover it. 00:29:07.519 [2024-10-14 14:42:48.130249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.519 [2024-10-14 14:42:48.130256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.519 qpair failed and we were unable to recover it. 
00:29:07.519 [2024-10-14 14:42:48.130554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.519 [2024-10-14 14:42:48.130561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.519 qpair failed and we were unable to recover it. 00:29:07.519 [2024-10-14 14:42:48.130791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.519 [2024-10-14 14:42:48.130799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.519 qpair failed and we were unable to recover it. 00:29:07.519 [2024-10-14 14:42:48.131193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.519 [2024-10-14 14:42:48.131201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.519 qpair failed and we were unable to recover it. 00:29:07.519 [2024-10-14 14:42:48.131399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.519 [2024-10-14 14:42:48.131406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.519 qpair failed and we were unable to recover it. 00:29:07.519 [2024-10-14 14:42:48.131690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.519 [2024-10-14 14:42:48.131698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.519 qpair failed and we were unable to recover it. 
00:29:07.519 [2024-10-14 14:42:48.131734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.519 [2024-10-14 14:42:48.131742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.519 qpair failed and we were unable to recover it. 00:29:07.519 [2024-10-14 14:42:48.131819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.519 [2024-10-14 14:42:48.131826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.519 qpair failed and we were unable to recover it. 00:29:07.519 [2024-10-14 14:42:48.131982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.519 [2024-10-14 14:42:48.131988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.519 qpair failed and we were unable to recover it. 00:29:07.519 [2024-10-14 14:42:48.132280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.519 [2024-10-14 14:42:48.132287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.519 qpair failed and we were unable to recover it. 00:29:07.519 [2024-10-14 14:42:48.132601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.519 [2024-10-14 14:42:48.132608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.519 qpair failed and we were unable to recover it. 
00:29:07.519 [2024-10-14 14:42:48.132895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.519 [2024-10-14 14:42:48.132901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.519 qpair failed and we were unable to recover it. 00:29:07.519 [2024-10-14 14:42:48.133197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.519 [2024-10-14 14:42:48.133204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.519 qpair failed and we were unable to recover it. 00:29:07.519 [2024-10-14 14:42:48.133522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.519 [2024-10-14 14:42:48.133529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.519 qpair failed and we were unable to recover it. 00:29:07.519 [2024-10-14 14:42:48.133716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.519 [2024-10-14 14:42:48.133722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.519 qpair failed and we were unable to recover it. 00:29:07.519 [2024-10-14 14:42:48.133938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.519 [2024-10-14 14:42:48.133945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.519 qpair failed and we were unable to recover it. 
00:29:07.519 [2024-10-14 14:42:48.134239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.519 [2024-10-14 14:42:48.134246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.519 qpair failed and we were unable to recover it. 00:29:07.519 [2024-10-14 14:42:48.134581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.519 [2024-10-14 14:42:48.134589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.519 qpair failed and we were unable to recover it. 00:29:07.519 [2024-10-14 14:42:48.134883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.519 [2024-10-14 14:42:48.134890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.519 qpair failed and we were unable to recover it. 00:29:07.519 [2024-10-14 14:42:48.135193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.519 [2024-10-14 14:42:48.135201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.519 qpair failed and we were unable to recover it. 00:29:07.519 [2024-10-14 14:42:48.135470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.519 [2024-10-14 14:42:48.135477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.519 qpair failed and we were unable to recover it. 
00:29:07.519 [2024-10-14 14:42:48.135789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.519 [2024-10-14 14:42:48.135795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.519 qpair failed and we were unable to recover it. 00:29:07.519 [2024-10-14 14:42:48.136078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.519 [2024-10-14 14:42:48.136085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.519 qpair failed and we were unable to recover it. 00:29:07.519 [2024-10-14 14:42:48.136389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.519 [2024-10-14 14:42:48.136396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.519 qpair failed and we were unable to recover it. 00:29:07.519 [2024-10-14 14:42:48.136563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.519 [2024-10-14 14:42:48.136570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.519 qpair failed and we were unable to recover it. 00:29:07.520 [2024-10-14 14:42:48.136916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.520 [2024-10-14 14:42:48.136923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.520 qpair failed and we were unable to recover it. 
00:29:07.520 [2024-10-14 14:42:48.137129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.520 [2024-10-14 14:42:48.137136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.520 qpair failed and we were unable to recover it. 00:29:07.520 [2024-10-14 14:42:48.137453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.520 [2024-10-14 14:42:48.137460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.520 qpair failed and we were unable to recover it. 00:29:07.520 [2024-10-14 14:42:48.137635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.520 [2024-10-14 14:42:48.137643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.520 qpair failed and we were unable to recover it. 00:29:07.520 [2024-10-14 14:42:48.137919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.520 [2024-10-14 14:42:48.137926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.520 qpair failed and we were unable to recover it. 00:29:07.520 [2024-10-14 14:42:48.138233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.520 [2024-10-14 14:42:48.138240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.520 qpair failed and we were unable to recover it. 
00:29:07.520 [2024-10-14 14:42:48.138567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.520 [2024-10-14 14:42:48.138574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.520 qpair failed and we were unable to recover it.
00:29:07.522 [... the same three-line error sequence repeats with advancing timestamps through 2024-10-14 14:42:48.168348: connect() to 10.0.0.2 port 4420 keeps returning errno 111 (ECONNREFUSED), nvme_tcp_qpair_connect_sock reports the sock connection error for tqpair=0x7fe63c000b90, and each qpair fails without recovery ...]
00:29:07.523 [2024-10-14 14:42:48.168655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.523 [2024-10-14 14:42:48.168662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.523 qpair failed and we were unable to recover it. 00:29:07.523 [2024-10-14 14:42:48.168968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.523 [2024-10-14 14:42:48.168974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.523 qpair failed and we were unable to recover it. 00:29:07.523 [2024-10-14 14:42:48.169289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.523 [2024-10-14 14:42:48.169296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.523 qpair failed and we were unable to recover it. 00:29:07.523 [2024-10-14 14:42:48.169625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.523 [2024-10-14 14:42:48.169631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.523 qpair failed and we were unable to recover it. 00:29:07.523 [2024-10-14 14:42:48.169960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.523 [2024-10-14 14:42:48.169968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.523 qpair failed and we were unable to recover it. 
00:29:07.523 [2024-10-14 14:42:48.170271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.523 [2024-10-14 14:42:48.170279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.523 qpair failed and we were unable to recover it. 00:29:07.523 [2024-10-14 14:42:48.170483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.523 [2024-10-14 14:42:48.170490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.523 qpair failed and we were unable to recover it. 00:29:07.523 [2024-10-14 14:42:48.170790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.523 [2024-10-14 14:42:48.170798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.523 qpair failed and we were unable to recover it. 00:29:07.523 [2024-10-14 14:42:48.171112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.523 [2024-10-14 14:42:48.171119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.523 qpair failed and we were unable to recover it. 00:29:07.523 [2024-10-14 14:42:48.171417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.523 [2024-10-14 14:42:48.171424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.523 qpair failed and we were unable to recover it. 
00:29:07.523 [2024-10-14 14:42:48.171751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.523 [2024-10-14 14:42:48.171758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.523 qpair failed and we were unable to recover it. 00:29:07.523 [2024-10-14 14:42:48.172101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.523 [2024-10-14 14:42:48.172108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.523 qpair failed and we were unable to recover it. 00:29:07.523 [2024-10-14 14:42:48.172424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.523 [2024-10-14 14:42:48.172431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.523 qpair failed and we were unable to recover it. 00:29:07.523 [2024-10-14 14:42:48.172733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.523 [2024-10-14 14:42:48.172739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.523 qpair failed and we were unable to recover it. 00:29:07.523 [2024-10-14 14:42:48.172965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.523 [2024-10-14 14:42:48.172972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.523 qpair failed and we were unable to recover it. 
00:29:07.523 [2024-10-14 14:42:48.173302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.523 [2024-10-14 14:42:48.173309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.523 qpair failed and we were unable to recover it. 00:29:07.523 [2024-10-14 14:42:48.173603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.523 [2024-10-14 14:42:48.173611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.523 qpair failed and we were unable to recover it. 00:29:07.523 [2024-10-14 14:42:48.173829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.523 [2024-10-14 14:42:48.173836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.523 qpair failed and we were unable to recover it. 00:29:07.523 [2024-10-14 14:42:48.174198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.523 [2024-10-14 14:42:48.174205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.523 qpair failed and we were unable to recover it. 00:29:07.523 [2024-10-14 14:42:48.174389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.523 [2024-10-14 14:42:48.174397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.523 qpair failed and we were unable to recover it. 
00:29:07.523 [2024-10-14 14:42:48.174685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.523 [2024-10-14 14:42:48.174692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.523 qpair failed and we were unable to recover it. 00:29:07.523 [2024-10-14 14:42:48.174900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.523 [2024-10-14 14:42:48.174907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.523 qpair failed and we were unable to recover it. 00:29:07.523 [2024-10-14 14:42:48.175230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.523 [2024-10-14 14:42:48.175238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.523 qpair failed and we were unable to recover it. 00:29:07.523 [2024-10-14 14:42:48.175550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.523 [2024-10-14 14:42:48.175557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.523 qpair failed and we were unable to recover it. 00:29:07.523 [2024-10-14 14:42:48.175592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.523 [2024-10-14 14:42:48.175599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.523 qpair failed and we were unable to recover it. 
00:29:07.523 [2024-10-14 14:42:48.175761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.523 [2024-10-14 14:42:48.175768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.523 qpair failed and we were unable to recover it. 00:29:07.523 [2024-10-14 14:42:48.175993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.523 [2024-10-14 14:42:48.176000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.523 qpair failed and we were unable to recover it. 00:29:07.523 [2024-10-14 14:42:48.176313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.523 [2024-10-14 14:42:48.176320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.523 qpair failed and we were unable to recover it. 00:29:07.523 [2024-10-14 14:42:48.176611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.523 [2024-10-14 14:42:48.176621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.523 qpair failed and we were unable to recover it. 00:29:07.523 [2024-10-14 14:42:48.176941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.523 [2024-10-14 14:42:48.176948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.523 qpair failed and we were unable to recover it. 
00:29:07.523 [2024-10-14 14:42:48.177270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.523 [2024-10-14 14:42:48.177279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.523 qpair failed and we were unable to recover it. 00:29:07.523 [2024-10-14 14:42:48.177578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.523 [2024-10-14 14:42:48.177586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.523 qpair failed and we were unable to recover it. 00:29:07.523 [2024-10-14 14:42:48.177755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.523 [2024-10-14 14:42:48.177763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.523 qpair failed and we were unable to recover it. 00:29:07.523 [2024-10-14 14:42:48.178078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.523 [2024-10-14 14:42:48.178086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.523 qpair failed and we were unable to recover it. 00:29:07.523 [2024-10-14 14:42:48.178389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.523 [2024-10-14 14:42:48.178397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.523 qpair failed and we were unable to recover it. 
00:29:07.523 [2024-10-14 14:42:48.178713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.523 [2024-10-14 14:42:48.178720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.523 qpair failed and we were unable to recover it. 00:29:07.524 [2024-10-14 14:42:48.179014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.524 [2024-10-14 14:42:48.179021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.524 qpair failed and we were unable to recover it. 00:29:07.524 [2024-10-14 14:42:48.179339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.524 [2024-10-14 14:42:48.179346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.524 qpair failed and we were unable to recover it. 00:29:07.524 [2024-10-14 14:42:48.179670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.524 [2024-10-14 14:42:48.179676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.524 qpair failed and we were unable to recover it. 00:29:07.524 [2024-10-14 14:42:48.180000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.524 [2024-10-14 14:42:48.180007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.524 qpair failed and we were unable to recover it. 
00:29:07.524 [2024-10-14 14:42:48.180364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.524 [2024-10-14 14:42:48.180371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.524 qpair failed and we were unable to recover it. 00:29:07.524 [2024-10-14 14:42:48.180684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.524 [2024-10-14 14:42:48.180691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.524 qpair failed and we were unable to recover it. 00:29:07.524 [2024-10-14 14:42:48.181002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.524 [2024-10-14 14:42:48.181009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.524 qpair failed and we were unable to recover it. 00:29:07.524 [2024-10-14 14:42:48.181209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.524 [2024-10-14 14:42:48.181217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.524 qpair failed and we were unable to recover it. 00:29:07.524 [2024-10-14 14:42:48.181505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.524 [2024-10-14 14:42:48.181513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.524 qpair failed and we were unable to recover it. 
00:29:07.524 [2024-10-14 14:42:48.181807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.524 [2024-10-14 14:42:48.181815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.524 qpair failed and we were unable to recover it. 00:29:07.524 [2024-10-14 14:42:48.181968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.524 [2024-10-14 14:42:48.181976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.524 qpair failed and we were unable to recover it. 00:29:07.524 [2024-10-14 14:42:48.182267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.524 [2024-10-14 14:42:48.182275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.524 qpair failed and we were unable to recover it. 00:29:07.524 [2024-10-14 14:42:48.182578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.524 [2024-10-14 14:42:48.182584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.524 qpair failed and we were unable to recover it. 00:29:07.524 [2024-10-14 14:42:48.182876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.524 [2024-10-14 14:42:48.182882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.524 qpair failed and we were unable to recover it. 
00:29:07.524 [2024-10-14 14:42:48.183044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.524 [2024-10-14 14:42:48.183050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.524 qpair failed and we were unable to recover it. 00:29:07.524 [2024-10-14 14:42:48.183366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.524 [2024-10-14 14:42:48.183373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.524 qpair failed and we were unable to recover it. 00:29:07.524 [2024-10-14 14:42:48.183575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.524 [2024-10-14 14:42:48.183583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.524 qpair failed and we were unable to recover it. 00:29:07.524 [2024-10-14 14:42:48.183858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.524 [2024-10-14 14:42:48.183865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.524 qpair failed and we were unable to recover it. 00:29:07.524 [2024-10-14 14:42:48.184043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.524 [2024-10-14 14:42:48.184050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.524 qpair failed and we were unable to recover it. 
00:29:07.524 [2024-10-14 14:42:48.184180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.524 [2024-10-14 14:42:48.184186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.524 qpair failed and we were unable to recover it. 00:29:07.524 [2024-10-14 14:42:48.184362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.524 [2024-10-14 14:42:48.184369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.524 qpair failed and we were unable to recover it. 00:29:07.524 [2024-10-14 14:42:48.184569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.524 [2024-10-14 14:42:48.184576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.524 qpair failed and we were unable to recover it. 00:29:07.524 [2024-10-14 14:42:48.184896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.524 [2024-10-14 14:42:48.184903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.524 qpair failed and we were unable to recover it. 00:29:07.524 [2024-10-14 14:42:48.185082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.524 [2024-10-14 14:42:48.185089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.524 qpair failed and we were unable to recover it. 
00:29:07.524 [2024-10-14 14:42:48.185286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.524 [2024-10-14 14:42:48.185292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.524 qpair failed and we were unable to recover it. 00:29:07.524 [2024-10-14 14:42:48.185595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.524 [2024-10-14 14:42:48.185601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.524 qpair failed and we were unable to recover it. 00:29:07.524 [2024-10-14 14:42:48.185807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.524 [2024-10-14 14:42:48.185814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.524 qpair failed and we were unable to recover it. 00:29:07.524 [2024-10-14 14:42:48.185940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.524 [2024-10-14 14:42:48.185947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.524 qpair failed and we were unable to recover it. 00:29:07.524 [2024-10-14 14:42:48.186263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.524 [2024-10-14 14:42:48.186270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.524 qpair failed and we were unable to recover it. 
00:29:07.524 [2024-10-14 14:42:48.186453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.524 [2024-10-14 14:42:48.186461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.524 qpair failed and we were unable to recover it. 00:29:07.524 [2024-10-14 14:42:48.186624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.524 [2024-10-14 14:42:48.186630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.524 qpair failed and we were unable to recover it. 00:29:07.524 [2024-10-14 14:42:48.186923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.524 [2024-10-14 14:42:48.186929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.524 qpair failed and we were unable to recover it. 00:29:07.524 [2024-10-14 14:42:48.187141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.524 [2024-10-14 14:42:48.187149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.524 qpair failed and we were unable to recover it. 00:29:07.524 [2024-10-14 14:42:48.187346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.524 [2024-10-14 14:42:48.187352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.524 qpair failed and we were unable to recover it. 
00:29:07.524 [2024-10-14 14:42:48.187393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.524 [2024-10-14 14:42:48.187400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.524 qpair failed and we were unable to recover it. 00:29:07.524 [2024-10-14 14:42:48.187710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.524 [2024-10-14 14:42:48.187717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.524 qpair failed and we were unable to recover it. 00:29:07.524 [2024-10-14 14:42:48.188002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.524 [2024-10-14 14:42:48.188009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.524 qpair failed and we were unable to recover it. 00:29:07.524 [2024-10-14 14:42:48.188314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.524 [2024-10-14 14:42:48.188321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.524 qpair failed and we were unable to recover it. 00:29:07.524 [2024-10-14 14:42:48.188506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.524 [2024-10-14 14:42:48.188513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.524 qpair failed and we were unable to recover it. 
00:29:07.524 [2024-10-14 14:42:48.188828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.524 [2024-10-14 14:42:48.188835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.524 qpair failed and we were unable to recover it. 00:29:07.524 [2024-10-14 14:42:48.189145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.525 [2024-10-14 14:42:48.189152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.525 qpair failed and we were unable to recover it. 00:29:07.525 [2024-10-14 14:42:48.189470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.525 [2024-10-14 14:42:48.189478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.525 qpair failed and we were unable to recover it. 00:29:07.525 [2024-10-14 14:42:48.189713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.525 [2024-10-14 14:42:48.189721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.525 qpair failed and we were unable to recover it. 00:29:07.525 [2024-10-14 14:42:48.190045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.525 [2024-10-14 14:42:48.190053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.525 qpair failed and we were unable to recover it. 
00:29:07.525 [2024-10-14 14:42:48.190357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.525 [2024-10-14 14:42:48.190365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.525 qpair failed and we were unable to recover it. 00:29:07.525 [2024-10-14 14:42:48.190518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.525 [2024-10-14 14:42:48.190525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.525 qpair failed and we were unable to recover it. 00:29:07.525 [2024-10-14 14:42:48.190798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.525 [2024-10-14 14:42:48.190805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.525 qpair failed and we were unable to recover it. 00:29:07.525 [2024-10-14 14:42:48.191120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.525 [2024-10-14 14:42:48.191127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.525 qpair failed and we were unable to recover it. 00:29:07.525 [2024-10-14 14:42:48.191442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.525 [2024-10-14 14:42:48.191448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.525 qpair failed and we were unable to recover it. 
00:29:07.525 [2024-10-14 14:42:48.191738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.525 [2024-10-14 14:42:48.191745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.525 qpair failed and we were unable to recover it. 00:29:07.525 [2024-10-14 14:42:48.191925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.525 [2024-10-14 14:42:48.191931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.525 qpair failed and we were unable to recover it. 00:29:07.525 [2024-10-14 14:42:48.192244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.525 [2024-10-14 14:42:48.192251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.525 qpair failed and we were unable to recover it. 00:29:07.525 [2024-10-14 14:42:48.192634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.525 [2024-10-14 14:42:48.192642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.525 qpair failed and we were unable to recover it. 00:29:07.525 [2024-10-14 14:42:48.193027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.525 [2024-10-14 14:42:48.193036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.525 qpair failed and we were unable to recover it. 
00:29:07.525 [2024-10-14 14:42:48.193366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.525 [2024-10-14 14:42:48.193375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.525 qpair failed and we were unable to recover it. 00:29:07.525 [2024-10-14 14:42:48.193720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.525 [2024-10-14 14:42:48.193728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.525 qpair failed and we were unable to recover it. 00:29:07.525 [2024-10-14 14:42:48.194015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.525 [2024-10-14 14:42:48.194022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.525 qpair failed and we were unable to recover it. 00:29:07.525 [2024-10-14 14:42:48.194177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.525 [2024-10-14 14:42:48.194184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.525 qpair failed and we were unable to recover it. 00:29:07.525 [2024-10-14 14:42:48.194338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.525 [2024-10-14 14:42:48.194346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.525 qpair failed and we were unable to recover it. 
00:29:07.525 [2024-10-14 14:42:48.194630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.525 [2024-10-14 14:42:48.194637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.525 qpair failed and we were unable to recover it. 00:29:07.525 [2024-10-14 14:42:48.194947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.525 [2024-10-14 14:42:48.194953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.525 qpair failed and we were unable to recover it. 00:29:07.525 [2024-10-14 14:42:48.195216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.525 [2024-10-14 14:42:48.195224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.525 qpair failed and we were unable to recover it. 00:29:07.525 [2024-10-14 14:42:48.195586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.525 [2024-10-14 14:42:48.195592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.525 qpair failed and we were unable to recover it. 00:29:07.525 [2024-10-14 14:42:48.195884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.525 [2024-10-14 14:42:48.195891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.525 qpair failed and we were unable to recover it. 
00:29:07.525 [2024-10-14 14:42:48.196330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.525 [2024-10-14 14:42:48.196338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.525 qpair failed and we were unable to recover it. 00:29:07.525 [2024-10-14 14:42:48.196628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.525 [2024-10-14 14:42:48.196636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.525 qpair failed and we were unable to recover it. 00:29:07.525 [2024-10-14 14:42:48.196934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.525 [2024-10-14 14:42:48.196942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.525 qpair failed and we were unable to recover it. 00:29:07.525 [2024-10-14 14:42:48.197226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.525 [2024-10-14 14:42:48.197234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.525 qpair failed and we were unable to recover it. 00:29:07.525 [2024-10-14 14:42:48.197524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.525 [2024-10-14 14:42:48.197531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.525 qpair failed and we were unable to recover it. 
00:29:07.525 [2024-10-14 14:42:48.197851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.525 [2024-10-14 14:42:48.197858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.525 qpair failed and we were unable to recover it. 00:29:07.525 [2024-10-14 14:42:48.198054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.525 [2024-10-14 14:42:48.198064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.525 qpair failed and we were unable to recover it. 00:29:07.525 [2024-10-14 14:42:48.198155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.525 [2024-10-14 14:42:48.198162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.525 qpair failed and we were unable to recover it. 00:29:07.525 [2024-10-14 14:42:48.198338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.525 [2024-10-14 14:42:48.198346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.525 qpair failed and we were unable to recover it. 00:29:07.525 [2024-10-14 14:42:48.198711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.525 [2024-10-14 14:42:48.198717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.525 qpair failed and we were unable to recover it. 
00:29:07.525 [2024-10-14 14:42:48.199032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.525 [2024-10-14 14:42:48.199039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.525 qpair failed and we were unable to recover it. 00:29:07.525 [2024-10-14 14:42:48.199386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.525 [2024-10-14 14:42:48.199394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.525 qpair failed and we were unable to recover it. 00:29:07.525 [2024-10-14 14:42:48.199684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.525 [2024-10-14 14:42:48.199691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.525 qpair failed and we were unable to recover it. 00:29:07.525 [2024-10-14 14:42:48.199865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.525 [2024-10-14 14:42:48.199872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.525 qpair failed and we were unable to recover it. 00:29:07.525 [2024-10-14 14:42:48.200219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.525 [2024-10-14 14:42:48.200226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.525 qpair failed and we were unable to recover it. 
00:29:07.525 [2024-10-14 14:42:48.200419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.525 [2024-10-14 14:42:48.200428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.525 qpair failed and we were unable to recover it. 00:29:07.525 [2024-10-14 14:42:48.200618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.526 [2024-10-14 14:42:48.200625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.526 qpair failed and we were unable to recover it. 00:29:07.526 [2024-10-14 14:42:48.200770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.526 [2024-10-14 14:42:48.200777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.526 qpair failed and we were unable to recover it. 00:29:07.526 [2024-10-14 14:42:48.200948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.526 [2024-10-14 14:42:48.200954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.526 qpair failed and we were unable to recover it. 00:29:07.526 [2024-10-14 14:42:48.201160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.526 [2024-10-14 14:42:48.201168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.526 qpair failed and we were unable to recover it. 
00:29:07.526 [2024-10-14 14:42:48.201380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.526 [2024-10-14 14:42:48.201388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.526 qpair failed and we were unable to recover it. 00:29:07.526 [2024-10-14 14:42:48.201590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.526 [2024-10-14 14:42:48.201598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.526 qpair failed and we were unable to recover it. 00:29:07.526 [2024-10-14 14:42:48.201829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.526 [2024-10-14 14:42:48.201837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.526 qpair failed and we were unable to recover it. 00:29:07.526 [2024-10-14 14:42:48.202149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.526 [2024-10-14 14:42:48.202156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.526 qpair failed and we were unable to recover it. 00:29:07.526 [2024-10-14 14:42:48.202471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.526 [2024-10-14 14:42:48.202478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.526 qpair failed and we were unable to recover it. 
00:29:07.526 [2024-10-14 14:42:48.202648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.526 [2024-10-14 14:42:48.202655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.526 qpair failed and we were unable to recover it. 00:29:07.526 [2024-10-14 14:42:48.202934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.526 [2024-10-14 14:42:48.202941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.526 qpair failed and we were unable to recover it. 00:29:07.526 [2024-10-14 14:42:48.203237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.526 [2024-10-14 14:42:48.203244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.526 qpair failed and we were unable to recover it. 00:29:07.526 [2024-10-14 14:42:48.203533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.526 [2024-10-14 14:42:48.203541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.526 qpair failed and we were unable to recover it. 00:29:07.526 [2024-10-14 14:42:48.203756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.526 [2024-10-14 14:42:48.203763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.526 qpair failed and we were unable to recover it. 
00:29:07.526 [2024-10-14 14:42:48.204080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.526 [2024-10-14 14:42:48.204088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.526 qpair failed and we were unable to recover it. 00:29:07.526 [2024-10-14 14:42:48.204456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.526 [2024-10-14 14:42:48.204463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.526 qpair failed and we were unable to recover it. 00:29:07.526 [2024-10-14 14:42:48.204754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.526 [2024-10-14 14:42:48.204761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.526 qpair failed and we were unable to recover it. 00:29:07.526 [2024-10-14 14:42:48.204926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.526 [2024-10-14 14:42:48.204933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.526 qpair failed and we were unable to recover it. 00:29:07.526 [2024-10-14 14:42:48.205187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.526 [2024-10-14 14:42:48.205195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.526 qpair failed and we were unable to recover it. 
00:29:07.526 [2024-10-14 14:42:48.205364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.526 [2024-10-14 14:42:48.205371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.526 qpair failed and we were unable to recover it. 00:29:07.526 [2024-10-14 14:42:48.205642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.526 [2024-10-14 14:42:48.205649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.526 qpair failed and we were unable to recover it. 00:29:07.526 [2024-10-14 14:42:48.206041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.526 [2024-10-14 14:42:48.206048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.526 qpair failed and we were unable to recover it. 00:29:07.526 [2024-10-14 14:42:48.206355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.526 [2024-10-14 14:42:48.206363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.526 qpair failed and we were unable to recover it. 00:29:07.526 [2024-10-14 14:42:48.206555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.526 [2024-10-14 14:42:48.206562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.526 qpair failed and we were unable to recover it. 
00:29:07.526 [2024-10-14 14:42:48.206918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.526 [2024-10-14 14:42:48.206926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.526 qpair failed and we were unable to recover it. 00:29:07.526 [2024-10-14 14:42:48.207232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.526 [2024-10-14 14:42:48.207239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.526 qpair failed and we were unable to recover it. 00:29:07.526 [2024-10-14 14:42:48.207540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.526 [2024-10-14 14:42:48.207548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.526 qpair failed and we were unable to recover it. 00:29:07.526 [2024-10-14 14:42:48.207860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.526 [2024-10-14 14:42:48.207868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.526 qpair failed and we were unable to recover it. 00:29:07.526 [2024-10-14 14:42:48.208179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.526 [2024-10-14 14:42:48.208186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.526 qpair failed and we were unable to recover it. 
00:29:07.526 [2024-10-14 14:42:48.208506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.526 [2024-10-14 14:42:48.208513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.526 qpair failed and we were unable to recover it. 00:29:07.526 [2024-10-14 14:42:48.208821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.526 [2024-10-14 14:42:48.208828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.526 qpair failed and we were unable to recover it. 00:29:07.526 [2024-10-14 14:42:48.209154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.526 [2024-10-14 14:42:48.209161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.526 qpair failed and we were unable to recover it. 00:29:07.526 [2024-10-14 14:42:48.209578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.526 [2024-10-14 14:42:48.209587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.526 qpair failed and we were unable to recover it. 00:29:07.526 [2024-10-14 14:42:48.209870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.526 [2024-10-14 14:42:48.209877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.526 qpair failed and we were unable to recover it. 
00:29:07.526 [2024-10-14 14:42:48.210214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.526 [2024-10-14 14:42:48.210222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.526 qpair failed and we were unable to recover it. 00:29:07.526 [2024-10-14 14:42:48.210407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.526 [2024-10-14 14:42:48.210414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.526 qpair failed and we were unable to recover it. 00:29:07.526 [2024-10-14 14:42:48.210766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.526 [2024-10-14 14:42:48.210774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.526 qpair failed and we were unable to recover it. 00:29:07.526 [2024-10-14 14:42:48.211102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.526 [2024-10-14 14:42:48.211109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.526 qpair failed and we were unable to recover it. 00:29:07.526 [2024-10-14 14:42:48.211437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.526 [2024-10-14 14:42:48.211444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.526 qpair failed and we were unable to recover it. 
00:29:07.526 [2024-10-14 14:42:48.211746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.526 [2024-10-14 14:42:48.211753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.526 qpair failed and we were unable to recover it. 00:29:07.526 [2024-10-14 14:42:48.212077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.527 [2024-10-14 14:42:48.212084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.527 qpair failed and we were unable to recover it. 00:29:07.527 [2024-10-14 14:42:48.212408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.527 [2024-10-14 14:42:48.212415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.527 qpair failed and we were unable to recover it. 00:29:07.527 [2024-10-14 14:42:48.212592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.527 [2024-10-14 14:42:48.212599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.527 qpair failed and we were unable to recover it. 00:29:07.527 [2024-10-14 14:42:48.212759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.527 [2024-10-14 14:42:48.212767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.527 qpair failed and we were unable to recover it. 
00:29:07.527 [2024-10-14 14:42:48.213086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.527 [2024-10-14 14:42:48.213094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.527 qpair failed and we were unable to recover it. 00:29:07.527 [2024-10-14 14:42:48.213301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.527 [2024-10-14 14:42:48.213308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.527 qpair failed and we were unable to recover it. 00:29:07.527 [2024-10-14 14:42:48.213578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.527 [2024-10-14 14:42:48.213586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.527 qpair failed and we were unable to recover it. 00:29:07.527 [2024-10-14 14:42:48.213769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.527 [2024-10-14 14:42:48.213776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.527 qpair failed and we were unable to recover it. 00:29:07.527 [2024-10-14 14:42:48.213945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.527 [2024-10-14 14:42:48.213952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.527 qpair failed and we were unable to recover it. 
00:29:07.527 [2024-10-14 14:42:48.214359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.527 [2024-10-14 14:42:48.214366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.527 qpair failed and we were unable to recover it. 00:29:07.527 [2024-10-14 14:42:48.214536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.527 [2024-10-14 14:42:48.214545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.527 qpair failed and we were unable to recover it. 00:29:07.527 [2024-10-14 14:42:48.214735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.527 [2024-10-14 14:42:48.214742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.527 qpair failed and we were unable to recover it. 00:29:07.527 [2024-10-14 14:42:48.215002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.527 [2024-10-14 14:42:48.215009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.527 qpair failed and we were unable to recover it. 00:29:07.527 [2024-10-14 14:42:48.215163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.527 [2024-10-14 14:42:48.215169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.527 qpair failed and we were unable to recover it. 
00:29:07.527 [2024-10-14 14:42:48.215334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.527 [2024-10-14 14:42:48.215341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.527 qpair failed and we were unable to recover it.
00:29:07.804 [log trimmed: the identical three-message sequence — connect() failed with errno = 111 (ECONNREFUSED), sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420, qpair failed and we were unable to recover it — repeated for every retry from 14:42:48.215513 through 14:42:48.237346]
00:29:07.806 [2024-10-14 14:42:48.237841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.806 [2024-10-14 14:42:48.237884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.806 qpair failed and we were unable to recover it.
00:29:07.806 [log trimmed: the identical three-message sequence — connect() failed with errno = 111 (ECONNREFUSED), sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420, qpair failed and we were unable to recover it — repeated for every retry from 14:42:48.238266 through 14:42:48.246090]
00:29:07.806 [2024-10-14 14:42:48.246407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.806 [2024-10-14 14:42:48.246417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.807 qpair failed and we were unable to recover it. 00:29:07.807 [2024-10-14 14:42:48.246712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.807 [2024-10-14 14:42:48.246722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.807 qpair failed and we were unable to recover it. 00:29:07.807 [2024-10-14 14:42:48.247013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.807 [2024-10-14 14:42:48.247023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.807 qpair failed and we were unable to recover it. 00:29:07.807 [2024-10-14 14:42:48.247320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.807 [2024-10-14 14:42:48.247331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.807 qpair failed and we were unable to recover it. 00:29:07.807 [2024-10-14 14:42:48.247523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.807 [2024-10-14 14:42:48.247539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.807 qpair failed and we were unable to recover it. 
00:29:07.807 [2024-10-14 14:42:48.247857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.807 [2024-10-14 14:42:48.247868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.807 qpair failed and we were unable to recover it. 00:29:07.807 [2024-10-14 14:42:48.248199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.807 [2024-10-14 14:42:48.248210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.807 qpair failed and we were unable to recover it. 00:29:07.807 [2024-10-14 14:42:48.248554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.807 [2024-10-14 14:42:48.248564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.807 qpair failed and we were unable to recover it. 00:29:07.807 [2024-10-14 14:42:48.248886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.807 [2024-10-14 14:42:48.248896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.807 qpair failed and we were unable to recover it. 00:29:07.807 [2024-10-14 14:42:48.249229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.807 [2024-10-14 14:42:48.249240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.807 qpair failed and we were unable to recover it. 
00:29:07.807 [2024-10-14 14:42:48.249545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.807 [2024-10-14 14:42:48.249555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.807 qpair failed and we were unable to recover it. 00:29:07.807 [2024-10-14 14:42:48.249884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.807 [2024-10-14 14:42:48.249894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.807 qpair failed and we were unable to recover it. 00:29:07.807 [2024-10-14 14:42:48.250194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.807 [2024-10-14 14:42:48.250204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.807 qpair failed and we were unable to recover it. 00:29:07.807 [2024-10-14 14:42:48.250370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.807 [2024-10-14 14:42:48.250381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.807 qpair failed and we were unable to recover it. 00:29:07.807 [2024-10-14 14:42:48.250700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.807 [2024-10-14 14:42:48.250710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.807 qpair failed and we were unable to recover it. 
00:29:07.807 [2024-10-14 14:42:48.250995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.807 [2024-10-14 14:42:48.251005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.807 qpair failed and we were unable to recover it. 00:29:07.807 [2024-10-14 14:42:48.251313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.807 [2024-10-14 14:42:48.251325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.807 qpair failed and we were unable to recover it. 00:29:07.807 [2024-10-14 14:42:48.251633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.807 [2024-10-14 14:42:48.251643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.807 qpair failed and we were unable to recover it. 00:29:07.807 [2024-10-14 14:42:48.251951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.807 [2024-10-14 14:42:48.251962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.807 qpair failed and we were unable to recover it. 00:29:07.807 [2024-10-14 14:42:48.252259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.807 [2024-10-14 14:42:48.252269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.807 qpair failed and we were unable to recover it. 
00:29:07.807 [2024-10-14 14:42:48.252588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.807 [2024-10-14 14:42:48.252597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.807 qpair failed and we were unable to recover it. 00:29:07.807 [2024-10-14 14:42:48.252923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.807 [2024-10-14 14:42:48.252933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.807 qpair failed and we were unable to recover it. 00:29:07.807 [2024-10-14 14:42:48.253249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.807 [2024-10-14 14:42:48.253260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.807 qpair failed and we were unable to recover it. 00:29:07.807 [2024-10-14 14:42:48.253567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.807 [2024-10-14 14:42:48.253578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.807 qpair failed and we were unable to recover it. 00:29:07.807 [2024-10-14 14:42:48.253886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.807 [2024-10-14 14:42:48.253896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.807 qpair failed and we were unable to recover it. 
00:29:07.807 [2024-10-14 14:42:48.254182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.807 [2024-10-14 14:42:48.254193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.807 qpair failed and we were unable to recover it. 00:29:07.807 [2024-10-14 14:42:48.254512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.807 [2024-10-14 14:42:48.254522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.807 qpair failed and we were unable to recover it. 00:29:07.807 [2024-10-14 14:42:48.254824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.807 [2024-10-14 14:42:48.254833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.807 qpair failed and we were unable to recover it. 00:29:07.807 [2024-10-14 14:42:48.255126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.807 [2024-10-14 14:42:48.255136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.807 qpair failed and we were unable to recover it. 00:29:07.807 [2024-10-14 14:42:48.255427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.807 [2024-10-14 14:42:48.255437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.807 qpair failed and we were unable to recover it. 
00:29:07.807 [2024-10-14 14:42:48.255621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.807 [2024-10-14 14:42:48.255632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.807 qpair failed and we were unable to recover it. 00:29:07.807 [2024-10-14 14:42:48.255802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.807 [2024-10-14 14:42:48.255811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.807 qpair failed and we were unable to recover it. 00:29:07.807 [2024-10-14 14:42:48.256006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.807 [2024-10-14 14:42:48.256016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.807 qpair failed and we were unable to recover it. 00:29:07.807 [2024-10-14 14:42:48.256193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.807 [2024-10-14 14:42:48.256203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.807 qpair failed and we were unable to recover it. 00:29:07.807 [2024-10-14 14:42:48.256412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.807 [2024-10-14 14:42:48.256421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.807 qpair failed and we were unable to recover it. 
00:29:07.807 [2024-10-14 14:42:48.256705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.807 [2024-10-14 14:42:48.256715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.807 qpair failed and we were unable to recover it. 00:29:07.807 [2024-10-14 14:42:48.256916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.807 [2024-10-14 14:42:48.256926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.807 qpair failed and we were unable to recover it. 00:29:07.807 [2024-10-14 14:42:48.257239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.807 [2024-10-14 14:42:48.257250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.807 qpair failed and we were unable to recover it. 00:29:07.807 [2024-10-14 14:42:48.257598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.807 [2024-10-14 14:42:48.257609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.807 qpair failed and we were unable to recover it. 00:29:07.807 [2024-10-14 14:42:48.257896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.807 [2024-10-14 14:42:48.257906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.807 qpair failed and we were unable to recover it. 
00:29:07.807 [2024-10-14 14:42:48.258218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.807 [2024-10-14 14:42:48.258228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.807 qpair failed and we were unable to recover it. 00:29:07.807 [2024-10-14 14:42:48.258393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.807 [2024-10-14 14:42:48.258403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.808 qpair failed and we were unable to recover it. 00:29:07.808 [2024-10-14 14:42:48.258765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.808 [2024-10-14 14:42:48.258776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.808 qpair failed and we were unable to recover it. 00:29:07.808 [2024-10-14 14:42:48.258962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.808 [2024-10-14 14:42:48.258972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.808 qpair failed and we were unable to recover it. 00:29:07.808 [2024-10-14 14:42:48.259162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.808 [2024-10-14 14:42:48.259172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.808 qpair failed and we were unable to recover it. 
00:29:07.808 [2024-10-14 14:42:48.259489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.808 [2024-10-14 14:42:48.259499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.808 qpair failed and we were unable to recover it. 00:29:07.808 [2024-10-14 14:42:48.259775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.808 [2024-10-14 14:42:48.259785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.808 qpair failed and we were unable to recover it. 00:29:07.808 [2024-10-14 14:42:48.259959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.808 [2024-10-14 14:42:48.259969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.808 qpair failed and we were unable to recover it. 00:29:07.808 [2024-10-14 14:42:48.260201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.808 [2024-10-14 14:42:48.260213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.808 qpair failed and we were unable to recover it. 00:29:07.808 [2024-10-14 14:42:48.260395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.808 [2024-10-14 14:42:48.260405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.808 qpair failed and we were unable to recover it. 
00:29:07.808 [2024-10-14 14:42:48.264338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.808 [2024-10-14 14:42:48.264350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.808 qpair failed and we were unable to recover it. 00:29:07.808 [2024-10-14 14:42:48.264751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.808 [2024-10-14 14:42:48.264761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.808 qpair failed and we were unable to recover it. 00:29:07.808 [2024-10-14 14:42:48.265077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.808 [2024-10-14 14:42:48.265089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.808 qpair failed and we were unable to recover it. 00:29:07.808 [2024-10-14 14:42:48.265294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.808 [2024-10-14 14:42:48.265303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.808 qpair failed and we were unable to recover it. 00:29:07.808 [2024-10-14 14:42:48.265658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.808 [2024-10-14 14:42:48.265669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.808 qpair failed and we were unable to recover it. 
00:29:07.808 [2024-10-14 14:42:48.266002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.808 [2024-10-14 14:42:48.266012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.808 qpair failed and we were unable to recover it. 00:29:07.808 [2024-10-14 14:42:48.266314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.808 [2024-10-14 14:42:48.266325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.808 qpair failed and we were unable to recover it. 00:29:07.808 [2024-10-14 14:42:48.266648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.808 [2024-10-14 14:42:48.266659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.808 qpair failed and we were unable to recover it. 00:29:07.808 [2024-10-14 14:42:48.266979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.808 [2024-10-14 14:42:48.266989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.808 qpair failed and we were unable to recover it. 00:29:07.808 [2024-10-14 14:42:48.267314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.808 [2024-10-14 14:42:48.267324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.808 qpair failed and we were unable to recover it. 
00:29:07.808 [2024-10-14 14:42:48.267609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.808 [2024-10-14 14:42:48.267620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.808 qpair failed and we were unable to recover it. 00:29:07.808 [2024-10-14 14:42:48.267828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.808 [2024-10-14 14:42:48.267839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.808 qpair failed and we were unable to recover it. 00:29:07.808 [2024-10-14 14:42:48.268150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.808 [2024-10-14 14:42:48.268161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.808 qpair failed and we were unable to recover it. 00:29:07.808 [2024-10-14 14:42:48.268462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.808 [2024-10-14 14:42:48.268472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.808 qpair failed and we were unable to recover it. 00:29:07.808 [2024-10-14 14:42:48.268757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.808 [2024-10-14 14:42:48.268767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.808 qpair failed and we were unable to recover it. 
00:29:07.808 [2024-10-14 14:42:48.269050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.808 [2024-10-14 14:42:48.269066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.808 qpair failed and we were unable to recover it. 00:29:07.808 [2024-10-14 14:42:48.269428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.808 [2024-10-14 14:42:48.269437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.808 qpair failed and we were unable to recover it. 00:29:07.808 [2024-10-14 14:42:48.269724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.808 [2024-10-14 14:42:48.269735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.808 qpair failed and we were unable to recover it. 00:29:07.808 [2024-10-14 14:42:48.270018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.808 [2024-10-14 14:42:48.270029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.808 qpair failed and we were unable to recover it. 00:29:07.808 [2024-10-14 14:42:48.270214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.808 [2024-10-14 14:42:48.270225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.808 qpair failed and we were unable to recover it. 
00:29:07.808 [2024-10-14 14:42:48.270481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.808 [2024-10-14 14:42:48.270491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.808 qpair failed and we were unable to recover it. 00:29:07.808 [2024-10-14 14:42:48.270654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.808 [2024-10-14 14:42:48.270663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.808 qpair failed and we were unable to recover it. 00:29:07.808 [2024-10-14 14:42:48.270833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.808 [2024-10-14 14:42:48.270843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.808 qpair failed and we were unable to recover it. 00:29:07.808 [2024-10-14 14:42:48.271029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.808 [2024-10-14 14:42:48.271039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.808 qpair failed and we were unable to recover it. 00:29:07.808 [2024-10-14 14:42:48.271346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.808 [2024-10-14 14:42:48.271356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.808 qpair failed and we were unable to recover it. 
00:29:07.808 [2024-10-14 14:42:48.271666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.808 [2024-10-14 14:42:48.271675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.808 qpair failed and we were unable to recover it.
[same three-line sequence — connect() failed, errno = 111 (posix.c:1055); sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 (nvme_tcp.c:2399); "qpair failed and we were unable to recover it." — repeated for every retry from 2024-10-14 14:42:48.271837 through 14:42:48.303913]
00:29:07.811 [2024-10-14 14:42:48.304077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.811 [2024-10-14 14:42:48.304088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.811 qpair failed and we were unable to recover it. 00:29:07.811 [2024-10-14 14:42:48.304378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.811 [2024-10-14 14:42:48.304388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.811 qpair failed and we were unable to recover it. 00:29:07.811 [2024-10-14 14:42:48.304670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.811 [2024-10-14 14:42:48.304680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.811 qpair failed and we were unable to recover it. 00:29:07.811 [2024-10-14 14:42:48.304980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.811 [2024-10-14 14:42:48.304991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.811 qpair failed and we were unable to recover it. 00:29:07.811 [2024-10-14 14:42:48.305383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.811 [2024-10-14 14:42:48.305394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.811 qpair failed and we were unable to recover it. 
00:29:07.811 [2024-10-14 14:42:48.305696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.811 [2024-10-14 14:42:48.305707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.811 qpair failed and we were unable to recover it. 00:29:07.811 [2024-10-14 14:42:48.305990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.811 [2024-10-14 14:42:48.306001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.811 qpair failed and we were unable to recover it. 00:29:07.811 [2024-10-14 14:42:48.306352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.811 [2024-10-14 14:42:48.306363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.811 qpair failed and we were unable to recover it. 00:29:07.811 [2024-10-14 14:42:48.306727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.811 [2024-10-14 14:42:48.306737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.811 qpair failed and we were unable to recover it. 00:29:07.812 [2024-10-14 14:42:48.307047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.812 [2024-10-14 14:42:48.307057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.812 qpair failed and we were unable to recover it. 
00:29:07.812 [2024-10-14 14:42:48.307245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.812 [2024-10-14 14:42:48.307256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.812 qpair failed and we were unable to recover it. 00:29:07.812 [2024-10-14 14:42:48.307455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.812 [2024-10-14 14:42:48.307468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.812 qpair failed and we were unable to recover it. 00:29:07.812 [2024-10-14 14:42:48.307821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.812 [2024-10-14 14:42:48.307832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.812 qpair failed and we were unable to recover it. 00:29:07.812 [2024-10-14 14:42:48.308155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.812 [2024-10-14 14:42:48.308166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.812 qpair failed and we were unable to recover it. 00:29:07.812 [2024-10-14 14:42:48.308480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.812 [2024-10-14 14:42:48.308490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.812 qpair failed and we were unable to recover it. 
00:29:07.812 [2024-10-14 14:42:48.308787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.812 [2024-10-14 14:42:48.308798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.812 qpair failed and we were unable to recover it. 00:29:07.812 [2024-10-14 14:42:48.309002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.812 [2024-10-14 14:42:48.309013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.812 qpair failed and we were unable to recover it. 00:29:07.812 [2024-10-14 14:42:48.309341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.812 [2024-10-14 14:42:48.309352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.812 qpair failed and we were unable to recover it. 00:29:07.812 [2024-10-14 14:42:48.309526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.812 [2024-10-14 14:42:48.309536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.812 qpair failed and we were unable to recover it. 00:29:07.812 [2024-10-14 14:42:48.309710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.812 [2024-10-14 14:42:48.309721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.812 qpair failed and we were unable to recover it. 
00:29:07.812 [2024-10-14 14:42:48.309998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.812 [2024-10-14 14:42:48.310008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.812 qpair failed and we were unable to recover it. 00:29:07.812 [2024-10-14 14:42:48.310129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.812 [2024-10-14 14:42:48.310139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.812 qpair failed and we were unable to recover it. 00:29:07.812 [2024-10-14 14:42:48.310325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.812 [2024-10-14 14:42:48.310335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.812 qpair failed and we were unable to recover it. 00:29:07.812 [2024-10-14 14:42:48.310502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.812 [2024-10-14 14:42:48.310512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.812 qpair failed and we were unable to recover it. 00:29:07.812 [2024-10-14 14:42:48.310736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.812 [2024-10-14 14:42:48.310746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.812 qpair failed and we were unable to recover it. 
00:29:07.812 [2024-10-14 14:42:48.310922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.812 [2024-10-14 14:42:48.310934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.812 qpair failed and we were unable to recover it. 00:29:07.812 [2024-10-14 14:42:48.311110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.812 [2024-10-14 14:42:48.311121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.812 qpair failed and we were unable to recover it. 00:29:07.812 [2024-10-14 14:42:48.311411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.812 [2024-10-14 14:42:48.311422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.812 qpair failed and we were unable to recover it. 00:29:07.812 [2024-10-14 14:42:48.311757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.812 [2024-10-14 14:42:48.311767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.812 qpair failed and we were unable to recover it. 00:29:07.812 [2024-10-14 14:42:48.312157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.812 [2024-10-14 14:42:48.312168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.812 qpair failed and we were unable to recover it. 
00:29:07.812 [2024-10-14 14:42:48.312377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.812 [2024-10-14 14:42:48.312388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.812 qpair failed and we were unable to recover it. 00:29:07.812 [2024-10-14 14:42:48.312671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.812 [2024-10-14 14:42:48.312681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.812 qpair failed and we were unable to recover it. 00:29:07.812 [2024-10-14 14:42:48.313004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.812 [2024-10-14 14:42:48.313014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.812 qpair failed and we were unable to recover it. 00:29:07.812 [2024-10-14 14:42:48.313414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.812 [2024-10-14 14:42:48.313425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.812 qpair failed and we were unable to recover it. 00:29:07.812 [2024-10-14 14:42:48.313709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.812 [2024-10-14 14:42:48.313720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.812 qpair failed and we were unable to recover it. 
00:29:07.812 [2024-10-14 14:42:48.313991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.812 [2024-10-14 14:42:48.314002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.812 qpair failed and we were unable to recover it. 00:29:07.812 [2024-10-14 14:42:48.314317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.812 [2024-10-14 14:42:48.314329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.812 qpair failed and we were unable to recover it. 00:29:07.812 [2024-10-14 14:42:48.314685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.812 [2024-10-14 14:42:48.314695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.812 qpair failed and we were unable to recover it. 00:29:07.812 [2024-10-14 14:42:48.314952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.812 [2024-10-14 14:42:48.314962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.812 qpair failed and we were unable to recover it. 00:29:07.812 [2024-10-14 14:42:48.315303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.812 [2024-10-14 14:42:48.315313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.812 qpair failed and we were unable to recover it. 
00:29:07.812 [2024-10-14 14:42:48.315478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.812 [2024-10-14 14:42:48.315489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.812 qpair failed and we were unable to recover it. 00:29:07.812 [2024-10-14 14:42:48.315763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.812 [2024-10-14 14:42:48.315773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.812 qpair failed and we were unable to recover it. 00:29:07.812 [2024-10-14 14:42:48.316150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.812 [2024-10-14 14:42:48.316161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.812 qpair failed and we were unable to recover it. 00:29:07.812 [2024-10-14 14:42:48.316349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.812 [2024-10-14 14:42:48.316359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.812 qpair failed and we were unable to recover it. 00:29:07.812 [2024-10-14 14:42:48.316675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.812 [2024-10-14 14:42:48.316685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.812 qpair failed and we were unable to recover it. 
00:29:07.813 [2024-10-14 14:42:48.317004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.813 [2024-10-14 14:42:48.317014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.813 qpair failed and we were unable to recover it. 00:29:07.813 [2024-10-14 14:42:48.317313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.813 [2024-10-14 14:42:48.317326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.813 qpair failed and we were unable to recover it. 00:29:07.813 [2024-10-14 14:42:48.317676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.813 [2024-10-14 14:42:48.317687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.813 qpair failed and we were unable to recover it. 00:29:07.813 [2024-10-14 14:42:48.317876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.813 [2024-10-14 14:42:48.317887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.813 qpair failed and we were unable to recover it. 00:29:07.813 [2024-10-14 14:42:48.318208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.813 [2024-10-14 14:42:48.318218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.813 qpair failed and we were unable to recover it. 
00:29:07.813 [2024-10-14 14:42:48.318510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.813 [2024-10-14 14:42:48.318520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.813 qpair failed and we were unable to recover it. 00:29:07.813 [2024-10-14 14:42:48.318817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.813 [2024-10-14 14:42:48.318827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.813 qpair failed and we were unable to recover it. 00:29:07.813 [2024-10-14 14:42:48.319137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.813 [2024-10-14 14:42:48.319150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.813 qpair failed and we were unable to recover it. 00:29:07.813 [2024-10-14 14:42:48.319477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.813 [2024-10-14 14:42:48.319487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.813 qpair failed and we were unable to recover it. 00:29:07.813 [2024-10-14 14:42:48.319775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.813 [2024-10-14 14:42:48.319785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.813 qpair failed and we were unable to recover it. 
00:29:07.813 [2024-10-14 14:42:48.320098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.813 [2024-10-14 14:42:48.320108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.813 qpair failed and we were unable to recover it. 00:29:07.813 [2024-10-14 14:42:48.320403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.813 [2024-10-14 14:42:48.320414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.813 qpair failed and we were unable to recover it. 00:29:07.813 [2024-10-14 14:42:48.320717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.813 [2024-10-14 14:42:48.320728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.813 qpair failed and we were unable to recover it. 00:29:07.813 [2024-10-14 14:42:48.320944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.813 [2024-10-14 14:42:48.320955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.813 qpair failed and we were unable to recover it. 00:29:07.813 [2024-10-14 14:42:48.321167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.813 [2024-10-14 14:42:48.321178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.813 qpair failed and we were unable to recover it. 
00:29:07.813 [2024-10-14 14:42:48.321364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.813 [2024-10-14 14:42:48.321375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.813 qpair failed and we were unable to recover it. 00:29:07.813 [2024-10-14 14:42:48.321700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.813 [2024-10-14 14:42:48.321710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.813 qpair failed and we were unable to recover it. 00:29:07.813 [2024-10-14 14:42:48.321786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.813 [2024-10-14 14:42:48.321796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.813 qpair failed and we were unable to recover it. 00:29:07.813 [2024-10-14 14:42:48.322127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.813 [2024-10-14 14:42:48.322138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.813 qpair failed and we were unable to recover it. 00:29:07.813 [2024-10-14 14:42:48.322450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.813 [2024-10-14 14:42:48.322461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.813 qpair failed and we were unable to recover it. 
00:29:07.813 [2024-10-14 14:42:48.322777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.813 [2024-10-14 14:42:48.322787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.813 qpair failed and we were unable to recover it. 00:29:07.813 [2024-10-14 14:42:48.322982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.813 [2024-10-14 14:42:48.322992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.813 qpair failed and we were unable to recover it. 00:29:07.813 [2024-10-14 14:42:48.323313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.813 [2024-10-14 14:42:48.323323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.813 qpair failed and we were unable to recover it. 00:29:07.813 [2024-10-14 14:42:48.323643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.813 [2024-10-14 14:42:48.323653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.813 qpair failed and we were unable to recover it. 00:29:07.813 [2024-10-14 14:42:48.323941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.813 [2024-10-14 14:42:48.323952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.813 qpair failed and we were unable to recover it. 
00:29:07.813 [2024-10-14 14:42:48.324236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.813 [2024-10-14 14:42:48.324247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.813 qpair failed and we were unable to recover it. 00:29:07.813 [2024-10-14 14:42:48.324564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.813 [2024-10-14 14:42:48.324575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.813 qpair failed and we were unable to recover it. 00:29:07.813 [2024-10-14 14:42:48.324888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.813 [2024-10-14 14:42:48.324898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.813 qpair failed and we were unable to recover it. 00:29:07.813 [2024-10-14 14:42:48.325195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.813 [2024-10-14 14:42:48.325207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.813 qpair failed and we were unable to recover it. 00:29:07.813 [2024-10-14 14:42:48.325513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.813 [2024-10-14 14:42:48.325523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.813 qpair failed and we were unable to recover it. 
00:29:07.813 [2024-10-14 14:42:48.325842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.813 [2024-10-14 14:42:48.325851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.813 qpair failed and we were unable to recover it. 00:29:07.813 [2024-10-14 14:42:48.326140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.813 [2024-10-14 14:42:48.326150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.813 qpair failed and we were unable to recover it. 00:29:07.813 [2024-10-14 14:42:48.326468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.813 [2024-10-14 14:42:48.326478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.813 qpair failed and we were unable to recover it. 00:29:07.813 [2024-10-14 14:42:48.326645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.813 [2024-10-14 14:42:48.326655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.813 qpair failed and we were unable to recover it. 00:29:07.813 [2024-10-14 14:42:48.327041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.813 [2024-10-14 14:42:48.327052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.813 qpair failed and we were unable to recover it. 
00:29:07.813 [2024-10-14 14:42:48.327377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.813 [2024-10-14 14:42:48.327388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.813 qpair failed and we were unable to recover it. 00:29:07.813 [2024-10-14 14:42:48.327701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.813 [2024-10-14 14:42:48.327710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.813 qpair failed and we were unable to recover it. 00:29:07.813 [2024-10-14 14:42:48.327880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.813 [2024-10-14 14:42:48.327889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.813 qpair failed and we were unable to recover it. 00:29:07.813 [2024-10-14 14:42:48.328160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.813 [2024-10-14 14:42:48.328170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.813 qpair failed and we were unable to recover it. 00:29:07.813 [2024-10-14 14:42:48.328540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.813 [2024-10-14 14:42:48.328551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.813 qpair failed and we were unable to recover it. 
00:29:07.813 [2024-10-14 14:42:48.328898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.813 [2024-10-14 14:42:48.328908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.813 qpair failed and we were unable to recover it.
00:29:07.814 [2024-10-14 14:42:48.329218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.814 [2024-10-14 14:42:48.329228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.814 qpair failed and we were unable to recover it.
00:29:07.814 [2024-10-14 14:42:48.329533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.814 [2024-10-14 14:42:48.329543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.814 qpair failed and we were unable to recover it.
00:29:07.814 [2024-10-14 14:42:48.329837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.814 [2024-10-14 14:42:48.329846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.814 qpair failed and we were unable to recover it.
00:29:07.814 [2024-10-14 14:42:48.330009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.814 [2024-10-14 14:42:48.330018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.814 qpair failed and we were unable to recover it.
00:29:07.814 [2024-10-14 14:42:48.330245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.814 [2024-10-14 14:42:48.330255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.814 qpair failed and we were unable to recover it.
00:29:07.814 [2024-10-14 14:42:48.330553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.814 [2024-10-14 14:42:48.330563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.814 qpair failed and we were unable to recover it.
00:29:07.814 [2024-10-14 14:42:48.330851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.814 [2024-10-14 14:42:48.330862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.814 qpair failed and we were unable to recover it.
00:29:07.814 [2024-10-14 14:42:48.331173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.814 [2024-10-14 14:42:48.331183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.814 qpair failed and we were unable to recover it.
00:29:07.814 [2024-10-14 14:42:48.331387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.814 [2024-10-14 14:42:48.331404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.814 qpair failed and we were unable to recover it.
00:29:07.814 [2024-10-14 14:42:48.331566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.814 [2024-10-14 14:42:48.331576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.814 qpair failed and we were unable to recover it.
00:29:07.814 [2024-10-14 14:42:48.331900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.814 [2024-10-14 14:42:48.331910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.814 qpair failed and we were unable to recover it.
00:29:07.814 [2024-10-14 14:42:48.332236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.814 [2024-10-14 14:42:48.332247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.814 qpair failed and we were unable to recover it.
00:29:07.814 [2024-10-14 14:42:48.332576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.814 [2024-10-14 14:42:48.332585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.814 qpair failed and we were unable to recover it.
00:29:07.814 [2024-10-14 14:42:48.332740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.814 [2024-10-14 14:42:48.332749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.814 qpair failed and we were unable to recover it.
00:29:07.814 [2024-10-14 14:42:48.333078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.814 [2024-10-14 14:42:48.333088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.814 qpair failed and we were unable to recover it.
00:29:07.814 [2024-10-14 14:42:48.333462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.814 [2024-10-14 14:42:48.333472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.814 qpair failed and we were unable to recover it.
00:29:07.814 [2024-10-14 14:42:48.333666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.814 [2024-10-14 14:42:48.333682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.814 qpair failed and we were unable to recover it.
00:29:07.814 [2024-10-14 14:42:48.334008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.814 [2024-10-14 14:42:48.334018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.814 qpair failed and we were unable to recover it.
00:29:07.814 [2024-10-14 14:42:48.334351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.814 [2024-10-14 14:42:48.334361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.814 qpair failed and we were unable to recover it.
00:29:07.814 [2024-10-14 14:42:48.334675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.814 [2024-10-14 14:42:48.334685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.814 qpair failed and we were unable to recover it.
00:29:07.814 [2024-10-14 14:42:48.335069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.814 [2024-10-14 14:42:48.335079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.814 qpair failed and we were unable to recover it.
00:29:07.814 [2024-10-14 14:42:48.335471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.814 [2024-10-14 14:42:48.335481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.814 qpair failed and we were unable to recover it.
00:29:07.814 [2024-10-14 14:42:48.335829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.814 [2024-10-14 14:42:48.335838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.814 qpair failed and we were unable to recover it.
00:29:07.814 [2024-10-14 14:42:48.336128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.814 [2024-10-14 14:42:48.336139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.814 qpair failed and we were unable to recover it.
00:29:07.814 [2024-10-14 14:42:48.336244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.814 [2024-10-14 14:42:48.336254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.814 qpair failed and we were unable to recover it.
00:29:07.814 [2024-10-14 14:42:48.336541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.814 [2024-10-14 14:42:48.336550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.814 qpair failed and we were unable to recover it.
00:29:07.814 [2024-10-14 14:42:48.336869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.814 [2024-10-14 14:42:48.336879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.814 qpair failed and we were unable to recover it.
00:29:07.814 [2024-10-14 14:42:48.337198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.814 [2024-10-14 14:42:48.337208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.814 qpair failed and we were unable to recover it.
00:29:07.814 [2024-10-14 14:42:48.337251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.814 [2024-10-14 14:42:48.337260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.814 qpair failed and we were unable to recover it.
00:29:07.814 [2024-10-14 14:42:48.337339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.814 [2024-10-14 14:42:48.337348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.814 qpair failed and we were unable to recover it.
00:29:07.814 [2024-10-14 14:42:48.337661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.814 [2024-10-14 14:42:48.337671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.814 qpair failed and we were unable to recover it.
00:29:07.814 [2024-10-14 14:42:48.337956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.814 [2024-10-14 14:42:48.337965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.814 qpair failed and we were unable to recover it.
00:29:07.814 [2024-10-14 14:42:48.338184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.814 [2024-10-14 14:42:48.338194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.814 qpair failed and we were unable to recover it.
00:29:07.814 [2024-10-14 14:42:48.338530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.814 [2024-10-14 14:42:48.338540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.814 qpair failed and we were unable to recover it.
00:29:07.814 [2024-10-14 14:42:48.338897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.814 [2024-10-14 14:42:48.338909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.814 qpair failed and we were unable to recover it.
00:29:07.814 [2024-10-14 14:42:48.339192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.814 [2024-10-14 14:42:48.339202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.814 qpair failed and we were unable to recover it.
00:29:07.814 [2024-10-14 14:42:48.339522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.814 [2024-10-14 14:42:48.339531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.814 qpair failed and we were unable to recover it.
00:29:07.814 [2024-10-14 14:42:48.339691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.814 [2024-10-14 14:42:48.339700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.814 qpair failed and we were unable to recover it.
00:29:07.814 [2024-10-14 14:42:48.340014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.814 [2024-10-14 14:42:48.340023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.814 qpair failed and we were unable to recover it.
00:29:07.814 [2024-10-14 14:42:48.340176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.814 [2024-10-14 14:42:48.340186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.814 qpair failed and we were unable to recover it.
00:29:07.814 [2024-10-14 14:42:48.340407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.814 [2024-10-14 14:42:48.340417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.815 qpair failed and we were unable to recover it.
00:29:07.815 [2024-10-14 14:42:48.340604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.815 [2024-10-14 14:42:48.340614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.815 qpair failed and we were unable to recover it.
00:29:07.815 [2024-10-14 14:42:48.340944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.815 [2024-10-14 14:42:48.340954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.815 qpair failed and we were unable to recover it.
00:29:07.815 [2024-10-14 14:42:48.341121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.815 [2024-10-14 14:42:48.341131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.815 qpair failed and we were unable to recover it.
00:29:07.815 [2024-10-14 14:42:48.341434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.815 [2024-10-14 14:42:48.341444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.815 qpair failed and we were unable to recover it.
00:29:07.815 [2024-10-14 14:42:48.341622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.815 [2024-10-14 14:42:48.341633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.815 qpair failed and we were unable to recover it.
00:29:07.815 [2024-10-14 14:42:48.341923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.815 [2024-10-14 14:42:48.341933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.815 qpair failed and we were unable to recover it.
00:29:07.815 [2024-10-14 14:42:48.342254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.815 [2024-10-14 14:42:48.342264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.815 qpair failed and we were unable to recover it.
00:29:07.815 [2024-10-14 14:42:48.342497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.815 [2024-10-14 14:42:48.342507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.815 qpair failed and we were unable to recover it.
00:29:07.815 [2024-10-14 14:42:48.342832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.815 [2024-10-14 14:42:48.342841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.815 qpair failed and we were unable to recover it.
00:29:07.815 [2024-10-14 14:42:48.343175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.815 [2024-10-14 14:42:48.343185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.815 qpair failed and we were unable to recover it.
00:29:07.815 [2024-10-14 14:42:48.343362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.815 [2024-10-14 14:42:48.343373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.815 qpair failed and we were unable to recover it.
00:29:07.815 [2024-10-14 14:42:48.343675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.815 [2024-10-14 14:42:48.343685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.815 qpair failed and we were unable to recover it.
00:29:07.815 [2024-10-14 14:42:48.343981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.815 [2024-10-14 14:42:48.343990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.815 qpair failed and we were unable to recover it.
00:29:07.815 [2024-10-14 14:42:48.344308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.815 [2024-10-14 14:42:48.344318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.815 qpair failed and we were unable to recover it.
00:29:07.815 [2024-10-14 14:42:48.344627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.815 [2024-10-14 14:42:48.344636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.815 qpair failed and we were unable to recover it.
00:29:07.815 [2024-10-14 14:42:48.344796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.815 [2024-10-14 14:42:48.344805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.815 qpair failed and we were unable to recover it.
00:29:07.815 [2024-10-14 14:42:48.345058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.815 [2024-10-14 14:42:48.345072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.815 qpair failed and we were unable to recover it.
00:29:07.815 [2024-10-14 14:42:48.345383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.815 [2024-10-14 14:42:48.345393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.815 qpair failed and we were unable to recover it.
00:29:07.815 [2024-10-14 14:42:48.345726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.815 [2024-10-14 14:42:48.345735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.815 qpair failed and we were unable to recover it.
00:29:07.815 [2024-10-14 14:42:48.346059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.815 [2024-10-14 14:42:48.346072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.815 qpair failed and we were unable to recover it.
00:29:07.815 [2024-10-14 14:42:48.346392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.815 [2024-10-14 14:42:48.346406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.815 qpair failed and we were unable to recover it.
00:29:07.815 [2024-10-14 14:42:48.346583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.815 [2024-10-14 14:42:48.346594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.815 qpair failed and we were unable to recover it.
00:29:07.815 [2024-10-14 14:42:48.346858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.815 [2024-10-14 14:42:48.346869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.815 qpair failed and we were unable to recover it.
00:29:07.815 [2024-10-14 14:42:48.347176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.815 [2024-10-14 14:42:48.347186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.815 qpair failed and we were unable to recover it.
00:29:07.815 [2024-10-14 14:42:48.347375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.815 [2024-10-14 14:42:48.347385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.815 qpair failed and we were unable to recover it.
00:29:07.815 [2024-10-14 14:42:48.347714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.815 [2024-10-14 14:42:48.347724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.815 qpair failed and we were unable to recover it.
00:29:07.815 [2024-10-14 14:42:48.348140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.815 [2024-10-14 14:42:48.348150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.815 qpair failed and we were unable to recover it.
00:29:07.815 [2024-10-14 14:42:48.348312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.815 [2024-10-14 14:42:48.348323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.815 qpair failed and we were unable to recover it.
00:29:07.815 [2024-10-14 14:42:48.348652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.815 [2024-10-14 14:42:48.348662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.815 qpair failed and we were unable to recover it.
00:29:07.815 [2024-10-14 14:42:48.348993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.815 [2024-10-14 14:42:48.349002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.815 qpair failed and we were unable to recover it.
00:29:07.815 [2024-10-14 14:42:48.349386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.815 [2024-10-14 14:42:48.349396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.815 qpair failed and we were unable to recover it.
00:29:07.815 [2024-10-14 14:42:48.349580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.815 [2024-10-14 14:42:48.349589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.815 qpair failed and we were unable to recover it.
00:29:07.815 [2024-10-14 14:42:48.349929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.815 [2024-10-14 14:42:48.349938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.815 qpair failed and we were unable to recover it.
00:29:07.815 [2024-10-14 14:42:48.350272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.815 [2024-10-14 14:42:48.350282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.815 qpair failed and we were unable to recover it.
00:29:07.815 [2024-10-14 14:42:48.350601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.815 [2024-10-14 14:42:48.350611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.815 qpair failed and we were unable to recover it.
00:29:07.815 [2024-10-14 14:42:48.350995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.815 [2024-10-14 14:42:48.351005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.815 qpair failed and we were unable to recover it.
00:29:07.815 [2024-10-14 14:42:48.351170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.815 [2024-10-14 14:42:48.351181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.815 qpair failed and we were unable to recover it.
00:29:07.815 [2024-10-14 14:42:48.351503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.815 [2024-10-14 14:42:48.351513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.815 qpair failed and we were unable to recover it.
00:29:07.815 [2024-10-14 14:42:48.351812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.815 [2024-10-14 14:42:48.351822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.815 qpair failed and we were unable to recover it.
00:29:07.815 [2024-10-14 14:42:48.352134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.815 [2024-10-14 14:42:48.352144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.815 qpair failed and we were unable to recover it.
00:29:07.815 [2024-10-14 14:42:48.352548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.816 [2024-10-14 14:42:48.352558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.816 qpair failed and we were unable to recover it.
00:29:07.816 [2024-10-14 14:42:48.352862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.816 [2024-10-14 14:42:48.352873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.816 qpair failed and we were unable to recover it.
00:29:07.816 [2024-10-14 14:42:48.353189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.816 [2024-10-14 14:42:48.353200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.816 qpair failed and we were unable to recover it.
00:29:07.816 [2024-10-14 14:42:48.353478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.816 [2024-10-14 14:42:48.353489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.816 qpair failed and we were unable to recover it.
00:29:07.816 [2024-10-14 14:42:48.353680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.816 [2024-10-14 14:42:48.353690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.816 qpair failed and we were unable to recover it.
00:29:07.816 [2024-10-14 14:42:48.354003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.816 [2024-10-14 14:42:48.354014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.816 qpair failed and we were unable to recover it.
00:29:07.816 [2024-10-14 14:42:48.354315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.816 [2024-10-14 14:42:48.354325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.816 qpair failed and we were unable to recover it.
00:29:07.816 [2024-10-14 14:42:48.354611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.816 [2024-10-14 14:42:48.354621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.816 qpair failed and we were unable to recover it.
00:29:07.816 [2024-10-14 14:42:48.354909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.816 [2024-10-14 14:42:48.354920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.816 qpair failed and we were unable to recover it.
00:29:07.816 [2024-10-14 14:42:48.355230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.816 [2024-10-14 14:42:48.355240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.816 qpair failed and we were unable to recover it.
00:29:07.816 [2024-10-14 14:42:48.355539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.816 [2024-10-14 14:42:48.355549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.816 qpair failed and we were unable to recover it.
00:29:07.816 [2024-10-14 14:42:48.355849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.816 [2024-10-14 14:42:48.355858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.816 qpair failed and we were unable to recover it.
00:29:07.816 [2024-10-14 14:42:48.356050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.816 [2024-10-14 14:42:48.356061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.816 qpair failed and we were unable to recover it.
00:29:07.816 [2024-10-14 14:42:48.356284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.816 [2024-10-14 14:42:48.356295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.816 qpair failed and we were unable to recover it.
00:29:07.816 [2024-10-14 14:42:48.356637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.816 [2024-10-14 14:42:48.356647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.816 qpair failed and we were unable to recover it.
00:29:07.816 [2024-10-14 14:42:48.356801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.816 [2024-10-14 14:42:48.356811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.816 qpair failed and we were unable to recover it.
00:29:07.816 [2024-10-14 14:42:48.357211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.816 [2024-10-14 14:42:48.357220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.816 qpair failed and we were unable to recover it.
00:29:07.816 [2024-10-14 14:42:48.357269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.816 [2024-10-14 14:42:48.357278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.816 qpair failed and we were unable to recover it.
00:29:07.816 [2024-10-14 14:42:48.357554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.816 [2024-10-14 14:42:48.357563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.816 qpair failed and we were unable to recover it.
00:29:07.816 [2024-10-14 14:42:48.357858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.816 [2024-10-14 14:42:48.357868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.816 qpair failed and we were unable to recover it.
00:29:07.816 [2024-10-14 14:42:48.358044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.816 [2024-10-14 14:42:48.358054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.816 qpair failed and we were unable to recover it.
00:29:07.816 [2024-10-14 14:42:48.358438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.816 [2024-10-14 14:42:48.358451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.816 qpair failed and we were unable to recover it.
00:29:07.816 [2024-10-14 14:42:48.358633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.816 [2024-10-14 14:42:48.358643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.816 qpair failed and we were unable to recover it.
00:29:07.816 [2024-10-14 14:42:48.358969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.816 [2024-10-14 14:42:48.358980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.816 qpair failed and we were unable to recover it.
00:29:07.816 [2024-10-14 14:42:48.359297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.816 [2024-10-14 14:42:48.359307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.816 qpair failed and we were unable to recover it.
00:29:07.816 [2024-10-14 14:42:48.359598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.816 [2024-10-14 14:42:48.359608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.816 qpair failed and we were unable to recover it.
00:29:07.816 [2024-10-14 14:42:48.359938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.816 [2024-10-14 14:42:48.359948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.816 qpair failed and we were unable to recover it.
00:29:07.816 [2024-10-14 14:42:48.360231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.816 [2024-10-14 14:42:48.360241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.816 qpair failed and we were unable to recover it.
00:29:07.816 [2024-10-14 14:42:48.360427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.816 [2024-10-14 14:42:48.360437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.816 qpair failed and we were unable to recover it.
00:29:07.816 [2024-10-14 14:42:48.360671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.816 [2024-10-14 14:42:48.360681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.816 qpair failed and we were unable to recover it.
00:29:07.816 [2024-10-14 14:42:48.360961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.816 [2024-10-14 14:42:48.360970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.816 qpair failed and we were unable to recover it.
00:29:07.816 [2024-10-14 14:42:48.361171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.816 [2024-10-14 14:42:48.361181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.816 qpair failed and we were unable to recover it.
00:29:07.816 [2024-10-14 14:42:48.361456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.816 [2024-10-14 14:42:48.361467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.816 qpair failed and we were unable to recover it.
00:29:07.816 [2024-10-14 14:42:48.361773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.816 [2024-10-14 14:42:48.361783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.816 qpair failed and we were unable to recover it.
00:29:07.816 [2024-10-14 14:42:48.362081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.816 [2024-10-14 14:42:48.362093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.816 qpair failed and we were unable to recover it.
00:29:07.816 14:42:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:29:07.816 [2024-10-14 14:42:48.362477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.816 [2024-10-14 14:42:48.362489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.816 qpair failed and we were unable to recover it.
00:29:07.816 14:42:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0
00:29:07.816 [2024-10-14 14:42:48.362800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.816 [2024-10-14 14:42:48.362810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.816 qpair failed and we were unable to recover it.
00:29:07.816 14:42:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:29:07.816 [2024-10-14 14:42:48.363085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.816 [2024-10-14 14:42:48.363097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.816 qpair failed and we were unable to recover it.
00:29:07.816 14:42:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable
00:29:07.816 14:42:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:07.816 [2024-10-14 14:42:48.363417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.816 [2024-10-14 14:42:48.363428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.816 qpair failed and we were unable to recover it.
00:29:07.816 [2024-10-14 14:42:48.363592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.817 [2024-10-14 14:42:48.363601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.817 qpair failed and we were unable to recover it.
00:29:07.817 [2024-10-14 14:42:48.363887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.817 [2024-10-14 14:42:48.363898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.817 qpair failed and we were unable to recover it.
00:29:07.817 [2024-10-14 14:42:48.364184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.817 [2024-10-14 14:42:48.364194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.817 qpair failed and we were unable to recover it.
00:29:07.817 [2024-10-14 14:42:48.364553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.817 [2024-10-14 14:42:48.364564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.817 qpair failed and we were unable to recover it.
00:29:07.817 [2024-10-14 14:42:48.364848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.817 [2024-10-14 14:42:48.364858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.817 qpair failed and we were unable to recover it.
00:29:07.817 [2024-10-14 14:42:48.365021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.817 [2024-10-14 14:42:48.365032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.817 qpair failed and we were unable to recover it.
00:29:07.817 [2024-10-14 14:42:48.365332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.817 [2024-10-14 14:42:48.365342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.817 qpair failed and we were unable to recover it.
00:29:07.817 [2024-10-14 14:42:48.365642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.817 [2024-10-14 14:42:48.365655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.817 qpair failed and we were unable to recover it.
00:29:07.817 [2024-10-14 14:42:48.365814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.817 [2024-10-14 14:42:48.365823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.817 qpair failed and we were unable to recover it.
00:29:07.817 [2024-10-14 14:42:48.366164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.817 [2024-10-14 14:42:48.366174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.817 qpair failed and we were unable to recover it.
00:29:07.817 [2024-10-14 14:42:48.366475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.817 [2024-10-14 14:42:48.366485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.817 qpair failed and we were unable to recover it.
00:29:07.817 [2024-10-14 14:42:48.366763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.817 [2024-10-14 14:42:48.366773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.817 qpair failed and we were unable to recover it.
00:29:07.817 [2024-10-14 14:42:48.367103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.817 [2024-10-14 14:42:48.367114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.817 qpair failed and we were unable to recover it.
00:29:07.817 [2024-10-14 14:42:48.367317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.817 [2024-10-14 14:42:48.367327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.817 qpair failed and we were unable to recover it.
00:29:07.817 [2024-10-14 14:42:48.367647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.817 [2024-10-14 14:42:48.367658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.817 qpair failed and we were unable to recover it.
00:29:07.817 [2024-10-14 14:42:48.367940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.817 [2024-10-14 14:42:48.367951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.817 qpair failed and we were unable to recover it.
00:29:07.817 [2024-10-14 14:42:48.368250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.817 [2024-10-14 14:42:48.368260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.817 qpair failed and we were unable to recover it.
00:29:07.817 [2024-10-14 14:42:48.368594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.817 [2024-10-14 14:42:48.368604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.817 qpair failed and we were unable to recover it.
00:29:07.817 [2024-10-14 14:42:48.368882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.817 [2024-10-14 14:42:48.368891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.817 qpair failed and we were unable to recover it.
00:29:07.817 [2024-10-14 14:42:48.369178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.817 [2024-10-14 14:42:48.369189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.817 qpair failed and we were unable to recover it.
00:29:07.817 [2024-10-14 14:42:48.369501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.817 [2024-10-14 14:42:48.369511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.817 qpair failed and we were unable to recover it.
00:29:07.817 [2024-10-14 14:42:48.369797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.817 [2024-10-14 14:42:48.369808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.817 qpair failed and we were unable to recover it.
00:29:07.817 [2024-10-14 14:42:48.369978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.817 [2024-10-14 14:42:48.369988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.817 qpair failed and we were unable to recover it.
00:29:07.817 [2024-10-14 14:42:48.370374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.817 [2024-10-14 14:42:48.370385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.817 qpair failed and we were unable to recover it.
00:29:07.817 [2024-10-14 14:42:48.370734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.817 [2024-10-14 14:42:48.370744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.817 qpair failed and we were unable to recover it.
00:29:07.817 [2024-10-14 14:42:48.371034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.817 [2024-10-14 14:42:48.371045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.817 qpair failed and we were unable to recover it.
00:29:07.817 [2024-10-14 14:42:48.371366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.817 [2024-10-14 14:42:48.371377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.817 qpair failed and we were unable to recover it.
00:29:07.817 [2024-10-14 14:42:48.371663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.817 [2024-10-14 14:42:48.371673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.817 qpair failed and we were unable to recover it.
00:29:07.817 [2024-10-14 14:42:48.371996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.817 [2024-10-14 14:42:48.372007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.817 qpair failed and we were unable to recover it.
00:29:07.817 [2024-10-14 14:42:48.372151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.817 [2024-10-14 14:42:48.372162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.817 qpair failed and we were unable to recover it.
00:29:07.817 [2024-10-14 14:42:48.372335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.817 [2024-10-14 14:42:48.372346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.817 qpair failed and we were unable to recover it.
00:29:07.817 [2024-10-14 14:42:48.372504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.817 [2024-10-14 14:42:48.372515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.817 qpair failed and we were unable to recover it.
00:29:07.817 [2024-10-14 14:42:48.372560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.817 [2024-10-14 14:42:48.372570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.817 qpair failed and we were unable to recover it.
00:29:07.817 [2024-10-14 14:42:48.372873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.817 [2024-10-14 14:42:48.372883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.817 qpair failed and we were unable to recover it.
00:29:07.817 [2024-10-14 14:42:48.373197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.817 [2024-10-14 14:42:48.373207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.817 qpair failed and we were unable to recover it.
00:29:07.817 [2024-10-14 14:42:48.373530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.817 [2024-10-14 14:42:48.373541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.817 qpair failed and we were unable to recover it.
00:29:07.817 [2024-10-14 14:42:48.373715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.817 [2024-10-14 14:42:48.373725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.818 qpair failed and we were unable to recover it.
00:29:07.818 [2024-10-14 14:42:48.374030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.818 [2024-10-14 14:42:48.374041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.818 qpair failed and we were unable to recover it.
00:29:07.818 [2024-10-14 14:42:48.374433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.818 [2024-10-14 14:42:48.374443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.818 qpair failed and we were unable to recover it.
00:29:07.818 [2024-10-14 14:42:48.374665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.818 [2024-10-14 14:42:48.374676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.818 qpair failed and we were unable to recover it.
00:29:07.818 [2024-10-14 14:42:48.375004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.818 [2024-10-14 14:42:48.375014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.818 qpair failed and we were unable to recover it.
00:29:07.818 [2024-10-14 14:42:48.375304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.818 [2024-10-14 14:42:48.375314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.818 qpair failed and we were unable to recover it.
00:29:07.818 [2024-10-14 14:42:48.375474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.818 [2024-10-14 14:42:48.375486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.818 qpair failed and we were unable to recover it.
00:29:07.818 [2024-10-14 14:42:48.375798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.818 [2024-10-14 14:42:48.375809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.818 qpair failed and we were unable to recover it.
00:29:07.818 [2024-10-14 14:42:48.376130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.818 [2024-10-14 14:42:48.376143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.818 qpair failed and we were unable to recover it.
00:29:07.818 [2024-10-14 14:42:48.376473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.818 [2024-10-14 14:42:48.376483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.818 qpair failed and we were unable to recover it.
00:29:07.818 [2024-10-14 14:42:48.376791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.818 [2024-10-14 14:42:48.376801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.818 qpair failed and we were unable to recover it.
00:29:07.818 [2024-10-14 14:42:48.377092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.818 [2024-10-14 14:42:48.377101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.818 qpair failed and we were unable to recover it.
00:29:07.818 [2024-10-14 14:42:48.377407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.818 [2024-10-14 14:42:48.377419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.818 qpair failed and we were unable to recover it.
00:29:07.818 [2024-10-14 14:42:48.377727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.818 [2024-10-14 14:42:48.377737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.818 qpair failed and we were unable to recover it.
00:29:07.818 [2024-10-14 14:42:48.378085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.818 [2024-10-14 14:42:48.378097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.818 qpair failed and we were unable to recover it.
00:29:07.818 [2024-10-14 14:42:48.378275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.818 [2024-10-14 14:42:48.378286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.818 qpair failed and we were unable to recover it.
00:29:07.818 [2024-10-14 14:42:48.378607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.818 [2024-10-14 14:42:48.378617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.818 qpair failed and we were unable to recover it.
00:29:07.818 [2024-10-14 14:42:48.378964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.818 [2024-10-14 14:42:48.378975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.818 qpair failed and we were unable to recover it.
00:29:07.818 [2024-10-14 14:42:48.379289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.818 [2024-10-14 14:42:48.379299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.818 qpair failed and we were unable to recover it.
00:29:07.818 [2024-10-14 14:42:48.379506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.818 [2024-10-14 14:42:48.379516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.818 qpair failed and we were unable to recover it.
00:29:07.818 [2024-10-14 14:42:48.379829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.818 [2024-10-14 14:42:48.379839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.818 qpair failed and we were unable to recover it.
00:29:07.818 [2024-10-14 14:42:48.380133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.818 [2024-10-14 14:42:48.380143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.818 qpair failed and we were unable to recover it.
00:29:07.818 [2024-10-14 14:42:48.380443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.818 [2024-10-14 14:42:48.380453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.818 qpair failed and we were unable to recover it.
00:29:07.818 [2024-10-14 14:42:48.380758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.818 [2024-10-14 14:42:48.380767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.818 qpair failed and we were unable to recover it.
00:29:07.818 [2024-10-14 14:42:48.380936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.818 [2024-10-14 14:42:48.380946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.818 qpair failed and we were unable to recover it.
00:29:07.818 [2024-10-14 14:42:48.381133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.818 [2024-10-14 14:42:48.381143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.818 qpair failed and we were unable to recover it.
00:29:07.818 [2024-10-14 14:42:48.381408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.818 [2024-10-14 14:42:48.381419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.818 qpair failed and we were unable to recover it.
00:29:07.818 [2024-10-14 14:42:48.381724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.818 [2024-10-14 14:42:48.381733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.818 qpair failed and we were unable to recover it.
00:29:07.818 [2024-10-14 14:42:48.381900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.818 [2024-10-14 14:42:48.381910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.818 qpair failed and we were unable to recover it.
00:29:07.818 [2024-10-14 14:42:48.382138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.818 [2024-10-14 14:42:48.382148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.818 qpair failed and we were unable to recover it.
00:29:07.818 [2024-10-14 14:42:48.382493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.818 [2024-10-14 14:42:48.382504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420
00:29:07.818 qpair failed and we were unable to recover it.
00:29:07.818 [2024-10-14 14:42:48.382712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.818 [2024-10-14 14:42:48.382723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.818 qpair failed and we were unable to recover it. 00:29:07.818 [2024-10-14 14:42:48.382931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.818 [2024-10-14 14:42:48.382941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.818 qpair failed and we were unable to recover it. 00:29:07.818 [2024-10-14 14:42:48.383124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.818 [2024-10-14 14:42:48.383133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.818 qpair failed and we were unable to recover it. 00:29:07.818 [2024-10-14 14:42:48.383501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.818 [2024-10-14 14:42:48.383511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.818 qpair failed and we were unable to recover it. 00:29:07.818 [2024-10-14 14:42:48.383812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.818 [2024-10-14 14:42:48.383822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.818 qpair failed and we were unable to recover it. 
00:29:07.818 [2024-10-14 14:42:48.383983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.818 [2024-10-14 14:42:48.383993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.818 qpair failed and we were unable to recover it. 00:29:07.818 [2024-10-14 14:42:48.384254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.818 [2024-10-14 14:42:48.384265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.818 qpair failed and we were unable to recover it. 00:29:07.818 [2024-10-14 14:42:48.384625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.818 [2024-10-14 14:42:48.384636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.818 qpair failed and we were unable to recover it. 00:29:07.818 [2024-10-14 14:42:48.384685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.818 [2024-10-14 14:42:48.384696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.818 qpair failed and we were unable to recover it. 00:29:07.818 [2024-10-14 14:42:48.384867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.818 [2024-10-14 14:42:48.384878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.818 qpair failed and we were unable to recover it. 
00:29:07.818 [2024-10-14 14:42:48.385035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.819 [2024-10-14 14:42:48.385046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.819 qpair failed and we were unable to recover it. 00:29:07.819 [2024-10-14 14:42:48.385225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.819 [2024-10-14 14:42:48.385235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.819 qpair failed and we were unable to recover it. 00:29:07.819 [2024-10-14 14:42:48.385425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.819 [2024-10-14 14:42:48.385435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.819 qpair failed and we were unable to recover it. 00:29:07.819 [2024-10-14 14:42:48.385599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.819 [2024-10-14 14:42:48.385609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.819 qpair failed and we were unable to recover it. 00:29:07.819 [2024-10-14 14:42:48.385911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.819 [2024-10-14 14:42:48.385923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.819 qpair failed and we were unable to recover it. 
00:29:07.819 [2024-10-14 14:42:48.386114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.819 [2024-10-14 14:42:48.386125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.819 qpair failed and we were unable to recover it. 00:29:07.819 [2024-10-14 14:42:48.386456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.819 [2024-10-14 14:42:48.386466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.819 qpair failed and we were unable to recover it. 00:29:07.819 [2024-10-14 14:42:48.386791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.819 [2024-10-14 14:42:48.386801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.819 qpair failed and we were unable to recover it. 00:29:07.819 [2024-10-14 14:42:48.387092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.819 [2024-10-14 14:42:48.387102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.819 qpair failed and we were unable to recover it. 00:29:07.819 [2024-10-14 14:42:48.387387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.819 [2024-10-14 14:42:48.387397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.819 qpair failed and we were unable to recover it. 
00:29:07.819 [2024-10-14 14:42:48.387565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.819 [2024-10-14 14:42:48.387576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.819 qpair failed and we were unable to recover it. 00:29:07.819 [2024-10-14 14:42:48.387880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.819 [2024-10-14 14:42:48.387890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.819 qpair failed and we were unable to recover it. 00:29:07.819 [2024-10-14 14:42:48.388211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.819 [2024-10-14 14:42:48.388221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.819 qpair failed and we were unable to recover it. 00:29:07.819 [2024-10-14 14:42:48.388539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.819 [2024-10-14 14:42:48.388550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.819 qpair failed and we were unable to recover it. 00:29:07.819 [2024-10-14 14:42:48.388880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.819 [2024-10-14 14:42:48.388890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.819 qpair failed and we were unable to recover it. 
00:29:07.819 [2024-10-14 14:42:48.389176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.819 [2024-10-14 14:42:48.389187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.819 qpair failed and we were unable to recover it. 00:29:07.819 [2024-10-14 14:42:48.389379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.819 [2024-10-14 14:42:48.389390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.819 qpair failed and we were unable to recover it. 00:29:07.819 [2024-10-14 14:42:48.389708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.819 [2024-10-14 14:42:48.389718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.819 qpair failed and we were unable to recover it. 00:29:07.819 [2024-10-14 14:42:48.390005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.819 [2024-10-14 14:42:48.390015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.819 qpair failed and we were unable to recover it. 00:29:07.819 [2024-10-14 14:42:48.390334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.819 [2024-10-14 14:42:48.390345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.819 qpair failed and we were unable to recover it. 
00:29:07.819 [2024-10-14 14:42:48.390606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.819 [2024-10-14 14:42:48.390616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.819 qpair failed and we were unable to recover it. 00:29:07.819 [2024-10-14 14:42:48.390916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.819 [2024-10-14 14:42:48.390926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.819 qpair failed and we were unable to recover it. 00:29:07.819 [2024-10-14 14:42:48.391231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.819 [2024-10-14 14:42:48.391241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.819 qpair failed and we were unable to recover it. 00:29:07.819 [2024-10-14 14:42:48.391409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.819 [2024-10-14 14:42:48.391418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.819 qpair failed and we were unable to recover it. 00:29:07.819 [2024-10-14 14:42:48.391704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.819 [2024-10-14 14:42:48.391715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.819 qpair failed and we were unable to recover it. 
00:29:07.819 [2024-10-14 14:42:48.391986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.819 [2024-10-14 14:42:48.391996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.819 qpair failed and we were unable to recover it. 00:29:07.819 [2024-10-14 14:42:48.392355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.819 [2024-10-14 14:42:48.392366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.819 qpair failed and we were unable to recover it. 00:29:07.819 [2024-10-14 14:42:48.392676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.819 [2024-10-14 14:42:48.392688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.819 qpair failed and we were unable to recover it. 00:29:07.819 [2024-10-14 14:42:48.393084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.819 [2024-10-14 14:42:48.393094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.819 qpair failed and we were unable to recover it. 00:29:07.819 [2024-10-14 14:42:48.393396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.819 [2024-10-14 14:42:48.393406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.819 qpair failed and we were unable to recover it. 
00:29:07.819 [2024-10-14 14:42:48.393704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.819 [2024-10-14 14:42:48.393714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.819 qpair failed and we were unable to recover it. 00:29:07.819 [2024-10-14 14:42:48.394008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.819 [2024-10-14 14:42:48.394018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.819 qpair failed and we were unable to recover it. 00:29:07.819 [2024-10-14 14:42:48.394317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.819 [2024-10-14 14:42:48.394326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.819 qpair failed and we were unable to recover it. 00:29:07.819 [2024-10-14 14:42:48.394725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.819 [2024-10-14 14:42:48.394735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.819 qpair failed and we were unable to recover it. 00:29:07.819 [2024-10-14 14:42:48.394949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.819 [2024-10-14 14:42:48.394959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.819 qpair failed and we were unable to recover it. 
00:29:07.819 [2024-10-14 14:42:48.395123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.819 [2024-10-14 14:42:48.395132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.819 qpair failed and we were unable to recover it. 00:29:07.819 [2024-10-14 14:42:48.395459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.819 [2024-10-14 14:42:48.395469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.819 qpair failed and we were unable to recover it. 00:29:07.819 [2024-10-14 14:42:48.395759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.819 [2024-10-14 14:42:48.395769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.819 qpair failed and we were unable to recover it. 00:29:07.819 [2024-10-14 14:42:48.396017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.819 [2024-10-14 14:42:48.396027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.819 qpair failed and we were unable to recover it. 00:29:07.819 [2024-10-14 14:42:48.396360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.819 [2024-10-14 14:42:48.396373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.819 qpair failed and we were unable to recover it. 
00:29:07.819 [2024-10-14 14:42:48.396720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.819 [2024-10-14 14:42:48.396731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.819 qpair failed and we were unable to recover it. 00:29:07.819 [2024-10-14 14:42:48.396781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.820 [2024-10-14 14:42:48.396791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.820 qpair failed and we were unable to recover it. 00:29:07.820 [2024-10-14 14:42:48.396973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.820 [2024-10-14 14:42:48.396984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.820 qpair failed and we were unable to recover it. 00:29:07.820 [2024-10-14 14:42:48.397059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.820 [2024-10-14 14:42:48.397076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.820 qpair failed and we were unable to recover it. 00:29:07.820 [2024-10-14 14:42:48.397398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.820 [2024-10-14 14:42:48.397408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.820 qpair failed and we were unable to recover it. 
00:29:07.820 [2024-10-14 14:42:48.397607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.820 [2024-10-14 14:42:48.397617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.820 qpair failed and we were unable to recover it. 00:29:07.820 [2024-10-14 14:42:48.397957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.820 [2024-10-14 14:42:48.397968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.820 qpair failed and we were unable to recover it. 00:29:07.820 [2024-10-14 14:42:48.398295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.820 [2024-10-14 14:42:48.398306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.820 qpair failed and we were unable to recover it. 00:29:07.820 [2024-10-14 14:42:48.398514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.820 [2024-10-14 14:42:48.398524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.820 qpair failed and we were unable to recover it. 00:29:07.820 [2024-10-14 14:42:48.398869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.820 [2024-10-14 14:42:48.398879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.820 qpair failed and we were unable to recover it. 
00:29:07.820 [2024-10-14 14:42:48.399166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.820 [2024-10-14 14:42:48.399177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.820 qpair failed and we were unable to recover it. 00:29:07.820 [2024-10-14 14:42:48.399371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.820 [2024-10-14 14:42:48.399382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.820 qpair failed and we were unable to recover it. 00:29:07.820 [2024-10-14 14:42:48.399730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.820 [2024-10-14 14:42:48.399740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.820 qpair failed and we were unable to recover it. 00:29:07.820 [2024-10-14 14:42:48.400089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.820 [2024-10-14 14:42:48.400100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.820 qpair failed and we were unable to recover it. 00:29:07.820 [2024-10-14 14:42:48.400458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.820 [2024-10-14 14:42:48.400468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.820 qpair failed and we were unable to recover it. 
00:29:07.820 [2024-10-14 14:42:48.400770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.820 [2024-10-14 14:42:48.400781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.820 qpair failed and we were unable to recover it. 00:29:07.820 [2024-10-14 14:42:48.401109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.820 [2024-10-14 14:42:48.401119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.820 qpair failed and we were unable to recover it. 00:29:07.820 [2024-10-14 14:42:48.401439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.820 [2024-10-14 14:42:48.401448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.820 qpair failed and we were unable to recover it. 00:29:07.820 [2024-10-14 14:42:48.401736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.820 [2024-10-14 14:42:48.401747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.820 qpair failed and we were unable to recover it. 00:29:07.820 14:42:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:07.820 [2024-10-14 14:42:48.401791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.820 [2024-10-14 14:42:48.401810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.820 qpair failed and we were unable to recover it. 
00:29:07.820 [2024-10-14 14:42:48.401953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.820 [2024-10-14 14:42:48.401963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.820 qpair failed and we were unable to recover it. 00:29:07.820 14:42:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:07.820 [2024-10-14 14:42:48.402318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.820 [2024-10-14 14:42:48.402329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.820 qpair failed and we were unable to recover it. 00:29:07.820 [2024-10-14 14:42:48.402515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.820 [2024-10-14 14:42:48.402526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.820 qpair failed and we were unable to recover it. 00:29:07.820 14:42:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.820 [2024-10-14 14:42:48.402713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.820 [2024-10-14 14:42:48.402726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.820 qpair failed and we were unable to recover it. 
00:29:07.820 14:42:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:07.820 [2024-10-14 14:42:48.403037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.820 [2024-10-14 14:42:48.403051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.820 qpair failed and we were unable to recover it. 00:29:07.820 [2024-10-14 14:42:48.403253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.820 [2024-10-14 14:42:48.403263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.820 qpair failed and we were unable to recover it. 00:29:07.820 [2024-10-14 14:42:48.403595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.820 [2024-10-14 14:42:48.403606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.820 qpair failed and we were unable to recover it. 00:29:07.820 [2024-10-14 14:42:48.403915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.820 [2024-10-14 14:42:48.403926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.820 qpair failed and we were unable to recover it. 00:29:07.820 [2024-10-14 14:42:48.404235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.820 [2024-10-14 14:42:48.404245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.820 qpair failed and we were unable to recover it. 
00:29:07.820 [2024-10-14 14:42:48.404511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.820 [2024-10-14 14:42:48.404520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.820 qpair failed and we were unable to recover it. 00:29:07.820 [2024-10-14 14:42:48.404682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.820 [2024-10-14 14:42:48.404691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.820 qpair failed and we were unable to recover it. 00:29:07.820 [2024-10-14 14:42:48.404881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.820 [2024-10-14 14:42:48.404891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.820 qpair failed and we were unable to recover it. 00:29:07.820 [2024-10-14 14:42:48.405060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.820 [2024-10-14 14:42:48.405074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.820 qpair failed and we were unable to recover it. 00:29:07.820 [2024-10-14 14:42:48.405369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.820 [2024-10-14 14:42:48.405380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.820 qpair failed and we were unable to recover it. 
00:29:07.820 [2024-10-14 14:42:48.405696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.820 [2024-10-14 14:42:48.405706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.820 qpair failed and we were unable to recover it. 00:29:07.820 [2024-10-14 14:42:48.405868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.820 [2024-10-14 14:42:48.405878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.820 qpair failed and we were unable to recover it. 00:29:07.820 [2024-10-14 14:42:48.406073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.820 [2024-10-14 14:42:48.406083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.820 qpair failed and we were unable to recover it. 00:29:07.820 [2024-10-14 14:42:48.406443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.820 [2024-10-14 14:42:48.406453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.820 qpair failed and we were unable to recover it. 00:29:07.820 [2024-10-14 14:42:48.406766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.820 [2024-10-14 14:42:48.406776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.820 qpair failed and we were unable to recover it. 
00:29:07.820 [2024-10-14 14:42:48.407095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.820 [2024-10-14 14:42:48.407105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.820 qpair failed and we were unable to recover it. 00:29:07.820 [2024-10-14 14:42:48.407425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.821 [2024-10-14 14:42:48.407435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.821 qpair failed and we were unable to recover it. 00:29:07.821 [2024-10-14 14:42:48.407643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.821 [2024-10-14 14:42:48.407652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.821 qpair failed and we were unable to recover it. 00:29:07.821 [2024-10-14 14:42:48.407915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.821 [2024-10-14 14:42:48.407926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.821 qpair failed and we were unable to recover it. 00:29:07.821 [2024-10-14 14:42:48.408232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.821 [2024-10-14 14:42:48.408242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.821 qpair failed and we were unable to recover it. 
00:29:07.821 [2024-10-14 14:42:48.408551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.821 [2024-10-14 14:42:48.408561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.821 qpair failed and we were unable to recover it. 00:29:07.821 [2024-10-14 14:42:48.408872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.821 [2024-10-14 14:42:48.408883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.821 qpair failed and we were unable to recover it. 00:29:07.821 [2024-10-14 14:42:48.408923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.821 [2024-10-14 14:42:48.408934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.821 qpair failed and we were unable to recover it. 00:29:07.821 [2024-10-14 14:42:48.409227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.821 [2024-10-14 14:42:48.409237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.821 qpair failed and we were unable to recover it. 00:29:07.821 [2024-10-14 14:42:48.409311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.821 [2024-10-14 14:42:48.409320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.821 qpair failed and we were unable to recover it. 
00:29:07.821 [2024-10-14 14:42:48.409533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.821 [2024-10-14 14:42:48.409543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.821 qpair failed and we were unable to recover it. 00:29:07.821 [2024-10-14 14:42:48.409752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.821 [2024-10-14 14:42:48.409761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.821 qpair failed and we were unable to recover it. 00:29:07.821 [2024-10-14 14:42:48.410111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.821 [2024-10-14 14:42:48.410121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.821 qpair failed and we were unable to recover it. 00:29:07.821 [2024-10-14 14:42:48.410319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.821 [2024-10-14 14:42:48.410329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.821 qpair failed and we were unable to recover it. 00:29:07.821 [2024-10-14 14:42:48.410500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.821 [2024-10-14 14:42:48.410510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.821 qpair failed and we were unable to recover it. 
00:29:07.821 [2024-10-14 14:42:48.410901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.821 [2024-10-14 14:42:48.410910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.821 qpair failed and we were unable to recover it. 00:29:07.821 [2024-10-14 14:42:48.411207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.821 [2024-10-14 14:42:48.411217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.821 qpair failed and we were unable to recover it. 00:29:07.821 [2024-10-14 14:42:48.411417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.821 [2024-10-14 14:42:48.411427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.821 qpair failed and we were unable to recover it. 00:29:07.821 [2024-10-14 14:42:48.411767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.821 [2024-10-14 14:42:48.411777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.821 qpair failed and we were unable to recover it. 00:29:07.821 [2024-10-14 14:42:48.412088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.821 [2024-10-14 14:42:48.412099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.821 qpair failed and we were unable to recover it. 
00:29:07.821 [2024-10-14 14:42:48.412311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.821 [2024-10-14 14:42:48.412321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.821 qpair failed and we were unable to recover it. 00:29:07.821 [2024-10-14 14:42:48.412649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.821 [2024-10-14 14:42:48.412659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.821 qpair failed and we were unable to recover it. 00:29:07.821 [2024-10-14 14:42:48.412823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.821 [2024-10-14 14:42:48.412834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.821 qpair failed and we were unable to recover it. 00:29:07.821 [2024-10-14 14:42:48.413064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.821 [2024-10-14 14:42:48.413075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.821 qpair failed and we were unable to recover it. 00:29:07.821 [2024-10-14 14:42:48.413264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.821 [2024-10-14 14:42:48.413275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.821 qpair failed and we were unable to recover it. 
00:29:07.821 [2024-10-14 14:42:48.413447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.821 [2024-10-14 14:42:48.413456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.821 qpair failed and we were unable to recover it. 00:29:07.821 [2024-10-14 14:42:48.413621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.821 [2024-10-14 14:42:48.413633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.821 qpair failed and we were unable to recover it. 00:29:07.821 [2024-10-14 14:42:48.413821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.821 [2024-10-14 14:42:48.413831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.821 qpair failed and we were unable to recover it. 00:29:07.821 [2024-10-14 14:42:48.414024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.821 [2024-10-14 14:42:48.414035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.821 qpair failed and we were unable to recover it. 00:29:07.821 [2024-10-14 14:42:48.414324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.821 [2024-10-14 14:42:48.414334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.821 qpair failed and we were unable to recover it. 
00:29:07.821 [2024-10-14 14:42:48.414619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.821 [2024-10-14 14:42:48.414629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.821 qpair failed and we were unable to recover it. 00:29:07.821 [2024-10-14 14:42:48.414927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.821 [2024-10-14 14:42:48.414936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.821 qpair failed and we were unable to recover it. 00:29:07.821 [2024-10-14 14:42:48.415232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.821 [2024-10-14 14:42:48.415242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.821 qpair failed and we were unable to recover it. 00:29:07.821 [2024-10-14 14:42:48.415422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.821 [2024-10-14 14:42:48.415432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.821 qpair failed and we were unable to recover it. 00:29:07.821 [2024-10-14 14:42:48.415857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.821 [2024-10-14 14:42:48.415866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.821 qpair failed and we were unable to recover it. 
00:29:07.821 [2024-10-14 14:42:48.416087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.821 [2024-10-14 14:42:48.416097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.821 qpair failed and we were unable to recover it. 00:29:07.821 [2024-10-14 14:42:48.416418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.821 [2024-10-14 14:42:48.416428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.821 qpair failed and we were unable to recover it. 00:29:07.821 [2024-10-14 14:42:48.416757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.822 [2024-10-14 14:42:48.416767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.822 qpair failed and we were unable to recover it. 00:29:07.822 [2024-10-14 14:42:48.417075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.822 [2024-10-14 14:42:48.417086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.822 qpair failed and we were unable to recover it. 00:29:07.822 [2024-10-14 14:42:48.417319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.822 [2024-10-14 14:42:48.417329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.822 qpair failed and we were unable to recover it. 
00:29:07.822 [2024-10-14 14:42:48.417721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.822 [2024-10-14 14:42:48.417731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.822 qpair failed and we were unable to recover it. 00:29:07.822 [2024-10-14 14:42:48.417947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.822 [2024-10-14 14:42:48.417957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.822 qpair failed and we were unable to recover it. 00:29:07.822 [2024-10-14 14:42:48.418213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.822 [2024-10-14 14:42:48.418223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.822 qpair failed and we were unable to recover it. 00:29:07.822 [2024-10-14 14:42:48.418504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.822 [2024-10-14 14:42:48.418513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.822 qpair failed and we were unable to recover it. 00:29:07.822 [2024-10-14 14:42:48.418904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.822 [2024-10-14 14:42:48.418914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.822 qpair failed and we were unable to recover it. 
00:29:07.822 [2024-10-14 14:42:48.419216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.822 [2024-10-14 14:42:48.419226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.822 qpair failed and we were unable to recover it. 00:29:07.822 [2024-10-14 14:42:48.419585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.822 [2024-10-14 14:42:48.419595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.822 qpair failed and we were unable to recover it. 00:29:07.822 [2024-10-14 14:42:48.419912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.822 [2024-10-14 14:42:48.419921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.822 qpair failed and we were unable to recover it. 00:29:07.822 [2024-10-14 14:42:48.420273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.822 [2024-10-14 14:42:48.420284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.822 qpair failed and we were unable to recover it. 00:29:07.822 [2024-10-14 14:42:48.420593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.822 [2024-10-14 14:42:48.420603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.822 qpair failed and we were unable to recover it. 
00:29:07.822 [2024-10-14 14:42:48.420911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.822 [2024-10-14 14:42:48.420921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.822 qpair failed and we were unable to recover it. 00:29:07.822 [2024-10-14 14:42:48.421152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.822 [2024-10-14 14:42:48.421163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.822 qpair failed and we were unable to recover it. 00:29:07.822 [2024-10-14 14:42:48.421463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.822 [2024-10-14 14:42:48.421472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.822 qpair failed and we were unable to recover it. 00:29:07.822 [2024-10-14 14:42:48.421664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.822 [2024-10-14 14:42:48.421681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.822 qpair failed and we were unable to recover it. 00:29:07.822 [2024-10-14 14:42:48.422000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.822 [2024-10-14 14:42:48.422010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.822 qpair failed and we were unable to recover it. 
00:29:07.822 [2024-10-14 14:42:48.422195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.822 [2024-10-14 14:42:48.422205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.822 qpair failed and we were unable to recover it. 00:29:07.822 [2024-10-14 14:42:48.422529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.822 [2024-10-14 14:42:48.422539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.822 qpair failed and we were unable to recover it. 00:29:07.822 [2024-10-14 14:42:48.422870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.822 [2024-10-14 14:42:48.422880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.822 qpair failed and we were unable to recover it. 00:29:07.822 [2024-10-14 14:42:48.423185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.822 [2024-10-14 14:42:48.423195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.822 qpair failed and we were unable to recover it. 00:29:07.822 [2024-10-14 14:42:48.423491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.822 [2024-10-14 14:42:48.423501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.822 qpair failed and we were unable to recover it. 
00:29:07.822 [2024-10-14 14:42:48.423884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.822 [2024-10-14 14:42:48.423894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.822 qpair failed and we were unable to recover it. 00:29:07.822 [2024-10-14 14:42:48.424206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.822 [2024-10-14 14:42:48.424216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.822 qpair failed and we were unable to recover it. 00:29:07.822 [2024-10-14 14:42:48.424536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.822 [2024-10-14 14:42:48.424546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.822 qpair failed and we were unable to recover it. 00:29:07.822 [2024-10-14 14:42:48.424879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.822 [2024-10-14 14:42:48.424889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.822 qpair failed and we were unable to recover it. 00:29:07.822 [2024-10-14 14:42:48.425077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.822 [2024-10-14 14:42:48.425089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.822 qpair failed and we were unable to recover it. 
00:29:07.822 [2024-10-14 14:42:48.425278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.822 [2024-10-14 14:42:48.425288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.822 qpair failed and we were unable to recover it. 00:29:07.822 [2024-10-14 14:42:48.425433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.822 [2024-10-14 14:42:48.425443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.822 qpair failed and we were unable to recover it. 00:29:07.822 [2024-10-14 14:42:48.425643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.822 [2024-10-14 14:42:48.425653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.822 qpair failed and we were unable to recover it. 00:29:07.822 [2024-10-14 14:42:48.425793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.822 [2024-10-14 14:42:48.425803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.822 qpair failed and we were unable to recover it. 00:29:07.822 [2024-10-14 14:42:48.426136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.822 [2024-10-14 14:42:48.426146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.822 qpair failed and we were unable to recover it. 
00:29:07.822 [2024-10-14 14:42:48.426431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.822 [2024-10-14 14:42:48.426441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.822 qpair failed and we were unable to recover it. 00:29:07.822 [2024-10-14 14:42:48.426617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.822 [2024-10-14 14:42:48.426627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.822 qpair failed and we were unable to recover it. 00:29:07.822 [2024-10-14 14:42:48.426936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.822 [2024-10-14 14:42:48.426946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.822 qpair failed and we were unable to recover it. 00:29:07.822 [2024-10-14 14:42:48.427238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.822 [2024-10-14 14:42:48.427248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.822 qpair failed and we were unable to recover it. 00:29:07.822 [2024-10-14 14:42:48.427431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.822 [2024-10-14 14:42:48.427441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.822 qpair failed and we were unable to recover it. 
00:29:07.822 [2024-10-14 14:42:48.427731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.822 [2024-10-14 14:42:48.427740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.822 qpair failed and we were unable to recover it. 00:29:07.822 [2024-10-14 14:42:48.427869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.822 [2024-10-14 14:42:48.427878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.822 qpair failed and we were unable to recover it. 00:29:07.822 [2024-10-14 14:42:48.428013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.823 [2024-10-14 14:42:48.428022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.823 qpair failed and we were unable to recover it. 00:29:07.823 [2024-10-14 14:42:48.428397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.823 [2024-10-14 14:42:48.428407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.823 qpair failed and we were unable to recover it. 00:29:07.823 [2024-10-14 14:42:48.428587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.823 [2024-10-14 14:42:48.428597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.823 qpair failed and we were unable to recover it. 
00:29:07.823 Malloc0 00:29:07.823 [2024-10-14 14:42:48.428892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.823 [2024-10-14 14:42:48.428902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.823 qpair failed and we were unable to recover it. 00:29:07.823 [2024-10-14 14:42:48.429114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.823 [2024-10-14 14:42:48.429124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.823 qpair failed and we were unable to recover it. 00:29:07.823 [2024-10-14 14:42:48.429281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.823 [2024-10-14 14:42:48.429290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.823 qpair failed and we were unable to recover it. 00:29:07.823 [2024-10-14 14:42:48.429547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.823 [2024-10-14 14:42:48.429557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.823 qpair failed and we were unable to recover it. 00:29:07.823 14:42:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.823 [2024-10-14 14:42:48.429869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.823 [2024-10-14 14:42:48.429879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.823 qpair failed and we were unable to recover it. 
00:29:07.823 14:42:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:07.823 [2024-10-14 14:42:48.430089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.823 [2024-10-14 14:42:48.430099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.823 qpair failed and we were unable to recover it. 00:29:07.823 [2024-10-14 14:42:48.430282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.823 [2024-10-14 14:42:48.430292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.823 qpair failed and we were unable to recover it. 00:29:07.823 14:42:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.823 14:42:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:07.823 [2024-10-14 14:42:48.430578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.823 [2024-10-14 14:42:48.430589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.823 qpair failed and we were unable to recover it. 00:29:07.823 [2024-10-14 14:42:48.430932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.823 [2024-10-14 14:42:48.430942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.823 qpair failed and we were unable to recover it. 
00:29:07.823 [2024-10-14 14:42:48.431231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.823 [2024-10-14 14:42:48.431241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.823 qpair failed and we were unable to recover it. 00:29:07.823 [2024-10-14 14:42:48.431457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.823 [2024-10-14 14:42:48.431467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.823 qpair failed and we were unable to recover it. 00:29:07.823 [2024-10-14 14:42:48.431664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.823 [2024-10-14 14:42:48.431674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.823 qpair failed and we were unable to recover it. 00:29:07.823 [2024-10-14 14:42:48.431834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.823 [2024-10-14 14:42:48.431843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.823 qpair failed and we were unable to recover it. 00:29:07.823 [2024-10-14 14:42:48.432153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.823 [2024-10-14 14:42:48.432163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.823 qpair failed and we were unable to recover it. 
00:29:07.823 [2024-10-14 14:42:48.432476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.823 [2024-10-14 14:42:48.432486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.823 qpair failed and we were unable to recover it. 00:29:07.823 [2024-10-14 14:42:48.432813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.823 [2024-10-14 14:42:48.432823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.823 qpair failed and we were unable to recover it. 00:29:07.823 [2024-10-14 14:42:48.433121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.823 [2024-10-14 14:42:48.433132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.823 qpair failed and we were unable to recover it. 00:29:07.823 [2024-10-14 14:42:48.433441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.823 [2024-10-14 14:42:48.433451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.823 qpair failed and we were unable to recover it. 00:29:07.823 [2024-10-14 14:42:48.433762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.823 [2024-10-14 14:42:48.433771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.823 qpair failed and we were unable to recover it. 
00:29:07.823 [2024-10-14 14:42:48.434000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.823 [2024-10-14 14:42:48.434010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.823 qpair failed and we were unable to recover it. 00:29:07.823 [2024-10-14 14:42:48.434310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.823 [2024-10-14 14:42:48.434320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.823 qpair failed and we were unable to recover it. 00:29:07.823 [2024-10-14 14:42:48.434605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.823 [2024-10-14 14:42:48.434615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.823 qpair failed and we were unable to recover it. 00:29:07.823 [2024-10-14 14:42:48.434923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.823 [2024-10-14 14:42:48.434932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.823 qpair failed and we were unable to recover it. 00:29:07.823 [2024-10-14 14:42:48.435240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.823 [2024-10-14 14:42:48.435250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.823 qpair failed and we were unable to recover it. 
00:29:07.823 [2024-10-14 14:42:48.435565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.823 [2024-10-14 14:42:48.435574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.823 qpair failed and we were unable to recover it. 00:29:07.823 [2024-10-14 14:42:48.435899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.823 [2024-10-14 14:42:48.435909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.823 qpair failed and we were unable to recover it. 00:29:07.823 [2024-10-14 14:42:48.436208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.823 [2024-10-14 14:42:48.436218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.823 qpair failed and we were unable to recover it. 00:29:07.823 [2024-10-14 14:42:48.436252] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:07.823 [2024-10-14 14:42:48.436387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.823 [2024-10-14 14:42:48.436397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.823 qpair failed and we were unable to recover it. 00:29:07.823 [2024-10-14 14:42:48.436627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.823 [2024-10-14 14:42:48.436637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.823 qpair failed and we were unable to recover it. 
00:29:07.823 [2024-10-14 14:42:48.436817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.823 [2024-10-14 14:42:48.436828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.823 qpair failed and we were unable to recover it. 00:29:07.823 [2024-10-14 14:42:48.437122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.823 [2024-10-14 14:42:48.437132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.823 qpair failed and we were unable to recover it. 00:29:07.823 [2024-10-14 14:42:48.437371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.823 [2024-10-14 14:42:48.437381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.823 qpair failed and we were unable to recover it. 00:29:07.823 [2024-10-14 14:42:48.437566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.823 [2024-10-14 14:42:48.437575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.823 qpair failed and we were unable to recover it. 00:29:07.823 [2024-10-14 14:42:48.437954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.823 [2024-10-14 14:42:48.437963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.823 qpair failed and we were unable to recover it. 
00:29:07.823 [2024-10-14 14:42:48.438258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.823 [2024-10-14 14:42:48.438268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.823 qpair failed and we were unable to recover it. 00:29:07.823 [2024-10-14 14:42:48.438668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.823 [2024-10-14 14:42:48.438678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.823 qpair failed and we were unable to recover it. 00:29:07.824 [2024-10-14 14:42:48.438867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.824 [2024-10-14 14:42:48.438877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.824 qpair failed and we were unable to recover it. 00:29:07.824 [2024-10-14 14:42:48.439229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.824 [2024-10-14 14:42:48.439239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.824 qpair failed and we were unable to recover it. 00:29:07.824 [2024-10-14 14:42:48.439526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.824 [2024-10-14 14:42:48.439536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.824 qpair failed and we were unable to recover it. 
00:29:07.824 [2024-10-14 14:42:48.439848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.824 [2024-10-14 14:42:48.439861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.824 qpair failed and we were unable to recover it. 00:29:07.824 [2024-10-14 14:42:48.440179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.824 [2024-10-14 14:42:48.440189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.824 qpair failed and we were unable to recover it. 00:29:07.824 [2024-10-14 14:42:48.440475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.824 [2024-10-14 14:42:48.440485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.824 qpair failed and we were unable to recover it. 00:29:07.824 [2024-10-14 14:42:48.440800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.824 [2024-10-14 14:42:48.440809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.824 qpair failed and we were unable to recover it. 00:29:07.824 [2024-10-14 14:42:48.441105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.824 [2024-10-14 14:42:48.441115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.824 qpair failed and we were unable to recover it. 
00:29:07.824 [2024-10-14 14:42:48.441444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.824 [2024-10-14 14:42:48.441454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.824 qpair failed and we were unable to recover it. 00:29:07.824 [2024-10-14 14:42:48.441749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.824 [2024-10-14 14:42:48.441759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.824 qpair failed and we were unable to recover it. 00:29:07.824 [2024-10-14 14:42:48.442055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.824 [2024-10-14 14:42:48.442068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.824 qpair failed and we were unable to recover it. 00:29:07.824 [2024-10-14 14:42:48.442385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.824 [2024-10-14 14:42:48.442395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.824 qpair failed and we were unable to recover it. 00:29:07.824 [2024-10-14 14:42:48.442706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.824 [2024-10-14 14:42:48.442716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.824 qpair failed and we were unable to recover it. 
00:29:07.824 [2024-10-14 14:42:48.443075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.824 [2024-10-14 14:42:48.443085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.824 qpair failed and we were unable to recover it. 00:29:07.824 [2024-10-14 14:42:48.443403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.824 [2024-10-14 14:42:48.443412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.824 qpair failed and we were unable to recover it. 00:29:07.824 [2024-10-14 14:42:48.443591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.824 [2024-10-14 14:42:48.443601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.824 qpair failed and we were unable to recover it. 00:29:07.824 [2024-10-14 14:42:48.443811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.824 [2024-10-14 14:42:48.443821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.824 qpair failed and we were unable to recover it. 00:29:07.824 [2024-10-14 14:42:48.444134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.824 [2024-10-14 14:42:48.444144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.824 qpair failed and we were unable to recover it. 
00:29:07.824 [2024-10-14 14:42:48.444463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.824 [2024-10-14 14:42:48.444473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.824 qpair failed and we were unable to recover it. 00:29:07.824 [2024-10-14 14:42:48.444681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.824 [2024-10-14 14:42:48.444690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.824 qpair failed and we were unable to recover it. 00:29:07.824 [2024-10-14 14:42:48.445020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.824 [2024-10-14 14:42:48.445030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.824 qpair failed and we were unable to recover it. 00:29:07.824 14:42:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.824 [2024-10-14 14:42:48.445348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.824 [2024-10-14 14:42:48.445359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.824 qpair failed and we were unable to recover it. 
00:29:07.824 14:42:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:07.824 [2024-10-14 14:42:48.445675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.824 [2024-10-14 14:42:48.445686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.824 qpair failed and we were unable to recover it. 00:29:07.824 14:42:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.824 [2024-10-14 14:42:48.446002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.824 [2024-10-14 14:42:48.446013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.824 qpair failed and we were unable to recover it. 00:29:07.824 14:42:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:07.824 [2024-10-14 14:42:48.446299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.824 [2024-10-14 14:42:48.446309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.824 qpair failed and we were unable to recover it. 00:29:07.824 [2024-10-14 14:42:48.446557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.824 [2024-10-14 14:42:48.446567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.824 qpair failed and we were unable to recover it. 
00:29:07.824 [2024-10-14 14:42:48.446891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.824 [2024-10-14 14:42:48.446901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.824 qpair failed and we were unable to recover it. 00:29:07.824 [2024-10-14 14:42:48.447299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.824 [2024-10-14 14:42:48.447309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.824 qpair failed and we were unable to recover it. 00:29:07.824 [2024-10-14 14:42:48.447618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.824 [2024-10-14 14:42:48.447627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.824 qpair failed and we were unable to recover it. 00:29:07.824 [2024-10-14 14:42:48.447819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.824 [2024-10-14 14:42:48.447837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.824 qpair failed and we were unable to recover it. 00:29:07.824 [2024-10-14 14:42:48.448029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.824 [2024-10-14 14:42:48.448039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.824 qpair failed and we were unable to recover it. 
00:29:07.824 [2024-10-14 14:42:48.448352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.824 [2024-10-14 14:42:48.448363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.824 qpair failed and we were unable to recover it. 00:29:07.824 [2024-10-14 14:42:48.448530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.824 [2024-10-14 14:42:48.448540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.824 qpair failed and we were unable to recover it. 00:29:07.824 [2024-10-14 14:42:48.448703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.824 [2024-10-14 14:42:48.448714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.824 qpair failed and we were unable to recover it. 00:29:07.824 [2024-10-14 14:42:48.448907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.824 [2024-10-14 14:42:48.448918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.824 qpair failed and we were unable to recover it. 00:29:07.824 [2024-10-14 14:42:48.449078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.824 [2024-10-14 14:42:48.449089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.824 qpair failed and we were unable to recover it. 
00:29:07.824 [2024-10-14 14:42:48.449364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.824 [2024-10-14 14:42:48.449374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.824 qpair failed and we were unable to recover it. 00:29:07.824 [2024-10-14 14:42:48.449565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.824 [2024-10-14 14:42:48.449575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.824 qpair failed and we were unable to recover it. 00:29:07.824 [2024-10-14 14:42:48.449761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.824 [2024-10-14 14:42:48.449771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.824 qpair failed and we were unable to recover it. 00:29:07.824 [2024-10-14 14:42:48.450058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.825 [2024-10-14 14:42:48.450071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.825 qpair failed and we were unable to recover it. 00:29:07.825 [2024-10-14 14:42:48.450257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.825 [2024-10-14 14:42:48.450267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8de550 with addr=10.0.0.2, port=4420 00:29:07.825 qpair failed and we were unable to recover it. 
00:29:07.825 [2024-10-14 14:42:48.450734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.825 [2024-10-14 14:42:48.450763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.825 qpair failed and we were unable to recover it. 00:29:07.825 [2024-10-14 14:42:48.450941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.825 [2024-10-14 14:42:48.450954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.825 qpair failed and we were unable to recover it. 00:29:07.825 [2024-10-14 14:42:48.451335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.825 [2024-10-14 14:42:48.451363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.825 qpair failed and we were unable to recover it. 00:29:07.825 [2024-10-14 14:42:48.451574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.825 [2024-10-14 14:42:48.451582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.825 qpair failed and we were unable to recover it. 00:29:07.825 [2024-10-14 14:42:48.451740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.825 [2024-10-14 14:42:48.451747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.825 qpair failed and we were unable to recover it. 
00:29:07.825 [2024-10-14 14:42:48.451937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.825 [2024-10-14 14:42:48.451945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.825 qpair failed and we were unable to recover it. 00:29:07.825 [2024-10-14 14:42:48.452152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.825 [2024-10-14 14:42:48.452160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.825 qpair failed and we were unable to recover it. 00:29:07.825 [2024-10-14 14:42:48.452336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.825 [2024-10-14 14:42:48.452344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.825 qpair failed and we were unable to recover it. 00:29:07.825 [2024-10-14 14:42:48.452509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.825 [2024-10-14 14:42:48.452515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.825 qpair failed and we were unable to recover it. 00:29:07.825 [2024-10-14 14:42:48.452672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.825 [2024-10-14 14:42:48.452679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.825 qpair failed and we were unable to recover it. 
00:29:07.825 [2024-10-14 14:42:48.452999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.825 [2024-10-14 14:42:48.453006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.825 qpair failed and we were unable to recover it. 00:29:07.825 [2024-10-14 14:42:48.453281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.825 [2024-10-14 14:42:48.453288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.825 qpair failed and we were unable to recover it. 00:29:07.825 [2024-10-14 14:42:48.453517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.825 [2024-10-14 14:42:48.453525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.825 qpair failed and we were unable to recover it. 00:29:07.825 [2024-10-14 14:42:48.453829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.825 [2024-10-14 14:42:48.453835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.825 qpair failed and we were unable to recover it. 00:29:07.825 [2024-10-14 14:42:48.454127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.825 [2024-10-14 14:42:48.454134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.825 qpair failed and we were unable to recover it. 
00:29:07.825 [2024-10-14 14:42:48.454455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.825 [2024-10-14 14:42:48.454462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.825 qpair failed and we were unable to recover it. 00:29:07.825 [2024-10-14 14:42:48.454785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.825 [2024-10-14 14:42:48.454792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.825 qpair failed and we were unable to recover it. 00:29:07.825 [2024-10-14 14:42:48.455126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.825 [2024-10-14 14:42:48.455134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.825 qpair failed and we were unable to recover it. 00:29:07.825 [2024-10-14 14:42:48.455397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.825 [2024-10-14 14:42:48.455405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.825 qpair failed and we were unable to recover it. 00:29:07.825 [2024-10-14 14:42:48.455589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.825 [2024-10-14 14:42:48.455596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.825 qpair failed and we were unable to recover it. 
00:29:07.825 [2024-10-14 14:42:48.455865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.825 [2024-10-14 14:42:48.455871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.825 qpair failed and we were unable to recover it. 00:29:07.825 [2024-10-14 14:42:48.456185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.825 [2024-10-14 14:42:48.456192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.825 qpair failed and we were unable to recover it. 00:29:07.825 [2024-10-14 14:42:48.456545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.825 [2024-10-14 14:42:48.456552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.825 qpair failed and we were unable to recover it. 00:29:07.825 [2024-10-14 14:42:48.456900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.825 [2024-10-14 14:42:48.456907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.825 qpair failed and we were unable to recover it. 00:29:07.825 [2024-10-14 14:42:48.457317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.825 [2024-10-14 14:42:48.457325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.825 qpair failed and we were unable to recover it. 
00:29:07.825 14:42:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.825 [2024-10-14 14:42:48.457626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.825 [2024-10-14 14:42:48.457633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.825 qpair failed and we were unable to recover it. 00:29:07.825 14:42:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:07.825 [2024-10-14 14:42:48.457932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.825 [2024-10-14 14:42:48.457939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.825 qpair failed and we were unable to recover it. 00:29:07.825 14:42:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.825 [2024-10-14 14:42:48.458110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.825 [2024-10-14 14:42:48.458125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.825 qpair failed and we were unable to recover it. 00:29:07.825 14:42:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:07.825 [2024-10-14 14:42:48.458541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.825 [2024-10-14 14:42:48.458548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.825 qpair failed and we were unable to recover it. 
00:29:07.825 [2024-10-14 14:42:48.458848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.825 [2024-10-14 14:42:48.458854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.825 qpair failed and we were unable to recover it. 00:29:07.825 [2024-10-14 14:42:48.459020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.825 [2024-10-14 14:42:48.459027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.825 qpair failed and we were unable to recover it. 00:29:07.825 [2024-10-14 14:42:48.459319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.825 [2024-10-14 14:42:48.459327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.825 qpair failed and we were unable to recover it. 00:29:07.825 [2024-10-14 14:42:48.459590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.826 [2024-10-14 14:42:48.459597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.826 qpair failed and we were unable to recover it. 00:29:07.826 [2024-10-14 14:42:48.459920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.826 [2024-10-14 14:42:48.459927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420 00:29:07.826 qpair failed and we were unable to recover it. 
00:29:07.826 [2024-10-14 14:42:48.460259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.826 [2024-10-14 14:42:48.460266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.826 qpair failed and we were unable to recover it.
00:29:07.826 [2024-10-14 14:42:48.460602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.826 [2024-10-14 14:42:48.460609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.826 qpair failed and we were unable to recover it.
00:29:07.826 [2024-10-14 14:42:48.460896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.826 [2024-10-14 14:42:48.460903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.826 qpair failed and we were unable to recover it.
00:29:07.826 [2024-10-14 14:42:48.461217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.826 [2024-10-14 14:42:48.461224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.826 qpair failed and we were unable to recover it.
00:29:07.826 [2024-10-14 14:42:48.461552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.826 [2024-10-14 14:42:48.461559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.826 qpair failed and we were unable to recover it.
00:29:07.826 [2024-10-14 14:42:48.461888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.826 [2024-10-14 14:42:48.461897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.826 qpair failed and we were unable to recover it.
00:29:07.826 [2024-10-14 14:42:48.462210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.826 [2024-10-14 14:42:48.462218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.826 qpair failed and we were unable to recover it.
00:29:07.826 [2024-10-14 14:42:48.462533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.826 [2024-10-14 14:42:48.462541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.826 qpair failed and we were unable to recover it.
00:29:07.826 [2024-10-14 14:42:48.462864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.826 [2024-10-14 14:42:48.462872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.826 qpair failed and we were unable to recover it.
00:29:07.826 [2024-10-14 14:42:48.463191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.826 [2024-10-14 14:42:48.463198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.826 qpair failed and we were unable to recover it.
00:29:07.826 [2024-10-14 14:42:48.463525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.826 [2024-10-14 14:42:48.463532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.826 qpair failed and we were unable to recover it.
00:29:07.826 [2024-10-14 14:42:48.463826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.826 [2024-10-14 14:42:48.463832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.826 qpair failed and we were unable to recover it.
00:29:07.826 [2024-10-14 14:42:48.464005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.826 [2024-10-14 14:42:48.464012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.826 qpair failed and we were unable to recover it.
00:29:07.826 [2024-10-14 14:42:48.464298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.826 [2024-10-14 14:42:48.464304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.826 qpair failed and we were unable to recover it.
00:29:07.826 [2024-10-14 14:42:48.464609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.826 [2024-10-14 14:42:48.464615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.826 qpair failed and we were unable to recover it.
00:29:07.826 [2024-10-14 14:42:48.464929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.826 [2024-10-14 14:42:48.464935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.826 qpair failed and we were unable to recover it.
00:29:07.826 [2024-10-14 14:42:48.465273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.826 [2024-10-14 14:42:48.465280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.826 qpair failed and we were unable to recover it.
00:29:07.826 [2024-10-14 14:42:48.465571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.826 [2024-10-14 14:42:48.465579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.826 qpair failed and we were unable to recover it.
00:29:07.826 [2024-10-14 14:42:48.465750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.826 [2024-10-14 14:42:48.465758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.826 qpair failed and we were unable to recover it.
00:29:07.826 [2024-10-14 14:42:48.465928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.826 [2024-10-14 14:42:48.465935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.826 qpair failed and we were unable to recover it.
00:29:07.826 [2024-10-14 14:42:48.466183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.826 [2024-10-14 14:42:48.466191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.826 qpair failed and we were unable to recover it.
00:29:07.826 [2024-10-14 14:42:48.466379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.826 [2024-10-14 14:42:48.466386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.826 qpair failed and we were unable to recover it.
00:29:07.826 [2024-10-14 14:42:48.466593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.826 [2024-10-14 14:42:48.466601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.826 qpair failed and we were unable to recover it.
00:29:07.826 [2024-10-14 14:42:48.466916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.826 [2024-10-14 14:42:48.466923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.826 qpair failed and we were unable to recover it.
00:29:07.826 [2024-10-14 14:42:48.467233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.826 [2024-10-14 14:42:48.467240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.826 qpair failed and we were unable to recover it.
00:29:07.826 [2024-10-14 14:42:48.467495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.826 [2024-10-14 14:42:48.467501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.826 qpair failed and we were unable to recover it.
00:29:07.826 [2024-10-14 14:42:48.467797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.826 [2024-10-14 14:42:48.467804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.826 qpair failed and we were unable to recover it.
00:29:07.826 [2024-10-14 14:42:48.468021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.826 [2024-10-14 14:42:48.468035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.826 qpair failed and we were unable to recover it.
00:29:07.826 [2024-10-14 14:42:48.468376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.826 [2024-10-14 14:42:48.468383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.826 qpair failed and we were unable to recover it.
00:29:07.826 [2024-10-14 14:42:48.468670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.826 [2024-10-14 14:42:48.468677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.826 qpair failed and we were unable to recover it.
00:29:07.826 [2024-10-14 14:42:48.468934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.826 [2024-10-14 14:42:48.468941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.826 qpair failed and we were unable to recover it.
00:29:07.826 [2024-10-14 14:42:48.469143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.826 [2024-10-14 14:42:48.469150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.826 qpair failed and we were unable to recover it.
00:29:07.826 [2024-10-14 14:42:48.469196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.826 [2024-10-14 14:42:48.469202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.826 qpair failed and we were unable to recover it.
00:29:07.826 14:42:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:07.826 [2024-10-14 14:42:48.469501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.826 [2024-10-14 14:42:48.469508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.826 qpair failed and we were unable to recover it.
00:29:07.826 [2024-10-14 14:42:48.469681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.826 [2024-10-14 14:42:48.469688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.826 qpair failed and we were unable to recover it.
00:29:07.826 14:42:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:07.826 [2024-10-14 14:42:48.469919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.826 [2024-10-14 14:42:48.469927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.826 qpair failed and we were unable to recover it.
00:29:07.826 14:42:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:07.826 [2024-10-14 14:42:48.470099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.826 [2024-10-14 14:42:48.470106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.826 qpair failed and we were unable to recover it.
00:29:07.827 [2024-10-14 14:42:48.470154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.827 [2024-10-14 14:42:48.470161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.827 qpair failed and we were unable to recover it.
00:29:07.827 14:42:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:07.827 [2024-10-14 14:42:48.470310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.827 [2024-10-14 14:42:48.470317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.827 qpair failed and we were unable to recover it.
00:29:07.827 [2024-10-14 14:42:48.470494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.827 [2024-10-14 14:42:48.470502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.827 qpair failed and we were unable to recover it.
00:29:07.827 [2024-10-14 14:42:48.470850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.827 [2024-10-14 14:42:48.470857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.827 qpair failed and we were unable to recover it.
00:29:07.827 [2024-10-14 14:42:48.470934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.827 [2024-10-14 14:42:48.470940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.827 qpair failed and we were unable to recover it.
00:29:07.827 [2024-10-14 14:42:48.471081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.827 [2024-10-14 14:42:48.471089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.827 qpair failed and we were unable to recover it.
00:29:07.827 [2024-10-14 14:42:48.471130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.827 [2024-10-14 14:42:48.471138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.827 qpair failed and we were unable to recover it.
00:29:07.827 [2024-10-14 14:42:48.471422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.827 [2024-10-14 14:42:48.471429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.827 qpair failed and we were unable to recover it.
00:29:07.827 [2024-10-14 14:42:48.471747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.827 [2024-10-14 14:42:48.471754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.827 qpair failed and we were unable to recover it.
00:29:07.827 [2024-10-14 14:42:48.472086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.827 [2024-10-14 14:42:48.472093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.827 qpair failed and we were unable to recover it.
00:29:07.827 [2024-10-14 14:42:48.472385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.827 [2024-10-14 14:42:48.472392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.827 qpair failed and we were unable to recover it.
00:29:07.827 [2024-10-14 14:42:48.472757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.827 [2024-10-14 14:42:48.472764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.827 qpair failed and we were unable to recover it.
00:29:07.827 [2024-10-14 14:42:48.473071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.827 [2024-10-14 14:42:48.473078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.827 qpair failed and we were unable to recover it.
00:29:07.827 [2024-10-14 14:42:48.473409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.827 [2024-10-14 14:42:48.473416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.827 qpair failed and we were unable to recover it.
00:29:07.827 [2024-10-14 14:42:48.473744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.827 [2024-10-14 14:42:48.473750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.827 qpair failed and we were unable to recover it.
00:29:07.827 [2024-10-14 14:42:48.474013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.827 [2024-10-14 14:42:48.474020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.827 qpair failed and we were unable to recover it.
00:29:07.827 [2024-10-14 14:42:48.474311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.827 [2024-10-14 14:42:48.474318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.827 qpair failed and we were unable to recover it.
00:29:07.827 [2024-10-14 14:42:48.474527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.827 [2024-10-14 14:42:48.474533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.827 qpair failed and we were unable to recover it.
00:29:07.827 [2024-10-14 14:42:48.474785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.827 [2024-10-14 14:42:48.474791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.827 qpair failed and we were unable to recover it.
00:29:07.827 [2024-10-14 14:42:48.475104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.827 [2024-10-14 14:42:48.475111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.827 qpair failed and we were unable to recover it.
00:29:07.827 [2024-10-14 14:42:48.475426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.827 [2024-10-14 14:42:48.475433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.827 qpair failed and we were unable to recover it.
00:29:07.827 [2024-10-14 14:42:48.475757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.827 [2024-10-14 14:42:48.475764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.827 qpair failed and we were unable to recover it.
00:29:07.827 [2024-10-14 14:42:48.475937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.827 [2024-10-14 14:42:48.475944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.827 qpair failed and we were unable to recover it.
00:29:07.827 [2024-10-14 14:42:48.476259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.827 [2024-10-14 14:42:48.476266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe63c000b90 with addr=10.0.0.2, port=4420
00:29:07.827 qpair failed and we were unable to recover it.
00:29:07.827 [2024-10-14 14:42:48.476537] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:07.827 14:42:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:07.827 14:42:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:29:07.827 14:42:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:07.827 14:42:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:07.827 [2024-10-14 14:42:48.487265] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.827 [2024-10-14 14:42:48.487334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.827 [2024-10-14 14:42:48.487348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.827 [2024-10-14 14:42:48.487354] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.827 [2024-10-14 14:42:48.487358] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:07.827 [2024-10-14 14:42:48.487373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:07.827 qpair failed and we were unable to recover it.
00:29:07.827 14:42:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:07.827 14:42:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3577633
00:29:07.827 [2024-10-14 14:42:48.497159] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.827 [2024-10-14 14:42:48.497215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.827 [2024-10-14 14:42:48.497226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.827 [2024-10-14 14:42:48.497231] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.827 [2024-10-14 14:42:48.497236] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:07.827 [2024-10-14 14:42:48.497246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:07.827 qpair failed and we were unable to recover it.
00:29:07.827 [2024-10-14 14:42:48.507153] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.827 [2024-10-14 14:42:48.507206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.827 [2024-10-14 14:42:48.507216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.827 [2024-10-14 14:42:48.507221] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.827 [2024-10-14 14:42:48.507225] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:07.827 [2024-10-14 14:42:48.507235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:07.827 qpair failed and we were unable to recover it.
00:29:07.827 [2024-10-14 14:42:48.517098] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.827 [2024-10-14 14:42:48.517153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.827 [2024-10-14 14:42:48.517162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.827 [2024-10-14 14:42:48.517167] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.827 [2024-10-14 14:42:48.517172] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:07.827 [2024-10-14 14:42:48.517181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:07.827 qpair failed and we were unable to recover it.
00:29:08.090 [2024-10-14 14:42:48.527021] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.090 [2024-10-14 14:42:48.527112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.090 [2024-10-14 14:42:48.527122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.090 [2024-10-14 14:42:48.527127] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.090 [2024-10-14 14:42:48.527131] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:08.090 [2024-10-14 14:42:48.527141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:08.090 qpair failed and we were unable to recover it.
00:29:08.090 [2024-10-14 14:42:48.537151] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.090 [2024-10-14 14:42:48.537250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.090 [2024-10-14 14:42:48.537260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.090 [2024-10-14 14:42:48.537264] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.090 [2024-10-14 14:42:48.537269] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:08.090 [2024-10-14 14:42:48.537278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:08.090 qpair failed and we were unable to recover it.
00:29:08.090 [2024-10-14 14:42:48.547176] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.090 [2024-10-14 14:42:48.547224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.090 [2024-10-14 14:42:48.547234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.090 [2024-10-14 14:42:48.547241] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.090 [2024-10-14 14:42:48.547246] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:08.090 [2024-10-14 14:42:48.547256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:08.090 qpair failed and we were unable to recover it.
00:29:08.090 [2024-10-14 14:42:48.557200] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.090 [2024-10-14 14:42:48.557254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.090 [2024-10-14 14:42:48.557263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.090 [2024-10-14 14:42:48.557268] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.090 [2024-10-14 14:42:48.557272] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:08.090 [2024-10-14 14:42:48.557282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:08.090 qpair failed and we were unable to recover it.
00:29:08.091 [2024-10-14 14:42:48.567282] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.091 [2024-10-14 14:42:48.567344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.091 [2024-10-14 14:42:48.567353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.091 [2024-10-14 14:42:48.567358] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.091 [2024-10-14 14:42:48.567362] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:08.091 [2024-10-14 14:42:48.567372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:08.091 qpair failed and we were unable to recover it.
00:29:08.091 [2024-10-14 14:42:48.577253] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.091 [2024-10-14 14:42:48.577300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.091 [2024-10-14 14:42:48.577310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.091 [2024-10-14 14:42:48.577315] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.091 [2024-10-14 14:42:48.577320] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:08.091 [2024-10-14 14:42:48.577329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:08.091 qpair failed and we were unable to recover it.
00:29:08.091 [2024-10-14 14:42:48.587301] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.091 [2024-10-14 14:42:48.587349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.091 [2024-10-14 14:42:48.587359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.091 [2024-10-14 14:42:48.587364] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.091 [2024-10-14 14:42:48.587369] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:08.091 [2024-10-14 14:42:48.587378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:08.091 qpair failed and we were unable to recover it. 
00:29:08.091 [2024-10-14 14:42:48.597316] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.091 [2024-10-14 14:42:48.597369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.091 [2024-10-14 14:42:48.597380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.091 [2024-10-14 14:42:48.597384] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.091 [2024-10-14 14:42:48.597389] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:08.091 [2024-10-14 14:42:48.597398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:08.091 qpair failed and we were unable to recover it. 
00:29:08.091 [2024-10-14 14:42:48.607223] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.091 [2024-10-14 14:42:48.607324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.091 [2024-10-14 14:42:48.607334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.091 [2024-10-14 14:42:48.607338] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.091 [2024-10-14 14:42:48.607343] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:08.091 [2024-10-14 14:42:48.607352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:08.091 qpair failed and we were unable to recover it. 
00:29:08.091 [2024-10-14 14:42:48.617343] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.091 [2024-10-14 14:42:48.617402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.091 [2024-10-14 14:42:48.617412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.091 [2024-10-14 14:42:48.617417] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.091 [2024-10-14 14:42:48.617421] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:08.091 [2024-10-14 14:42:48.617431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:08.091 qpair failed and we were unable to recover it. 
00:29:08.091 [2024-10-14 14:42:48.627398] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.091 [2024-10-14 14:42:48.627448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.091 [2024-10-14 14:42:48.627458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.091 [2024-10-14 14:42:48.627462] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.091 [2024-10-14 14:42:48.627467] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:08.091 [2024-10-14 14:42:48.627476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:08.091 qpair failed and we were unable to recover it. 
00:29:08.091 [2024-10-14 14:42:48.637422] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.091 [2024-10-14 14:42:48.637509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.091 [2024-10-14 14:42:48.637518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.091 [2024-10-14 14:42:48.637526] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.091 [2024-10-14 14:42:48.637530] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:08.091 [2024-10-14 14:42:48.637540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:08.091 qpair failed and we were unable to recover it. 
00:29:08.091 [2024-10-14 14:42:48.647452] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.091 [2024-10-14 14:42:48.647505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.091 [2024-10-14 14:42:48.647515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.091 [2024-10-14 14:42:48.647520] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.091 [2024-10-14 14:42:48.647524] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:08.091 [2024-10-14 14:42:48.647534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:08.091 qpair failed and we were unable to recover it. 
00:29:08.091 [2024-10-14 14:42:48.657465] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.091 [2024-10-14 14:42:48.657511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.091 [2024-10-14 14:42:48.657521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.091 [2024-10-14 14:42:48.657526] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.091 [2024-10-14 14:42:48.657530] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:08.091 [2024-10-14 14:42:48.657539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:08.091 qpair failed and we were unable to recover it. 
00:29:08.091 [2024-10-14 14:42:48.667477] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.091 [2024-10-14 14:42:48.667559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.091 [2024-10-14 14:42:48.667569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.091 [2024-10-14 14:42:48.667573] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.091 [2024-10-14 14:42:48.667578] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:08.091 [2024-10-14 14:42:48.667587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:08.091 qpair failed and we were unable to recover it. 
00:29:08.091 [2024-10-14 14:42:48.677546] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.091 [2024-10-14 14:42:48.677595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.091 [2024-10-14 14:42:48.677605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.091 [2024-10-14 14:42:48.677609] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.091 [2024-10-14 14:42:48.677614] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:08.091 [2024-10-14 14:42:48.677624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:08.091 qpair failed and we were unable to recover it. 
00:29:08.091 [2024-10-14 14:42:48.687559] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.091 [2024-10-14 14:42:48.687616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.091 [2024-10-14 14:42:48.687625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.091 [2024-10-14 14:42:48.687630] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.091 [2024-10-14 14:42:48.687634] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:08.091 [2024-10-14 14:42:48.687644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:08.091 qpair failed and we were unable to recover it. 
00:29:08.091 [2024-10-14 14:42:48.697708] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.091 [2024-10-14 14:42:48.697776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.091 [2024-10-14 14:42:48.697786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.091 [2024-10-14 14:42:48.697791] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.091 [2024-10-14 14:42:48.697795] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:08.091 [2024-10-14 14:42:48.697805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:08.091 qpair failed and we were unable to recover it. 
00:29:08.091 [2024-10-14 14:42:48.707706] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.092 [2024-10-14 14:42:48.707772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.092 [2024-10-14 14:42:48.707781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.092 [2024-10-14 14:42:48.707786] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.092 [2024-10-14 14:42:48.707790] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:08.092 [2024-10-14 14:42:48.707800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:08.092 qpair failed and we were unable to recover it. 
00:29:08.092 [2024-10-14 14:42:48.717675] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.092 [2024-10-14 14:42:48.717747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.092 [2024-10-14 14:42:48.717757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.092 [2024-10-14 14:42:48.717762] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.092 [2024-10-14 14:42:48.717766] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:08.092 [2024-10-14 14:42:48.717776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:08.092 qpair failed and we were unable to recover it. 
00:29:08.092 [2024-10-14 14:42:48.727717] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.092 [2024-10-14 14:42:48.727770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.092 [2024-10-14 14:42:48.727782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.092 [2024-10-14 14:42:48.727787] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.092 [2024-10-14 14:42:48.727791] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:08.092 [2024-10-14 14:42:48.727801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:08.092 qpair failed and we were unable to recover it. 
00:29:08.092 [2024-10-14 14:42:48.737683] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.092 [2024-10-14 14:42:48.737772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.092 [2024-10-14 14:42:48.737781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.092 [2024-10-14 14:42:48.737786] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.092 [2024-10-14 14:42:48.737790] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:08.092 [2024-10-14 14:42:48.737800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:08.092 qpair failed and we were unable to recover it. 
00:29:08.092 [2024-10-14 14:42:48.747689] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.092 [2024-10-14 14:42:48.747738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.092 [2024-10-14 14:42:48.747748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.092 [2024-10-14 14:42:48.747753] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.092 [2024-10-14 14:42:48.747757] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:08.092 [2024-10-14 14:42:48.747767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:08.092 qpair failed and we were unable to recover it. 
00:29:08.092 [2024-10-14 14:42:48.757711] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.092 [2024-10-14 14:42:48.757771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.092 [2024-10-14 14:42:48.757780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.092 [2024-10-14 14:42:48.757785] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.092 [2024-10-14 14:42:48.757790] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:08.092 [2024-10-14 14:42:48.757799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:08.092 qpair failed and we were unable to recover it. 
00:29:08.092 [2024-10-14 14:42:48.767776] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.092 [2024-10-14 14:42:48.767827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.092 [2024-10-14 14:42:48.767837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.092 [2024-10-14 14:42:48.767841] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.092 [2024-10-14 14:42:48.767846] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:08.092 [2024-10-14 14:42:48.767860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:08.092 qpair failed and we were unable to recover it. 
00:29:08.092 [2024-10-14 14:42:48.777793] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.092 [2024-10-14 14:42:48.777839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.092 [2024-10-14 14:42:48.777848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.092 [2024-10-14 14:42:48.777853] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.092 [2024-10-14 14:42:48.777857] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:08.092 [2024-10-14 14:42:48.777867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:08.092 qpair failed and we were unable to recover it. 
00:29:08.092 [2024-10-14 14:42:48.787814] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.092 [2024-10-14 14:42:48.787863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.092 [2024-10-14 14:42:48.787882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.092 [2024-10-14 14:42:48.787888] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.092 [2024-10-14 14:42:48.787893] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:08.092 [2024-10-14 14:42:48.787906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:08.092 qpair failed and we were unable to recover it. 
00:29:08.092 [2024-10-14 14:42:48.797843] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.092 [2024-10-14 14:42:48.797893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.092 [2024-10-14 14:42:48.797904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.092 [2024-10-14 14:42:48.797909] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.092 [2024-10-14 14:42:48.797914] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:08.092 [2024-10-14 14:42:48.797925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:08.092 qpair failed and we were unable to recover it. 
00:29:08.092 [2024-10-14 14:42:48.807876] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.092 [2024-10-14 14:42:48.807932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.092 [2024-10-14 14:42:48.807952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.092 [2024-10-14 14:42:48.807958] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.092 [2024-10-14 14:42:48.807962] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:08.092 [2024-10-14 14:42:48.807976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:08.092 qpair failed and we were unable to recover it. 
00:29:08.092 [2024-10-14 14:42:48.817768] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.092 [2024-10-14 14:42:48.817835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.092 [2024-10-14 14:42:48.817850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.092 [2024-10-14 14:42:48.817856] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.092 [2024-10-14 14:42:48.817860] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:08.092 [2024-10-14 14:42:48.817871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:08.092 qpair failed and we were unable to recover it. 
00:29:08.355 [2024-10-14 14:42:48.827924] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.355 [2024-10-14 14:42:48.827993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.355 [2024-10-14 14:42:48.828004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.355 [2024-10-14 14:42:48.828009] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.355 [2024-10-14 14:42:48.828013] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:08.355 [2024-10-14 14:42:48.828024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:08.355 qpair failed and we were unable to recover it. 
00:29:08.355 [2024-10-14 14:42:48.837935] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.355 [2024-10-14 14:42:48.837989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.355 [2024-10-14 14:42:48.838007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.355 [2024-10-14 14:42:48.838013] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.355 [2024-10-14 14:42:48.838018] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:08.355 [2024-10-14 14:42:48.838032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:08.355 qpair failed and we were unable to recover it. 
00:29:08.355 [2024-10-14 14:42:48.847978] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.355 [2024-10-14 14:42:48.848028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.355 [2024-10-14 14:42:48.848039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.355 [2024-10-14 14:42:48.848044] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.355 [2024-10-14 14:42:48.848048] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:08.355 [2024-10-14 14:42:48.848059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:08.355 qpair failed and we were unable to recover it. 
00:29:08.355 [2024-10-14 14:42:48.857901] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.356 [2024-10-14 14:42:48.857953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.356 [2024-10-14 14:42:48.857963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.356 [2024-10-14 14:42:48.857968] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.356 [2024-10-14 14:42:48.857972] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:08.356 [2024-10-14 14:42:48.857986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:08.356 qpair failed and we were unable to recover it. 
00:29:08.356 [2024-10-14 14:42:48.867906] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.356 [2024-10-14 14:42:48.867953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.356 [2024-10-14 14:42:48.867963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.356 [2024-10-14 14:42:48.867968] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.356 [2024-10-14 14:42:48.867972] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:08.356 [2024-10-14 14:42:48.867982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:08.356 qpair failed and we were unable to recover it. 
00:29:08.356 [2024-10-14 14:42:48.878054] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.356 [2024-10-14 14:42:48.878110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.356 [2024-10-14 14:42:48.878120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.356 [2024-10-14 14:42:48.878125] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.356 [2024-10-14 14:42:48.878129] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:08.356 [2024-10-14 14:42:48.878139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:08.356 qpair failed and we were unable to recover it. 
00:29:08.356 [2024-10-14 14:42:48.888083] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.356 [2024-10-14 14:42:48.888130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.356 [2024-10-14 14:42:48.888139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.356 [2024-10-14 14:42:48.888144] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.356 [2024-10-14 14:42:48.888149] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:08.356 [2024-10-14 14:42:48.888158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:08.356 qpair failed and we were unable to recover it.
00:29:08.356 [2024-10-14 14:42:48.898124] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.356 [2024-10-14 14:42:48.898171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.356 [2024-10-14 14:42:48.898180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.356 [2024-10-14 14:42:48.898185] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.356 [2024-10-14 14:42:48.898189] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:08.356 [2024-10-14 14:42:48.898199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:08.356 qpair failed and we were unable to recover it.
00:29:08.356 [2024-10-14 14:42:48.908160] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.356 [2024-10-14 14:42:48.908227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.356 [2024-10-14 14:42:48.908237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.356 [2024-10-14 14:42:48.908242] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.356 [2024-10-14 14:42:48.908246] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:08.356 [2024-10-14 14:42:48.908256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:08.356 qpair failed and we were unable to recover it.
00:29:08.356 [2024-10-14 14:42:48.918205] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.356 [2024-10-14 14:42:48.918253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.356 [2024-10-14 14:42:48.918263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.356 [2024-10-14 14:42:48.918268] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.356 [2024-10-14 14:42:48.918272] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:08.356 [2024-10-14 14:42:48.918282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:08.356 qpair failed and we were unable to recover it.
00:29:08.356 [2024-10-14 14:42:48.928262] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.356 [2024-10-14 14:42:48.928360] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.356 [2024-10-14 14:42:48.928370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.356 [2024-10-14 14:42:48.928375] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.356 [2024-10-14 14:42:48.928380] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:08.356 [2024-10-14 14:42:48.928391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:08.356 qpair failed and we were unable to recover it.
00:29:08.356 [2024-10-14 14:42:48.938115] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.356 [2024-10-14 14:42:48.938168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.356 [2024-10-14 14:42:48.938178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.356 [2024-10-14 14:42:48.938183] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.356 [2024-10-14 14:42:48.938187] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:08.356 [2024-10-14 14:42:48.938197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:08.356 qpair failed and we were unable to recover it.
00:29:08.356 [2024-10-14 14:42:48.948275] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.356 [2024-10-14 14:42:48.948352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.356 [2024-10-14 14:42:48.948362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.356 [2024-10-14 14:42:48.948367] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.356 [2024-10-14 14:42:48.948374] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:08.356 [2024-10-14 14:42:48.948384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:08.356 qpair failed and we were unable to recover it.
00:29:08.356 [2024-10-14 14:42:48.958300] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.356 [2024-10-14 14:42:48.958348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.356 [2024-10-14 14:42:48.958358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.356 [2024-10-14 14:42:48.958362] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.356 [2024-10-14 14:42:48.958367] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:08.356 [2024-10-14 14:42:48.958376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:08.356 qpair failed and we were unable to recover it.
00:29:08.356 [2024-10-14 14:42:48.968241] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.356 [2024-10-14 14:42:48.968292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.356 [2024-10-14 14:42:48.968301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.356 [2024-10-14 14:42:48.968306] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.356 [2024-10-14 14:42:48.968310] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:08.356 [2024-10-14 14:42:48.968320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:08.356 qpair failed and we were unable to recover it.
00:29:08.356 [2024-10-14 14:42:48.978244] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.356 [2024-10-14 14:42:48.978296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.356 [2024-10-14 14:42:48.978306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.356 [2024-10-14 14:42:48.978310] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.356 [2024-10-14 14:42:48.978314] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:08.356 [2024-10-14 14:42:48.978324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:08.356 qpair failed and we were unable to recover it.
00:29:08.356 [2024-10-14 14:42:48.988386] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.356 [2024-10-14 14:42:48.988439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.356 [2024-10-14 14:42:48.988450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.356 [2024-10-14 14:42:48.988457] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.356 [2024-10-14 14:42:48.988462] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:08.356 [2024-10-14 14:42:48.988472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:08.356 qpair failed and we were unable to recover it.
00:29:08.356 [2024-10-14 14:42:48.998284] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.356 [2024-10-14 14:42:48.998341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.357 [2024-10-14 14:42:48.998351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.357 [2024-10-14 14:42:48.998356] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.357 [2024-10-14 14:42:48.998360] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:08.357 [2024-10-14 14:42:48.998370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:08.357 qpair failed and we were unable to recover it.
00:29:08.357 [2024-10-14 14:42:49.008434] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.357 [2024-10-14 14:42:49.008486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.357 [2024-10-14 14:42:49.008495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.357 [2024-10-14 14:42:49.008500] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.357 [2024-10-14 14:42:49.008505] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:08.357 [2024-10-14 14:42:49.008515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:08.357 qpair failed and we were unable to recover it.
00:29:08.357 [2024-10-14 14:42:49.018508] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.357 [2024-10-14 14:42:49.018578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.357 [2024-10-14 14:42:49.018588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.357 [2024-10-14 14:42:49.018593] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.357 [2024-10-14 14:42:49.018597] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:08.357 [2024-10-14 14:42:49.018607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:08.357 qpair failed and we were unable to recover it.
00:29:08.357 [2024-10-14 14:42:49.028362] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.357 [2024-10-14 14:42:49.028406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.357 [2024-10-14 14:42:49.028416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.357 [2024-10-14 14:42:49.028421] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.357 [2024-10-14 14:42:49.028425] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:08.357 [2024-10-14 14:42:49.028435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:08.357 qpair failed and we were unable to recover it.
00:29:08.357 [2024-10-14 14:42:49.038447] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.357 [2024-10-14 14:42:49.038498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.357 [2024-10-14 14:42:49.038507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.357 [2024-10-14 14:42:49.038514] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.357 [2024-10-14 14:42:49.038519] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:08.357 [2024-10-14 14:42:49.038528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:08.357 qpair failed and we were unable to recover it.
00:29:08.357 [2024-10-14 14:42:49.048443] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.357 [2024-10-14 14:42:49.048493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.357 [2024-10-14 14:42:49.048503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.357 [2024-10-14 14:42:49.048508] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.357 [2024-10-14 14:42:49.048512] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:08.357 [2024-10-14 14:42:49.048522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:08.357 qpair failed and we were unable to recover it.
00:29:08.357 [2024-10-14 14:42:49.058562] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.357 [2024-10-14 14:42:49.058608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.357 [2024-10-14 14:42:49.058618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.357 [2024-10-14 14:42:49.058623] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.357 [2024-10-14 14:42:49.058627] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:08.357 [2024-10-14 14:42:49.058637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:08.357 qpair failed and we were unable to recover it.
00:29:08.357 [2024-10-14 14:42:49.068620] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.357 [2024-10-14 14:42:49.068670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.357 [2024-10-14 14:42:49.068679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.357 [2024-10-14 14:42:49.068684] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.357 [2024-10-14 14:42:49.068688] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:08.357 [2024-10-14 14:42:49.068697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:08.357 qpair failed and we were unable to recover it.
00:29:08.357 [2024-10-14 14:42:49.078662] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.357 [2024-10-14 14:42:49.078712] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.357 [2024-10-14 14:42:49.078721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.357 [2024-10-14 14:42:49.078726] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.357 [2024-10-14 14:42:49.078731] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:08.357 [2024-10-14 14:42:49.078741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:08.357 qpair failed and we were unable to recover it.
00:29:08.621 [2024-10-14 14:42:49.088635] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.621 [2024-10-14 14:42:49.088684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.621 [2024-10-14 14:42:49.088694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.621 [2024-10-14 14:42:49.088699] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.621 [2024-10-14 14:42:49.088703] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:08.621 [2024-10-14 14:42:49.088713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:08.621 qpair failed and we were unable to recover it.
00:29:08.621 [2024-10-14 14:42:49.098687] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.621 [2024-10-14 14:42:49.098730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.621 [2024-10-14 14:42:49.098740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.621 [2024-10-14 14:42:49.098745] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.621 [2024-10-14 14:42:49.098749] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:08.621 [2024-10-14 14:42:49.098759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:08.621 qpair failed and we were unable to recover it.
00:29:08.621 [2024-10-14 14:42:49.108719] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.621 [2024-10-14 14:42:49.108766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.621 [2024-10-14 14:42:49.108775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.621 [2024-10-14 14:42:49.108780] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.621 [2024-10-14 14:42:49.108784] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:08.621 [2024-10-14 14:42:49.108794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:08.621 qpair failed and we were unable to recover it.
00:29:08.621 [2024-10-14 14:42:49.118761] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.621 [2024-10-14 14:42:49.118814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.621 [2024-10-14 14:42:49.118824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.621 [2024-10-14 14:42:49.118828] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.621 [2024-10-14 14:42:49.118832] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:08.621 [2024-10-14 14:42:49.118842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:08.621 qpair failed and we were unable to recover it.
00:29:08.621 [2024-10-14 14:42:49.128701] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.621 [2024-10-14 14:42:49.128754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.621 [2024-10-14 14:42:49.128764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.621 [2024-10-14 14:42:49.128771] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.621 [2024-10-14 14:42:49.128776] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:08.621 [2024-10-14 14:42:49.128786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:08.621 qpair failed and we were unable to recover it.
00:29:08.622 [2024-10-14 14:42:49.138827] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.622 [2024-10-14 14:42:49.138874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.622 [2024-10-14 14:42:49.138893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.622 [2024-10-14 14:42:49.138899] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.622 [2024-10-14 14:42:49.138904] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:08.622 [2024-10-14 14:42:49.138917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:08.622 qpair failed and we were unable to recover it.
00:29:08.622 [2024-10-14 14:42:49.148861] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.622 [2024-10-14 14:42:49.148913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.622 [2024-10-14 14:42:49.148932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.622 [2024-10-14 14:42:49.148938] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.622 [2024-10-14 14:42:49.148943] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:08.622 [2024-10-14 14:42:49.148957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:08.622 qpair failed and we were unable to recover it.
00:29:08.622 [2024-10-14 14:42:49.158872] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.622 [2024-10-14 14:42:49.158927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.622 [2024-10-14 14:42:49.158938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.622 [2024-10-14 14:42:49.158943] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.622 [2024-10-14 14:42:49.158948] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:08.622 [2024-10-14 14:42:49.158959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:08.622 qpair failed and we were unable to recover it.
00:29:08.622 [2024-10-14 14:42:49.168911] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.622 [2024-10-14 14:42:49.168962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.622 [2024-10-14 14:42:49.168972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.622 [2024-10-14 14:42:49.168977] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.622 [2024-10-14 14:42:49.168981] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:08.622 [2024-10-14 14:42:49.168991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:08.622 qpair failed and we were unable to recover it.
00:29:08.622 [2024-10-14 14:42:49.178936] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.622 [2024-10-14 14:42:49.178983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.622 [2024-10-14 14:42:49.178994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.622 [2024-10-14 14:42:49.178998] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.622 [2024-10-14 14:42:49.179003] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:08.622 [2024-10-14 14:42:49.179013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:08.622 qpair failed and we were unable to recover it.
00:29:08.622 [2024-10-14 14:42:49.188941] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.622 [2024-10-14 14:42:49.188986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.622 [2024-10-14 14:42:49.188995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.622 [2024-10-14 14:42:49.189000] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.622 [2024-10-14 14:42:49.189005] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:08.622 [2024-10-14 14:42:49.189014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:08.622 qpair failed and we were unable to recover it.
00:29:08.622 [2024-10-14 14:42:49.198984] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.622 [2024-10-14 14:42:49.199042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.622 [2024-10-14 14:42:49.199052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.622 [2024-10-14 14:42:49.199057] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.622 [2024-10-14 14:42:49.199061] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:08.622 [2024-10-14 14:42:49.199075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:08.622 qpair failed and we were unable to recover it.
00:29:08.622 [2024-10-14 14:42:49.208989] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.622 [2024-10-14 14:42:49.209083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.622 [2024-10-14 14:42:49.209093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.622 [2024-10-14 14:42:49.209098] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.622 [2024-10-14 14:42:49.209102] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:08.622 [2024-10-14 14:42:49.209112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:08.622 qpair failed and we were unable to recover it.
00:29:08.622 [2024-10-14 14:42:49.219038] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.622 [2024-10-14 14:42:49.219117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.622 [2024-10-14 14:42:49.219129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.622 [2024-10-14 14:42:49.219134] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.622 [2024-10-14 14:42:49.219139] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:08.622 [2024-10-14 14:42:49.219149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:08.622 qpair failed and we were unable to recover it.
00:29:08.622 [2024-10-14 14:42:49.229077] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.622 [2024-10-14 14:42:49.229121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.622 [2024-10-14 14:42:49.229131] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.622 [2024-10-14 14:42:49.229136] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.622 [2024-10-14 14:42:49.229140] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:08.622 [2024-10-14 14:42:49.229150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:08.622 qpair failed and we were unable to recover it.
00:29:08.622 [2024-10-14 14:42:49.238973] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:08.622 [2024-10-14 14:42:49.239024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:08.622 [2024-10-14 14:42:49.239035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:08.622 [2024-10-14 14:42:49.239040] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:08.622 [2024-10-14 14:42:49.239044] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:08.622 [2024-10-14 14:42:49.239054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:08.622 qpair failed and we were unable to recover it.
00:29:08.622 [2024-10-14 14:42:49.249007] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.622 [2024-10-14 14:42:49.249076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.622 [2024-10-14 14:42:49.249087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.622 [2024-10-14 14:42:49.249092] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.622 [2024-10-14 14:42:49.249096] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:08.622 [2024-10-14 14:42:49.249106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:08.622 qpair failed and we were unable to recover it. 
00:29:08.622 [2024-10-14 14:42:49.259148] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.622 [2024-10-14 14:42:49.259199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.622 [2024-10-14 14:42:49.259210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.622 [2024-10-14 14:42:49.259215] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.622 [2024-10-14 14:42:49.259219] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:08.622 [2024-10-14 14:42:49.259232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:08.622 qpair failed and we were unable to recover it. 
00:29:08.622 [2024-10-14 14:42:49.269173] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.622 [2024-10-14 14:42:49.269232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.622 [2024-10-14 14:42:49.269242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.622 [2024-10-14 14:42:49.269247] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.622 [2024-10-14 14:42:49.269251] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:08.623 [2024-10-14 14:42:49.269261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:08.623 qpair failed and we were unable to recover it. 
00:29:08.623 [2024-10-14 14:42:49.279215] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.623 [2024-10-14 14:42:49.279266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.623 [2024-10-14 14:42:49.279276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.623 [2024-10-14 14:42:49.279280] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.623 [2024-10-14 14:42:49.279285] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:08.623 [2024-10-14 14:42:49.279295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:08.623 qpair failed and we were unable to recover it. 
00:29:08.623 [2024-10-14 14:42:49.289131] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.623 [2024-10-14 14:42:49.289187] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.623 [2024-10-14 14:42:49.289197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.623 [2024-10-14 14:42:49.289202] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.623 [2024-10-14 14:42:49.289206] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:08.623 [2024-10-14 14:42:49.289216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:08.623 qpair failed and we were unable to recover it. 
00:29:08.623 [2024-10-14 14:42:49.299280] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.623 [2024-10-14 14:42:49.299362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.623 [2024-10-14 14:42:49.299371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.623 [2024-10-14 14:42:49.299376] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.623 [2024-10-14 14:42:49.299381] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:08.623 [2024-10-14 14:42:49.299390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:08.623 qpair failed and we were unable to recover it. 
00:29:08.623 [2024-10-14 14:42:49.309310] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.623 [2024-10-14 14:42:49.309357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.623 [2024-10-14 14:42:49.309370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.623 [2024-10-14 14:42:49.309375] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.623 [2024-10-14 14:42:49.309379] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:08.623 [2024-10-14 14:42:49.309390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:08.623 qpair failed and we were unable to recover it. 
00:29:08.623 [2024-10-14 14:42:49.319244] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.623 [2024-10-14 14:42:49.319294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.623 [2024-10-14 14:42:49.319304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.623 [2024-10-14 14:42:49.319309] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.623 [2024-10-14 14:42:49.319313] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:08.623 [2024-10-14 14:42:49.319322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:08.623 qpair failed and we were unable to recover it. 
00:29:08.623 [2024-10-14 14:42:49.329376] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.623 [2024-10-14 14:42:49.329427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.623 [2024-10-14 14:42:49.329437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.623 [2024-10-14 14:42:49.329442] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.623 [2024-10-14 14:42:49.329446] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:08.623 [2024-10-14 14:42:49.329455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:08.623 qpair failed and we were unable to recover it. 
00:29:08.623 [2024-10-14 14:42:49.339384] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.623 [2024-10-14 14:42:49.339486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.623 [2024-10-14 14:42:49.339495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.623 [2024-10-14 14:42:49.339500] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.623 [2024-10-14 14:42:49.339504] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:08.623 [2024-10-14 14:42:49.339514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:08.623 qpair failed and we were unable to recover it. 
00:29:08.623 [2024-10-14 14:42:49.349292] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.623 [2024-10-14 14:42:49.349344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.623 [2024-10-14 14:42:49.349355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.623 [2024-10-14 14:42:49.349360] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.623 [2024-10-14 14:42:49.349365] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:08.623 [2024-10-14 14:42:49.349377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:08.623 qpair failed and we were unable to recover it. 
00:29:08.886 [2024-10-14 14:42:49.359427] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.886 [2024-10-14 14:42:49.359478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.886 [2024-10-14 14:42:49.359488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.886 [2024-10-14 14:42:49.359493] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.886 [2024-10-14 14:42:49.359497] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:08.886 [2024-10-14 14:42:49.359507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:08.886 qpair failed and we were unable to recover it. 
00:29:08.886 [2024-10-14 14:42:49.369356] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.886 [2024-10-14 14:42:49.369406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.886 [2024-10-14 14:42:49.369416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.886 [2024-10-14 14:42:49.369420] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.886 [2024-10-14 14:42:49.369425] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:08.886 [2024-10-14 14:42:49.369435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:08.886 qpair failed and we were unable to recover it. 
00:29:08.886 [2024-10-14 14:42:49.379461] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.886 [2024-10-14 14:42:49.379505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.886 [2024-10-14 14:42:49.379515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.886 [2024-10-14 14:42:49.379520] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.886 [2024-10-14 14:42:49.379524] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:08.886 [2024-10-14 14:42:49.379534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:08.886 qpair failed and we were unable to recover it. 
00:29:08.886 [2024-10-14 14:42:49.389493] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.886 [2024-10-14 14:42:49.389539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.886 [2024-10-14 14:42:49.389549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.886 [2024-10-14 14:42:49.389554] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.886 [2024-10-14 14:42:49.389558] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:08.886 [2024-10-14 14:42:49.389568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:08.886 qpair failed and we were unable to recover it. 
00:29:08.886 [2024-10-14 14:42:49.399578] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.886 [2024-10-14 14:42:49.399626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.886 [2024-10-14 14:42:49.399638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.886 [2024-10-14 14:42:49.399643] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.887 [2024-10-14 14:42:49.399648] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:08.887 [2024-10-14 14:42:49.399657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:08.887 qpair failed and we were unable to recover it. 
00:29:08.887 [2024-10-14 14:42:49.409470] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.887 [2024-10-14 14:42:49.409520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.887 [2024-10-14 14:42:49.409530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.887 [2024-10-14 14:42:49.409535] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.887 [2024-10-14 14:42:49.409540] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:08.887 [2024-10-14 14:42:49.409549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:08.887 qpair failed and we were unable to recover it. 
00:29:08.887 [2024-10-14 14:42:49.419586] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.887 [2024-10-14 14:42:49.419638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.887 [2024-10-14 14:42:49.419649] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.887 [2024-10-14 14:42:49.419653] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.887 [2024-10-14 14:42:49.419658] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:08.887 [2024-10-14 14:42:49.419668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:08.887 qpair failed and we were unable to recover it. 
00:29:08.887 [2024-10-14 14:42:49.429529] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.887 [2024-10-14 14:42:49.429572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.887 [2024-10-14 14:42:49.429582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.887 [2024-10-14 14:42:49.429587] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.887 [2024-10-14 14:42:49.429591] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:08.887 [2024-10-14 14:42:49.429601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:08.887 qpair failed and we were unable to recover it. 
00:29:08.887 [2024-10-14 14:42:49.439663] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.887 [2024-10-14 14:42:49.439710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.887 [2024-10-14 14:42:49.439719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.887 [2024-10-14 14:42:49.439724] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.887 [2024-10-14 14:42:49.439731] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:08.887 [2024-10-14 14:42:49.439741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:08.887 qpair failed and we were unable to recover it. 
00:29:08.887 [2024-10-14 14:42:49.449698] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.887 [2024-10-14 14:42:49.449792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.887 [2024-10-14 14:42:49.449802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.887 [2024-10-14 14:42:49.449807] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.887 [2024-10-14 14:42:49.449812] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:08.887 [2024-10-14 14:42:49.449821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:08.887 qpair failed and we were unable to recover it. 
00:29:08.887 [2024-10-14 14:42:49.459721] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.887 [2024-10-14 14:42:49.459771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.887 [2024-10-14 14:42:49.459780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.887 [2024-10-14 14:42:49.459785] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.887 [2024-10-14 14:42:49.459790] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:08.887 [2024-10-14 14:42:49.459800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:08.887 qpair failed and we were unable to recover it. 
00:29:08.887 [2024-10-14 14:42:49.469741] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.887 [2024-10-14 14:42:49.469792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.887 [2024-10-14 14:42:49.469802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.887 [2024-10-14 14:42:49.469807] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.887 [2024-10-14 14:42:49.469812] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:08.887 [2024-10-14 14:42:49.469821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:08.887 qpair failed and we were unable to recover it. 
00:29:08.887 [2024-10-14 14:42:49.479649] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.887 [2024-10-14 14:42:49.479711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.887 [2024-10-14 14:42:49.479721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.887 [2024-10-14 14:42:49.479726] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.887 [2024-10-14 14:42:49.479730] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:08.887 [2024-10-14 14:42:49.479740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:08.887 qpair failed and we were unable to recover it. 
00:29:08.887 [2024-10-14 14:42:49.489779] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.887 [2024-10-14 14:42:49.489839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.887 [2024-10-14 14:42:49.489849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.887 [2024-10-14 14:42:49.489854] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.887 [2024-10-14 14:42:49.489858] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:08.887 [2024-10-14 14:42:49.489868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:08.887 qpair failed and we were unable to recover it. 
00:29:08.887 [2024-10-14 14:42:49.499870] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.887 [2024-10-14 14:42:49.499915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.887 [2024-10-14 14:42:49.499925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.887 [2024-10-14 14:42:49.499930] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.887 [2024-10-14 14:42:49.499934] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:08.887 [2024-10-14 14:42:49.499944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:08.887 qpair failed and we were unable to recover it. 
00:29:08.887 [2024-10-14 14:42:49.509901] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.887 [2024-10-14 14:42:49.509952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.887 [2024-10-14 14:42:49.509962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.887 [2024-10-14 14:42:49.509967] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.887 [2024-10-14 14:42:49.509971] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:08.887 [2024-10-14 14:42:49.509980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:08.887 qpair failed and we were unable to recover it. 
00:29:08.887 [2024-10-14 14:42:49.519872] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.887 [2024-10-14 14:42:49.519924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.887 [2024-10-14 14:42:49.519933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.887 [2024-10-14 14:42:49.519938] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.887 [2024-10-14 14:42:49.519942] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:08.887 [2024-10-14 14:42:49.519952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:08.887 qpair failed and we were unable to recover it. 
00:29:08.887 [2024-10-14 14:42:49.529902] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.887 [2024-10-14 14:42:49.529952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.887 [2024-10-14 14:42:49.529962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.887 [2024-10-14 14:42:49.529967] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.887 [2024-10-14 14:42:49.529974] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:08.887 [2024-10-14 14:42:49.529983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:08.887 qpair failed and we were unable to recover it. 
00:29:08.887 [2024-10-14 14:42:49.539944] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.887 [2024-10-14 14:42:49.539989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.887 [2024-10-14 14:42:49.539998] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.887 [2024-10-14 14:42:49.540003] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.887 [2024-10-14 14:42:49.540008] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:08.888 [2024-10-14 14:42:49.540017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:08.888 qpair failed and we were unable to recover it. 
00:29:08.888 [2024-10-14 14:42:49.549964] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.888 [2024-10-14 14:42:49.550017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.888 [2024-10-14 14:42:49.550027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.888 [2024-10-14 14:42:49.550032] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.888 [2024-10-14 14:42:49.550036] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:08.888 [2024-10-14 14:42:49.550045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:08.888 qpair failed and we were unable to recover it. 
00:29:08.888 [2024-10-14 14:42:49.559867] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.888 [2024-10-14 14:42:49.559916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.888 [2024-10-14 14:42:49.559927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.888 [2024-10-14 14:42:49.559932] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.888 [2024-10-14 14:42:49.559936] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:08.888 [2024-10-14 14:42:49.559946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:08.888 qpair failed and we were unable to recover it. 
00:29:08.888 [2024-10-14 14:42:49.569951] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.888 [2024-10-14 14:42:49.570006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.888 [2024-10-14 14:42:49.570016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.888 [2024-10-14 14:42:49.570021] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.888 [2024-10-14 14:42:49.570025] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:08.888 [2024-10-14 14:42:49.570035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:08.888 qpair failed and we were unable to recover it. 
00:29:08.888 [2024-10-14 14:42:49.580035] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.888 [2024-10-14 14:42:49.580090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.888 [2024-10-14 14:42:49.580100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.888 [2024-10-14 14:42:49.580105] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.888 [2024-10-14 14:42:49.580109] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:08.888 [2024-10-14 14:42:49.580119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:08.888 qpair failed and we were unable to recover it. 
00:29:08.888 [2024-10-14 14:42:49.590066] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.888 [2024-10-14 14:42:49.590113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.888 [2024-10-14 14:42:49.590123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.888 [2024-10-14 14:42:49.590127] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.888 [2024-10-14 14:42:49.590132] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:08.888 [2024-10-14 14:42:49.590141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:08.888 qpair failed and we were unable to recover it. 
00:29:08.888 [2024-10-14 14:42:49.600092] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.888 [2024-10-14 14:42:49.600144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.888 [2024-10-14 14:42:49.600153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.888 [2024-10-14 14:42:49.600158] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.888 [2024-10-14 14:42:49.600162] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:08.888 [2024-10-14 14:42:49.600172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:08.888 qpair failed and we were unable to recover it. 
00:29:08.888 [2024-10-14 14:42:49.610183] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:08.888 [2024-10-14 14:42:49.610234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:08.888 [2024-10-14 14:42:49.610244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:08.888 [2024-10-14 14:42:49.610248] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:08.888 [2024-10-14 14:42:49.610253] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:08.888 [2024-10-14 14:42:49.610262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:08.888 qpair failed and we were unable to recover it. 
00:29:09.150 [2024-10-14 14:42:49.620140] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.150 [2024-10-14 14:42:49.620189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.150 [2024-10-14 14:42:49.620199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.150 [2024-10-14 14:42:49.620207] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.150 [2024-10-14 14:42:49.620211] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:09.150 [2024-10-14 14:42:49.620221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:09.150 qpair failed and we were unable to recover it. 
00:29:09.150 [2024-10-14 14:42:49.630159] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.150 [2024-10-14 14:42:49.630215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.150 [2024-10-14 14:42:49.630246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.150 [2024-10-14 14:42:49.630252] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.150 [2024-10-14 14:42:49.630256] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:09.150 [2024-10-14 14:42:49.630274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:09.150 qpair failed and we were unable to recover it. 
00:29:09.150 [2024-10-14 14:42:49.640090] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.150 [2024-10-14 14:42:49.640138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.150 [2024-10-14 14:42:49.640149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.150 [2024-10-14 14:42:49.640154] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.150 [2024-10-14 14:42:49.640158] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:09.150 [2024-10-14 14:42:49.640169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:09.150 qpair failed and we were unable to recover it. 
00:29:09.150 [2024-10-14 14:42:49.650262] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.150 [2024-10-14 14:42:49.650314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.150 [2024-10-14 14:42:49.650324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.150 [2024-10-14 14:42:49.650329] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.151 [2024-10-14 14:42:49.650334] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:09.151 [2024-10-14 14:42:49.650344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:09.151 qpair failed and we were unable to recover it. 
00:29:09.151 [2024-10-14 14:42:49.660255] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.151 [2024-10-14 14:42:49.660301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.151 [2024-10-14 14:42:49.660310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.151 [2024-10-14 14:42:49.660315] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.151 [2024-10-14 14:42:49.660319] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:09.151 [2024-10-14 14:42:49.660329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:09.151 qpair failed and we were unable to recover it. 
00:29:09.151 [2024-10-14 14:42:49.670302] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.151 [2024-10-14 14:42:49.670346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.151 [2024-10-14 14:42:49.670356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.151 [2024-10-14 14:42:49.670361] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.151 [2024-10-14 14:42:49.670365] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:09.151 [2024-10-14 14:42:49.670375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:09.151 qpair failed and we were unable to recover it. 
00:29:09.151 [2024-10-14 14:42:49.680325] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.151 [2024-10-14 14:42:49.680373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.151 [2024-10-14 14:42:49.680383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.151 [2024-10-14 14:42:49.680387] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.151 [2024-10-14 14:42:49.680391] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:09.151 [2024-10-14 14:42:49.680401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:09.151 qpair failed and we were unable to recover it. 
00:29:09.151 [2024-10-14 14:42:49.690222] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.151 [2024-10-14 14:42:49.690271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.151 [2024-10-14 14:42:49.690280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.151 [2024-10-14 14:42:49.690285] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.151 [2024-10-14 14:42:49.690289] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:09.151 [2024-10-14 14:42:49.690299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:09.151 qpair failed and we were unable to recover it. 
00:29:09.151 [2024-10-14 14:42:49.700354] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.151 [2024-10-14 14:42:49.700400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.151 [2024-10-14 14:42:49.700410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.151 [2024-10-14 14:42:49.700415] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.151 [2024-10-14 14:42:49.700419] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:09.151 [2024-10-14 14:42:49.700429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:09.151 qpair failed and we were unable to recover it. 
00:29:09.151 [2024-10-14 14:42:49.710391] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.151 [2024-10-14 14:42:49.710440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.151 [2024-10-14 14:42:49.710453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.151 [2024-10-14 14:42:49.710457] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.151 [2024-10-14 14:42:49.710462] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:09.151 [2024-10-14 14:42:49.710472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:09.151 qpair failed and we were unable to recover it. 
00:29:09.151 [2024-10-14 14:42:49.720285] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.151 [2024-10-14 14:42:49.720337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.151 [2024-10-14 14:42:49.720346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.151 [2024-10-14 14:42:49.720351] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.151 [2024-10-14 14:42:49.720355] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:09.151 [2024-10-14 14:42:49.720365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:09.151 qpair failed and we were unable to recover it. 
00:29:09.151 [2024-10-14 14:42:49.730456] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.151 [2024-10-14 14:42:49.730504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.151 [2024-10-14 14:42:49.730513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.151 [2024-10-14 14:42:49.730518] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.151 [2024-10-14 14:42:49.730523] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:09.151 [2024-10-14 14:42:49.730532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:09.151 qpair failed and we were unable to recover it. 
00:29:09.151 [2024-10-14 14:42:49.740458] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.151 [2024-10-14 14:42:49.740503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.151 [2024-10-14 14:42:49.740512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.151 [2024-10-14 14:42:49.740517] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.151 [2024-10-14 14:42:49.740521] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:09.151 [2024-10-14 14:42:49.740531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:09.151 qpair failed and we were unable to recover it. 
00:29:09.151 [2024-10-14 14:42:49.750354] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.151 [2024-10-14 14:42:49.750400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.151 [2024-10-14 14:42:49.750411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.151 [2024-10-14 14:42:49.750415] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.151 [2024-10-14 14:42:49.750420] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:09.151 [2024-10-14 14:42:49.750430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:09.151 qpair failed and we were unable to recover it. 
00:29:09.151 [2024-10-14 14:42:49.760524] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.151 [2024-10-14 14:42:49.760572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.151 [2024-10-14 14:42:49.760582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.151 [2024-10-14 14:42:49.760587] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.151 [2024-10-14 14:42:49.760592] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:09.151 [2024-10-14 14:42:49.760601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:09.151 qpair failed and we were unable to recover it. 
00:29:09.151 [2024-10-14 14:42:49.770568] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.151 [2024-10-14 14:42:49.770617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.151 [2024-10-14 14:42:49.770626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.151 [2024-10-14 14:42:49.770631] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.151 [2024-10-14 14:42:49.770636] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:09.151 [2024-10-14 14:42:49.770645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:09.151 qpair failed and we were unable to recover it. 
00:29:09.151 [2024-10-14 14:42:49.780506] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.151 [2024-10-14 14:42:49.780550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.151 [2024-10-14 14:42:49.780559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.151 [2024-10-14 14:42:49.780564] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.151 [2024-10-14 14:42:49.780568] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:09.151 [2024-10-14 14:42:49.780578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:09.151 qpair failed and we were unable to recover it. 
00:29:09.151 [2024-10-14 14:42:49.790617] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.151 [2024-10-14 14:42:49.790663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.151 [2024-10-14 14:42:49.790673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.151 [2024-10-14 14:42:49.790678] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.151 [2024-10-14 14:42:49.790682] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:09.152 [2024-10-14 14:42:49.790691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:09.152 qpair failed and we were unable to recover it. 
00:29:09.152 [2024-10-14 14:42:49.800615] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.152 [2024-10-14 14:42:49.800668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.152 [2024-10-14 14:42:49.800680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.152 [2024-10-14 14:42:49.800685] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.152 [2024-10-14 14:42:49.800689] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:09.152 [2024-10-14 14:42:49.800699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:09.152 qpair failed and we were unable to recover it. 
00:29:09.152 [2024-10-14 14:42:49.810657] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.152 [2024-10-14 14:42:49.810713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.152 [2024-10-14 14:42:49.810722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.152 [2024-10-14 14:42:49.810727] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.152 [2024-10-14 14:42:49.810731] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:09.152 [2024-10-14 14:42:49.810741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:09.152 qpair failed and we were unable to recover it. 
00:29:09.152 [2024-10-14 14:42:49.820573] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.152 [2024-10-14 14:42:49.820662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.152 [2024-10-14 14:42:49.820671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.152 [2024-10-14 14:42:49.820676] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.152 [2024-10-14 14:42:49.820681] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:09.152 [2024-10-14 14:42:49.820690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:09.152 qpair failed and we were unable to recover it. 
00:29:09.152 [2024-10-14 14:42:49.830714] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.152 [2024-10-14 14:42:49.830757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.152 [2024-10-14 14:42:49.830767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.152 [2024-10-14 14:42:49.830772] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.152 [2024-10-14 14:42:49.830776] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:09.152 [2024-10-14 14:42:49.830785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:09.152 qpair failed and we were unable to recover it. 
00:29:09.152 [2024-10-14 14:42:49.840742] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.152 [2024-10-14 14:42:49.840795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.152 [2024-10-14 14:42:49.840805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.152 [2024-10-14 14:42:49.840810] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.152 [2024-10-14 14:42:49.840814] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:09.152 [2024-10-14 14:42:49.840826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:09.152 qpair failed and we were unable to recover it. 
00:29:09.152 [2024-10-14 14:42:49.850681] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.152 [2024-10-14 14:42:49.850735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.152 [2024-10-14 14:42:49.850745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.152 [2024-10-14 14:42:49.850750] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.152 [2024-10-14 14:42:49.850754] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:09.152 [2024-10-14 14:42:49.850763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:09.152 qpair failed and we were unable to recover it.
00:29:09.152 [2024-10-14 14:42:49.860795] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.152 [2024-10-14 14:42:49.860877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.152 [2024-10-14 14:42:49.860886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.152 [2024-10-14 14:42:49.860891] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.152 [2024-10-14 14:42:49.860895] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:09.152 [2024-10-14 14:42:49.860905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:09.152 qpair failed and we were unable to recover it.
00:29:09.152 [2024-10-14 14:42:49.870826] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.152 [2024-10-14 14:42:49.870875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.152 [2024-10-14 14:42:49.870884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.152 [2024-10-14 14:42:49.870889] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.152 [2024-10-14 14:42:49.870893] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:09.152 [2024-10-14 14:42:49.870903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:09.152 qpair failed and we were unable to recover it.
00:29:09.415 [2024-10-14 14:42:49.880845] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.415 [2024-10-14 14:42:49.880899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.415 [2024-10-14 14:42:49.880918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.415 [2024-10-14 14:42:49.880924] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.415 [2024-10-14 14:42:49.880929] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:09.415 [2024-10-14 14:42:49.880943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:09.415 qpair failed and we were unable to recover it.
00:29:09.415 [2024-10-14 14:42:49.890911] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.415 [2024-10-14 14:42:49.890960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.415 [2024-10-14 14:42:49.890974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.415 [2024-10-14 14:42:49.890979] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.415 [2024-10-14 14:42:49.890984] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:09.415 [2024-10-14 14:42:49.890994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:09.415 qpair failed and we were unable to recover it.
00:29:09.415 [2024-10-14 14:42:49.900911] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.415 [2024-10-14 14:42:49.900957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.415 [2024-10-14 14:42:49.900967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.415 [2024-10-14 14:42:49.900972] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.415 [2024-10-14 14:42:49.900977] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:09.415 [2024-10-14 14:42:49.900987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:09.415 qpair failed and we were unable to recover it.
00:29:09.415 [2024-10-14 14:42:49.910931] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.415 [2024-10-14 14:42:49.910977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.415 [2024-10-14 14:42:49.910987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.415 [2024-10-14 14:42:49.910992] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.415 [2024-10-14 14:42:49.910996] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:09.415 [2024-10-14 14:42:49.911006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:09.415 qpair failed and we were unable to recover it.
00:29:09.415 [2024-10-14 14:42:49.920958] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.415 [2024-10-14 14:42:49.921006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.415 [2024-10-14 14:42:49.921016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.415 [2024-10-14 14:42:49.921021] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.415 [2024-10-14 14:42:49.921025] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:09.415 [2024-10-14 14:42:49.921035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:09.415 qpair failed and we were unable to recover it.
00:29:09.415 [2024-10-14 14:42:49.931007] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.415 [2024-10-14 14:42:49.931057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.415 [2024-10-14 14:42:49.931069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.415 [2024-10-14 14:42:49.931074] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.415 [2024-10-14 14:42:49.931081] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:09.415 [2024-10-14 14:42:49.931091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:09.415 qpair failed and we were unable to recover it.
00:29:09.415 [2024-10-14 14:42:49.941014] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.415 [2024-10-14 14:42:49.941061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.415 [2024-10-14 14:42:49.941073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.416 [2024-10-14 14:42:49.941078] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.416 [2024-10-14 14:42:49.941082] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:09.416 [2024-10-14 14:42:49.941092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:09.416 qpair failed and we were unable to recover it.
00:29:09.416 [2024-10-14 14:42:49.951096] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.416 [2024-10-14 14:42:49.951164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.416 [2024-10-14 14:42:49.951174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.416 [2024-10-14 14:42:49.951179] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.416 [2024-10-14 14:42:49.951184] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:09.416 [2024-10-14 14:42:49.951194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:09.416 qpair failed and we were unable to recover it.
00:29:09.416 [2024-10-14 14:42:49.960943] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.416 [2024-10-14 14:42:49.960990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.416 [2024-10-14 14:42:49.961000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.416 [2024-10-14 14:42:49.961006] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.416 [2024-10-14 14:42:49.961010] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:09.416 [2024-10-14 14:42:49.961020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:09.416 qpair failed and we were unable to recover it.
00:29:09.416 [2024-10-14 14:42:49.971170] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.416 [2024-10-14 14:42:49.971220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.416 [2024-10-14 14:42:49.971230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.416 [2024-10-14 14:42:49.971235] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.416 [2024-10-14 14:42:49.971240] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:09.416 [2024-10-14 14:42:49.971250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:09.416 qpair failed and we were unable to recover it.
00:29:09.416 [2024-10-14 14:42:49.981128] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.416 [2024-10-14 14:42:49.981174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.416 [2024-10-14 14:42:49.981183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.416 [2024-10-14 14:42:49.981188] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.416 [2024-10-14 14:42:49.981193] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:09.416 [2024-10-14 14:42:49.981203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:09.416 qpair failed and we were unable to recover it.
00:29:09.416 [2024-10-14 14:42:49.991037] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.416 [2024-10-14 14:42:49.991102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.416 [2024-10-14 14:42:49.991112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.416 [2024-10-14 14:42:49.991116] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.416 [2024-10-14 14:42:49.991121] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:09.416 [2024-10-14 14:42:49.991131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:09.416 qpair failed and we were unable to recover it.
00:29:09.416 [2024-10-14 14:42:50.001060] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.416 [2024-10-14 14:42:50.001116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.416 [2024-10-14 14:42:50.001126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.416 [2024-10-14 14:42:50.001130] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.416 [2024-10-14 14:42:50.001135] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:09.416 [2024-10-14 14:42:50.001144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:09.416 qpair failed and we were unable to recover it.
00:29:09.416 [2024-10-14 14:42:50.011208] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.416 [2024-10-14 14:42:50.011261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.416 [2024-10-14 14:42:50.011274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.416 [2024-10-14 14:42:50.011280] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.416 [2024-10-14 14:42:50.011284] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:09.416 [2024-10-14 14:42:50.011295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:09.416 qpair failed and we were unable to recover it.
00:29:09.416 [2024-10-14 14:42:50.021103] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.416 [2024-10-14 14:42:50.021158] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.416 [2024-10-14 14:42:50.021168] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.416 [2024-10-14 14:42:50.021173] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.416 [2024-10-14 14:42:50.021180] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:09.416 [2024-10-14 14:42:50.021191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:09.416 qpair failed and we were unable to recover it.
00:29:09.416 [2024-10-14 14:42:50.031265] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.416 [2024-10-14 14:42:50.031319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.416 [2024-10-14 14:42:50.031329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.416 [2024-10-14 14:42:50.031334] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.416 [2024-10-14 14:42:50.031339] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:09.416 [2024-10-14 14:42:50.031350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:09.416 qpair failed and we were unable to recover it.
00:29:09.416 [2024-10-14 14:42:50.041310] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.416 [2024-10-14 14:42:50.041357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.416 [2024-10-14 14:42:50.041366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.416 [2024-10-14 14:42:50.041371] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.416 [2024-10-14 14:42:50.041376] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:09.416 [2024-10-14 14:42:50.041386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:09.416 qpair failed and we were unable to recover it.
00:29:09.416 [2024-10-14 14:42:50.051211] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.416 [2024-10-14 14:42:50.051277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.416 [2024-10-14 14:42:50.051286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.416 [2024-10-14 14:42:50.051292] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.416 [2024-10-14 14:42:50.051296] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:09.416 [2024-10-14 14:42:50.051306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:09.416 qpair failed and we were unable to recover it.
00:29:09.416 [2024-10-14 14:42:50.061227] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.416 [2024-10-14 14:42:50.061281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.416 [2024-10-14 14:42:50.061291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.416 [2024-10-14 14:42:50.061297] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.416 [2024-10-14 14:42:50.061301] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:09.416 [2024-10-14 14:42:50.061312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:09.416 qpair failed and we were unable to recover it.
00:29:09.416 [2024-10-14 14:42:50.071352] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.416 [2024-10-14 14:42:50.071399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.416 [2024-10-14 14:42:50.071409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.416 [2024-10-14 14:42:50.071414] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.416 [2024-10-14 14:42:50.071419] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:09.416 [2024-10-14 14:42:50.071428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:09.416 qpair failed and we were unable to recover it.
00:29:09.416 [2024-10-14 14:42:50.081420] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.416 [2024-10-14 14:42:50.081467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.416 [2024-10-14 14:42:50.081477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.416 [2024-10-14 14:42:50.081482] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.416 [2024-10-14 14:42:50.081487] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:09.417 [2024-10-14 14:42:50.081497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:09.417 qpair failed and we were unable to recover it.
00:29:09.417 [2024-10-14 14:42:50.091456] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.417 [2024-10-14 14:42:50.091511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.417 [2024-10-14 14:42:50.091521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.417 [2024-10-14 14:42:50.091527] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.417 [2024-10-14 14:42:50.091531] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:09.417 [2024-10-14 14:42:50.091542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:09.417 qpair failed and we were unable to recover it.
00:29:09.417 [2024-10-14 14:42:50.101463] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.417 [2024-10-14 14:42:50.101507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.417 [2024-10-14 14:42:50.101517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.417 [2024-10-14 14:42:50.101522] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.417 [2024-10-14 14:42:50.101526] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:09.417 [2024-10-14 14:42:50.101536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:09.417 qpair failed and we were unable to recover it.
00:29:09.417 [2024-10-14 14:42:50.111366] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.417 [2024-10-14 14:42:50.111412] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.417 [2024-10-14 14:42:50.111423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.417 [2024-10-14 14:42:50.111430] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.417 [2024-10-14 14:42:50.111434] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:09.417 [2024-10-14 14:42:50.111445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:09.417 qpair failed and we were unable to recover it.
00:29:09.417 [2024-10-14 14:42:50.121530] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.417 [2024-10-14 14:42:50.121580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.417 [2024-10-14 14:42:50.121590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.417 [2024-10-14 14:42:50.121595] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.417 [2024-10-14 14:42:50.121599] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:09.417 [2024-10-14 14:42:50.121609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:09.417 qpair failed and we were unable to recover it.
00:29:09.417 [2024-10-14 14:42:50.131518] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.417 [2024-10-14 14:42:50.131608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.417 [2024-10-14 14:42:50.131618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.417 [2024-10-14 14:42:50.131623] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.417 [2024-10-14 14:42:50.131627] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:09.417 [2024-10-14 14:42:50.131637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:09.417 qpair failed and we were unable to recover it.
00:29:09.417 [2024-10-14 14:42:50.141539] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.417 [2024-10-14 14:42:50.141587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.417 [2024-10-14 14:42:50.141597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.417 [2024-10-14 14:42:50.141602] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.417 [2024-10-14 14:42:50.141606] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:09.417 [2024-10-14 14:42:50.141616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:09.417 qpair failed and we were unable to recover it.
00:29:09.679 [2024-10-14 14:42:50.151539] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.679 [2024-10-14 14:42:50.151611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.679 [2024-10-14 14:42:50.151621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.679 [2024-10-14 14:42:50.151626] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.679 [2024-10-14 14:42:50.151631] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:09.679 [2024-10-14 14:42:50.151640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:09.679 qpair failed and we were unable to recover it.
00:29:09.679 [2024-10-14 14:42:50.161639] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.679 [2024-10-14 14:42:50.161690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.679 [2024-10-14 14:42:50.161701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.679 [2024-10-14 14:42:50.161706] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.679 [2024-10-14 14:42:50.161710] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:09.679 [2024-10-14 14:42:50.161720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:09.680 qpair failed and we were unable to recover it.
00:29:09.680 [2024-10-14 14:42:50.171641] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.680 [2024-10-14 14:42:50.171684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.680 [2024-10-14 14:42:50.171694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.680 [2024-10-14 14:42:50.171699] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.680 [2024-10-14 14:42:50.171703] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:09.680 [2024-10-14 14:42:50.171713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:09.680 qpair failed and we were unable to recover it.
00:29:09.680 [2024-10-14 14:42:50.181644] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.680 [2024-10-14 14:42:50.181723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.680 [2024-10-14 14:42:50.181733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.680 [2024-10-14 14:42:50.181738] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.680 [2024-10-14 14:42:50.181742] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:09.680 [2024-10-14 14:42:50.181753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:09.680 qpair failed and we were unable to recover it.
00:29:09.680 [2024-10-14 14:42:50.191663] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.680 [2024-10-14 14:42:50.191705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.680 [2024-10-14 14:42:50.191714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.680 [2024-10-14 14:42:50.191720] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.680 [2024-10-14 14:42:50.191724] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:09.680 [2024-10-14 14:42:50.191734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:09.680 qpair failed and we were unable to recover it.
00:29:09.680 [2024-10-14 14:42:50.201744] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:09.680 [2024-10-14 14:42:50.201792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:09.680 [2024-10-14 14:42:50.201801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:09.680 [2024-10-14 14:42:50.201809] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:09.680 [2024-10-14 14:42:50.201814] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:09.680 [2024-10-14 14:42:50.201823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:09.680 qpair failed and we were unable to recover it.
00:29:09.680 [2024-10-14 14:42:50.211708] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.680 [2024-10-14 14:42:50.211751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.680 [2024-10-14 14:42:50.211761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.680 [2024-10-14 14:42:50.211766] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.680 [2024-10-14 14:42:50.211770] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:09.680 [2024-10-14 14:42:50.211780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:09.680 qpair failed and we were unable to recover it. 
00:29:09.680 [2024-10-14 14:42:50.221754] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.680 [2024-10-14 14:42:50.221809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.680 [2024-10-14 14:42:50.221819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.680 [2024-10-14 14:42:50.221824] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.680 [2024-10-14 14:42:50.221829] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:09.680 [2024-10-14 14:42:50.221839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:09.680 qpair failed and we were unable to recover it. 
00:29:09.680 [2024-10-14 14:42:50.231758] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.680 [2024-10-14 14:42:50.231798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.680 [2024-10-14 14:42:50.231808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.680 [2024-10-14 14:42:50.231813] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.680 [2024-10-14 14:42:50.231817] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:09.680 [2024-10-14 14:42:50.231827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:09.680 qpair failed and we were unable to recover it. 
00:29:09.680 [2024-10-14 14:42:50.241849] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.680 [2024-10-14 14:42:50.241898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.680 [2024-10-14 14:42:50.241907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.680 [2024-10-14 14:42:50.241912] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.680 [2024-10-14 14:42:50.241917] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:09.680 [2024-10-14 14:42:50.241927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:09.680 qpair failed and we were unable to recover it. 
00:29:09.680 [2024-10-14 14:42:50.251792] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.680 [2024-10-14 14:42:50.251844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.680 [2024-10-14 14:42:50.251863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.680 [2024-10-14 14:42:50.251869] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.680 [2024-10-14 14:42:50.251874] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:09.680 [2024-10-14 14:42:50.251888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:09.680 qpair failed and we were unable to recover it. 
00:29:09.680 [2024-10-14 14:42:50.261853] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.680 [2024-10-14 14:42:50.261904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.680 [2024-10-14 14:42:50.261922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.680 [2024-10-14 14:42:50.261928] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.680 [2024-10-14 14:42:50.261933] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:09.680 [2024-10-14 14:42:50.261946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:09.680 qpair failed and we were unable to recover it. 
00:29:09.680 [2024-10-14 14:42:50.271888] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.680 [2024-10-14 14:42:50.271931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.680 [2024-10-14 14:42:50.271943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.680 [2024-10-14 14:42:50.271948] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.680 [2024-10-14 14:42:50.271952] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:09.680 [2024-10-14 14:42:50.271963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:09.680 qpair failed and we were unable to recover it. 
00:29:09.680 [2024-10-14 14:42:50.281969] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.680 [2024-10-14 14:42:50.282016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.680 [2024-10-14 14:42:50.282026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.680 [2024-10-14 14:42:50.282031] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.680 [2024-10-14 14:42:50.282035] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:09.680 [2024-10-14 14:42:50.282045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:09.680 qpair failed and we were unable to recover it. 
00:29:09.680 [2024-10-14 14:42:50.291959] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.680 [2024-10-14 14:42:50.292001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.680 [2024-10-14 14:42:50.292017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.680 [2024-10-14 14:42:50.292022] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.680 [2024-10-14 14:42:50.292026] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:09.680 [2024-10-14 14:42:50.292036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:09.680 qpair failed and we were unable to recover it. 
00:29:09.680 [2024-10-14 14:42:50.301960] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.680 [2024-10-14 14:42:50.302004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.680 [2024-10-14 14:42:50.302014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.680 [2024-10-14 14:42:50.302018] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.680 [2024-10-14 14:42:50.302023] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:09.681 [2024-10-14 14:42:50.302032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:09.681 qpair failed and we were unable to recover it. 
00:29:09.681 [2024-10-14 14:42:50.311992] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.681 [2024-10-14 14:42:50.312036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.681 [2024-10-14 14:42:50.312046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.681 [2024-10-14 14:42:50.312051] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.681 [2024-10-14 14:42:50.312055] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:09.681 [2024-10-14 14:42:50.312067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:09.681 qpair failed and we were unable to recover it. 
00:29:09.681 [2024-10-14 14:42:50.322059] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.681 [2024-10-14 14:42:50.322116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.681 [2024-10-14 14:42:50.322126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.681 [2024-10-14 14:42:50.322130] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.681 [2024-10-14 14:42:50.322135] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:09.681 [2024-10-14 14:42:50.322144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:09.681 qpair failed and we were unable to recover it. 
00:29:09.681 [2024-10-14 14:42:50.332094] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.681 [2024-10-14 14:42:50.332140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.681 [2024-10-14 14:42:50.332149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.681 [2024-10-14 14:42:50.332154] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.681 [2024-10-14 14:42:50.332158] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:09.681 [2024-10-14 14:42:50.332171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:09.681 qpair failed and we were unable to recover it. 
00:29:09.681 [2024-10-14 14:42:50.342052] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.681 [2024-10-14 14:42:50.342105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.681 [2024-10-14 14:42:50.342114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.681 [2024-10-14 14:42:50.342119] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.681 [2024-10-14 14:42:50.342124] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:09.681 [2024-10-14 14:42:50.342133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:09.681 qpair failed and we were unable to recover it. 
00:29:09.681 [2024-10-14 14:42:50.352098] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.681 [2024-10-14 14:42:50.352142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.681 [2024-10-14 14:42:50.352152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.681 [2024-10-14 14:42:50.352157] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.681 [2024-10-14 14:42:50.352162] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:09.681 [2024-10-14 14:42:50.352171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:09.681 qpair failed and we were unable to recover it. 
00:29:09.681 [2024-10-14 14:42:50.362192] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.681 [2024-10-14 14:42:50.362270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.681 [2024-10-14 14:42:50.362281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.681 [2024-10-14 14:42:50.362287] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.681 [2024-10-14 14:42:50.362291] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:09.681 [2024-10-14 14:42:50.362301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:09.681 qpair failed and we were unable to recover it. 
00:29:09.681 [2024-10-14 14:42:50.372019] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.681 [2024-10-14 14:42:50.372071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.681 [2024-10-14 14:42:50.372081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.681 [2024-10-14 14:42:50.372086] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.681 [2024-10-14 14:42:50.372090] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:09.681 [2024-10-14 14:42:50.372100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:09.681 qpair failed and we were unable to recover it. 
00:29:09.681 [2024-10-14 14:42:50.382147] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.681 [2024-10-14 14:42:50.382192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.681 [2024-10-14 14:42:50.382204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.681 [2024-10-14 14:42:50.382209] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.681 [2024-10-14 14:42:50.382214] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:09.681 [2024-10-14 14:42:50.382223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:09.681 qpair failed and we were unable to recover it. 
00:29:09.681 [2024-10-14 14:42:50.392214] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.681 [2024-10-14 14:42:50.392258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.681 [2024-10-14 14:42:50.392267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.681 [2024-10-14 14:42:50.392272] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.681 [2024-10-14 14:42:50.392276] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:09.681 [2024-10-14 14:42:50.392286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:09.681 qpair failed and we were unable to recover it. 
00:29:09.681 [2024-10-14 14:42:50.402289] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.681 [2024-10-14 14:42:50.402338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.681 [2024-10-14 14:42:50.402347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.681 [2024-10-14 14:42:50.402352] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.681 [2024-10-14 14:42:50.402356] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:09.681 [2024-10-14 14:42:50.402366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:09.681 qpair failed and we were unable to recover it. 
00:29:09.944 [2024-10-14 14:42:50.412266] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.945 [2024-10-14 14:42:50.412327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.945 [2024-10-14 14:42:50.412337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.945 [2024-10-14 14:42:50.412341] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.945 [2024-10-14 14:42:50.412346] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:09.945 [2024-10-14 14:42:50.412355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:09.945 qpair failed and we were unable to recover it. 
00:29:09.945 [2024-10-14 14:42:50.422280] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.945 [2024-10-14 14:42:50.422322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.945 [2024-10-14 14:42:50.422332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.945 [2024-10-14 14:42:50.422336] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.945 [2024-10-14 14:42:50.422341] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:09.945 [2024-10-14 14:42:50.422353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:09.945 qpair failed and we were unable to recover it. 
00:29:09.945 [2024-10-14 14:42:50.432274] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.945 [2024-10-14 14:42:50.432316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.945 [2024-10-14 14:42:50.432326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.945 [2024-10-14 14:42:50.432331] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.945 [2024-10-14 14:42:50.432335] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:09.945 [2024-10-14 14:42:50.432345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:09.945 qpair failed and we were unable to recover it. 
00:29:09.945 [2024-10-14 14:42:50.442370] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.945 [2024-10-14 14:42:50.442421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.945 [2024-10-14 14:42:50.442430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.945 [2024-10-14 14:42:50.442435] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.945 [2024-10-14 14:42:50.442439] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:09.945 [2024-10-14 14:42:50.442449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:09.945 qpair failed and we were unable to recover it. 
00:29:09.945 [2024-10-14 14:42:50.452246] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.945 [2024-10-14 14:42:50.452292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.945 [2024-10-14 14:42:50.452301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.945 [2024-10-14 14:42:50.452306] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.945 [2024-10-14 14:42:50.452310] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:09.945 [2024-10-14 14:42:50.452320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:09.945 qpair failed and we were unable to recover it. 
00:29:09.945 [2024-10-14 14:42:50.462381] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.945 [2024-10-14 14:42:50.462430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.945 [2024-10-14 14:42:50.462441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.945 [2024-10-14 14:42:50.462446] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.945 [2024-10-14 14:42:50.462450] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:09.945 [2024-10-14 14:42:50.462460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:09.945 qpair failed and we were unable to recover it. 
00:29:09.945 [2024-10-14 14:42:50.472419] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.945 [2024-10-14 14:42:50.472462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.945 [2024-10-14 14:42:50.472472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.945 [2024-10-14 14:42:50.472476] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.945 [2024-10-14 14:42:50.472481] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:09.945 [2024-10-14 14:42:50.472490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:09.945 qpair failed and we were unable to recover it. 
00:29:09.945 [2024-10-14 14:42:50.482490] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.945 [2024-10-14 14:42:50.482539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.945 [2024-10-14 14:42:50.482548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.945 [2024-10-14 14:42:50.482553] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.945 [2024-10-14 14:42:50.482558] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:09.945 [2024-10-14 14:42:50.482567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:09.945 qpair failed and we were unable to recover it. 
00:29:09.945 [2024-10-14 14:42:50.492494] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.945 [2024-10-14 14:42:50.492544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.945 [2024-10-14 14:42:50.492554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.945 [2024-10-14 14:42:50.492558] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.945 [2024-10-14 14:42:50.492563] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:09.945 [2024-10-14 14:42:50.492572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:09.945 qpair failed and we were unable to recover it. 
00:29:09.945 [2024-10-14 14:42:50.502464] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.945 [2024-10-14 14:42:50.502508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.945 [2024-10-14 14:42:50.502518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.945 [2024-10-14 14:42:50.502522] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.945 [2024-10-14 14:42:50.502527] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:09.945 [2024-10-14 14:42:50.502536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:09.945 qpair failed and we were unable to recover it. 
00:29:09.945 [2024-10-14 14:42:50.512491] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.945 [2024-10-14 14:42:50.512531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.945 [2024-10-14 14:42:50.512541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.945 [2024-10-14 14:42:50.512545] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.945 [2024-10-14 14:42:50.512552] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:09.945 [2024-10-14 14:42:50.512562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:09.945 qpair failed and we were unable to recover it. 
00:29:09.945 [2024-10-14 14:42:50.522452] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.945 [2024-10-14 14:42:50.522501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.945 [2024-10-14 14:42:50.522511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.945 [2024-10-14 14:42:50.522516] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.945 [2024-10-14 14:42:50.522520] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:09.945 [2024-10-14 14:42:50.522530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:09.945 qpair failed and we were unable to recover it. 
00:29:09.945 [2024-10-14 14:42:50.532541] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.945 [2024-10-14 14:42:50.532589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.945 [2024-10-14 14:42:50.532598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.945 [2024-10-14 14:42:50.532603] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.945 [2024-10-14 14:42:50.532607] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:09.945 [2024-10-14 14:42:50.532617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:09.945 qpair failed and we were unable to recover it. 
00:29:09.945 [2024-10-14 14:42:50.542594] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.945 [2024-10-14 14:42:50.542640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.945 [2024-10-14 14:42:50.542649] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.945 [2024-10-14 14:42:50.542654] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.945 [2024-10-14 14:42:50.542659] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:09.945 [2024-10-14 14:42:50.542668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:09.945 qpair failed and we were unable to recover it. 
00:29:09.946 [2024-10-14 14:42:50.552687] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.946 [2024-10-14 14:42:50.552734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.946 [2024-10-14 14:42:50.552743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.946 [2024-10-14 14:42:50.552748] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.946 [2024-10-14 14:42:50.552753] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:09.946 [2024-10-14 14:42:50.552762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:09.946 qpair failed and we were unable to recover it. 
00:29:09.946 [2024-10-14 14:42:50.562699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.946 [2024-10-14 14:42:50.562754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.946 [2024-10-14 14:42:50.562764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.946 [2024-10-14 14:42:50.562769] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.946 [2024-10-14 14:42:50.562773] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:09.946 [2024-10-14 14:42:50.562783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:09.946 qpair failed and we were unable to recover it. 
00:29:09.946 [2024-10-14 14:42:50.572616] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.946 [2024-10-14 14:42:50.572663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.946 [2024-10-14 14:42:50.572673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.946 [2024-10-14 14:42:50.572678] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.946 [2024-10-14 14:42:50.572682] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:09.946 [2024-10-14 14:42:50.572692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:09.946 qpair failed and we were unable to recover it. 
00:29:09.946 [2024-10-14 14:42:50.582667] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.946 [2024-10-14 14:42:50.582707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.946 [2024-10-14 14:42:50.582717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.946 [2024-10-14 14:42:50.582722] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.946 [2024-10-14 14:42:50.582726] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:09.946 [2024-10-14 14:42:50.582736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:09.946 qpair failed and we were unable to recover it. 
00:29:09.946 [2024-10-14 14:42:50.592745] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.946 [2024-10-14 14:42:50.592790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.946 [2024-10-14 14:42:50.592799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.946 [2024-10-14 14:42:50.592804] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.946 [2024-10-14 14:42:50.592808] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:09.946 [2024-10-14 14:42:50.592817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:09.946 qpair failed and we were unable to recover it. 
00:29:09.946 [2024-10-14 14:42:50.602849] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.946 [2024-10-14 14:42:50.602903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.946 [2024-10-14 14:42:50.602922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.946 [2024-10-14 14:42:50.602932] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.946 [2024-10-14 14:42:50.602937] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:09.946 [2024-10-14 14:42:50.602950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:09.946 qpair failed and we were unable to recover it. 
00:29:09.946 [2024-10-14 14:42:50.612807] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.946 [2024-10-14 14:42:50.612849] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.946 [2024-10-14 14:42:50.612860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.946 [2024-10-14 14:42:50.612866] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.946 [2024-10-14 14:42:50.612871] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:09.946 [2024-10-14 14:42:50.612882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:09.946 qpair failed and we were unable to recover it. 
00:29:09.946 [2024-10-14 14:42:50.622837] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.946 [2024-10-14 14:42:50.622880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.946 [2024-10-14 14:42:50.622890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.946 [2024-10-14 14:42:50.622895] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.946 [2024-10-14 14:42:50.622899] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:09.946 [2024-10-14 14:42:50.622910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:09.946 qpair failed and we were unable to recover it. 
00:29:09.946 [2024-10-14 14:42:50.632860] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.946 [2024-10-14 14:42:50.632901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.946 [2024-10-14 14:42:50.632910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.946 [2024-10-14 14:42:50.632915] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.946 [2024-10-14 14:42:50.632920] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:09.946 [2024-10-14 14:42:50.632929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:09.946 qpair failed and we were unable to recover it. 
00:29:09.946 [2024-10-14 14:42:50.642925] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.946 [2024-10-14 14:42:50.642974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.946 [2024-10-14 14:42:50.642983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.946 [2024-10-14 14:42:50.642988] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.946 [2024-10-14 14:42:50.642993] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:09.946 [2024-10-14 14:42:50.643002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:09.946 qpair failed and we were unable to recover it. 
00:29:09.946 [2024-10-14 14:42:50.652966] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.946 [2024-10-14 14:42:50.653053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.946 [2024-10-14 14:42:50.653066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.946 [2024-10-14 14:42:50.653071] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.946 [2024-10-14 14:42:50.653076] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:09.946 [2024-10-14 14:42:50.653085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:09.946 qpair failed and we were unable to recover it. 
00:29:09.946 [2024-10-14 14:42:50.662961] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.946 [2024-10-14 14:42:50.663006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.946 [2024-10-14 14:42:50.663016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.946 [2024-10-14 14:42:50.663021] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.946 [2024-10-14 14:42:50.663026] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:09.946 [2024-10-14 14:42:50.663036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:09.946 qpair failed and we were unable to recover it. 
00:29:09.946 [2024-10-14 14:42:50.672965] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:09.946 [2024-10-14 14:42:50.673008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:09.946 [2024-10-14 14:42:50.673018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:09.946 [2024-10-14 14:42:50.673023] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:09.946 [2024-10-14 14:42:50.673027] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:09.946 [2024-10-14 14:42:50.673037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:09.946 qpair failed and we were unable to recover it. 
00:29:10.209 [2024-10-14 14:42:50.683047] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.209 [2024-10-14 14:42:50.683103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.209 [2024-10-14 14:42:50.683114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.209 [2024-10-14 14:42:50.683119] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.210 [2024-10-14 14:42:50.683124] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:10.210 [2024-10-14 14:42:50.683134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:10.210 qpair failed and we were unable to recover it. 
00:29:10.210 [2024-10-14 14:42:50.693049] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.210 [2024-10-14 14:42:50.693143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.210 [2024-10-14 14:42:50.693153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.210 [2024-10-14 14:42:50.693162] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.210 [2024-10-14 14:42:50.693166] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:10.210 [2024-10-14 14:42:50.693177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:10.210 qpair failed and we were unable to recover it. 
00:29:10.210 [2024-10-14 14:42:50.703067] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.210 [2024-10-14 14:42:50.703122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.210 [2024-10-14 14:42:50.703132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.210 [2024-10-14 14:42:50.703137] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.210 [2024-10-14 14:42:50.703141] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:10.210 [2024-10-14 14:42:50.703151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:10.210 qpair failed and we were unable to recover it. 
00:29:10.210 [2024-10-14 14:42:50.713167] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.210 [2024-10-14 14:42:50.713220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.210 [2024-10-14 14:42:50.713229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.210 [2024-10-14 14:42:50.713234] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.210 [2024-10-14 14:42:50.713239] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:10.210 [2024-10-14 14:42:50.713249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:10.210 qpair failed and we were unable to recover it. 
00:29:10.210 [2024-10-14 14:42:50.723217] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.210 [2024-10-14 14:42:50.723266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.210 [2024-10-14 14:42:50.723276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.210 [2024-10-14 14:42:50.723281] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.210 [2024-10-14 14:42:50.723285] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:10.210 [2024-10-14 14:42:50.723295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:10.210 qpair failed and we were unable to recover it. 
00:29:10.210 [2024-10-14 14:42:50.733192] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.210 [2024-10-14 14:42:50.733239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.210 [2024-10-14 14:42:50.733249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.210 [2024-10-14 14:42:50.733253] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.210 [2024-10-14 14:42:50.733258] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:10.210 [2024-10-14 14:42:50.733268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:10.210 qpair failed and we were unable to recover it. 
00:29:10.210 [2024-10-14 14:42:50.743180] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.210 [2024-10-14 14:42:50.743227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.210 [2024-10-14 14:42:50.743237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.210 [2024-10-14 14:42:50.743242] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.210 [2024-10-14 14:42:50.743246] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:10.210 [2024-10-14 14:42:50.743256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:10.210 qpair failed and we were unable to recover it. 
00:29:10.210 [2024-10-14 14:42:50.753194] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.210 [2024-10-14 14:42:50.753241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.210 [2024-10-14 14:42:50.753250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.210 [2024-10-14 14:42:50.753255] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.210 [2024-10-14 14:42:50.753259] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:10.210 [2024-10-14 14:42:50.753269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:10.210 qpair failed and we were unable to recover it. 
00:29:10.210 [2024-10-14 14:42:50.763286] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.210 [2024-10-14 14:42:50.763334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.210 [2024-10-14 14:42:50.763344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.210 [2024-10-14 14:42:50.763349] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.210 [2024-10-14 14:42:50.763354] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:10.210 [2024-10-14 14:42:50.763363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:10.210 qpair failed and we were unable to recover it. 
00:29:10.210 [2024-10-14 14:42:50.773251] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.210 [2024-10-14 14:42:50.773296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.210 [2024-10-14 14:42:50.773306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.210 [2024-10-14 14:42:50.773311] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.210 [2024-10-14 14:42:50.773315] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:10.210 [2024-10-14 14:42:50.773325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:10.210 qpair failed and we were unable to recover it. 
00:29:10.210 [2024-10-14 14:42:50.783297] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.210 [2024-10-14 14:42:50.783376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.210 [2024-10-14 14:42:50.783388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.210 [2024-10-14 14:42:50.783393] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.210 [2024-10-14 14:42:50.783398] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:10.210 [2024-10-14 14:42:50.783407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:10.210 qpair failed and we were unable to recover it. 
00:29:10.210 [2024-10-14 14:42:50.793309] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.210 [2024-10-14 14:42:50.793386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.210 [2024-10-14 14:42:50.793395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.210 [2024-10-14 14:42:50.793400] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.210 [2024-10-14 14:42:50.793405] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:10.210 [2024-10-14 14:42:50.793414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:10.210 qpair failed and we were unable to recover it. 
00:29:10.210 [2024-10-14 14:42:50.803389] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.210 [2024-10-14 14:42:50.803435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.210 [2024-10-14 14:42:50.803445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.210 [2024-10-14 14:42:50.803450] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.210 [2024-10-14 14:42:50.803454] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:10.210 [2024-10-14 14:42:50.803464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:10.210 qpair failed and we were unable to recover it. 
00:29:10.210 [2024-10-14 14:42:50.813402] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.210 [2024-10-14 14:42:50.813447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.210 [2024-10-14 14:42:50.813456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.210 [2024-10-14 14:42:50.813461] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.210 [2024-10-14 14:42:50.813465] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:10.210 [2024-10-14 14:42:50.813475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:10.210 qpair failed and we were unable to recover it. 
00:29:10.210 [2024-10-14 14:42:50.823269] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.211 [2024-10-14 14:42:50.823312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.211 [2024-10-14 14:42:50.823321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.211 [2024-10-14 14:42:50.823326] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.211 [2024-10-14 14:42:50.823330] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:10.211 [2024-10-14 14:42:50.823343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:10.211 qpair failed and we were unable to recover it. 
00:29:10.211 [2024-10-14 14:42:50.833432] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.211 [2024-10-14 14:42:50.833477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.211 [2024-10-14 14:42:50.833487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.211 [2024-10-14 14:42:50.833492] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.211 [2024-10-14 14:42:50.833496] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:10.211 [2024-10-14 14:42:50.833505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:10.211 qpair failed and we were unable to recover it. 
00:29:10.211 [2024-10-14 14:42:50.843517] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.211 [2024-10-14 14:42:50.843563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.211 [2024-10-14 14:42:50.843573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.211 [2024-10-14 14:42:50.843578] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.211 [2024-10-14 14:42:50.843582] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:10.211 [2024-10-14 14:42:50.843592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:10.211 qpair failed and we were unable to recover it. 
00:29:10.211 [2024-10-14 14:42:50.853501] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.211 [2024-10-14 14:42:50.853552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.211 [2024-10-14 14:42:50.853561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.211 [2024-10-14 14:42:50.853566] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.211 [2024-10-14 14:42:50.853571] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:10.211 [2024-10-14 14:42:50.853580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:10.211 qpair failed and we were unable to recover it. 
00:29:10.211 [2024-10-14 14:42:50.863465] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.211 [2024-10-14 14:42:50.863538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.211 [2024-10-14 14:42:50.863548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.211 [2024-10-14 14:42:50.863553] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.211 [2024-10-14 14:42:50.863557] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:10.211 [2024-10-14 14:42:50.863567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:10.211 qpair failed and we were unable to recover it. 
00:29:10.211 [2024-10-14 14:42:50.873400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.211 [2024-10-14 14:42:50.873442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.211 [2024-10-14 14:42:50.873454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.211 [2024-10-14 14:42:50.873459] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.211 [2024-10-14 14:42:50.873463] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:10.211 [2024-10-14 14:42:50.873473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:10.211 qpair failed and we were unable to recover it. 
00:29:10.211 [2024-10-14 14:42:50.883615] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.211 [2024-10-14 14:42:50.883663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.211 [2024-10-14 14:42:50.883672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.211 [2024-10-14 14:42:50.883677] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.211 [2024-10-14 14:42:50.883682] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:10.211 [2024-10-14 14:42:50.883691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:10.211 qpair failed and we were unable to recover it. 
00:29:10.211 [2024-10-14 14:42:50.893571] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.211 [2024-10-14 14:42:50.893614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.211 [2024-10-14 14:42:50.893623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.211 [2024-10-14 14:42:50.893628] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.211 [2024-10-14 14:42:50.893633] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:10.211 [2024-10-14 14:42:50.893642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:10.211 qpair failed and we were unable to recover it. 
00:29:10.211 [2024-10-14 14:42:50.903595] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.211 [2024-10-14 14:42:50.903642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.211 [2024-10-14 14:42:50.903651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.211 [2024-10-14 14:42:50.903656] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.211 [2024-10-14 14:42:50.903660] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:10.211 [2024-10-14 14:42:50.903670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:10.211 qpair failed and we were unable to recover it. 
00:29:10.211 [2024-10-14 14:42:50.913650] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.211 [2024-10-14 14:42:50.913694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.211 [2024-10-14 14:42:50.913703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.211 [2024-10-14 14:42:50.913708] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.211 [2024-10-14 14:42:50.913713] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:10.211 [2024-10-14 14:42:50.913724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:10.211 qpair failed and we were unable to recover it. 
00:29:10.211 [2024-10-14 14:42:50.923594] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.211 [2024-10-14 14:42:50.923647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.211 [2024-10-14 14:42:50.923657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.211 [2024-10-14 14:42:50.923661] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.211 [2024-10-14 14:42:50.923666] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:10.211 [2024-10-14 14:42:50.923675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:10.211 qpair failed and we were unable to recover it. 
00:29:10.211 [2024-10-14 14:42:50.933731] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.211 [2024-10-14 14:42:50.933776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.211 [2024-10-14 14:42:50.933786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.211 [2024-10-14 14:42:50.933790] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.211 [2024-10-14 14:42:50.933795] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:10.211 [2024-10-14 14:42:50.933804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:10.211 qpair failed and we were unable to recover it. 
00:29:10.474 [2024-10-14 14:42:50.943753] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.474 [2024-10-14 14:42:50.943803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.474 [2024-10-14 14:42:50.943822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.474 [2024-10-14 14:42:50.943828] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.474 [2024-10-14 14:42:50.943833] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:10.474 [2024-10-14 14:42:50.943846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:10.474 qpair failed and we were unable to recover it. 
00:29:10.474 [2024-10-14 14:42:50.953775] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.474 [2024-10-14 14:42:50.953821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.474 [2024-10-14 14:42:50.953832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.474 [2024-10-14 14:42:50.953837] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.474 [2024-10-14 14:42:50.953841] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:10.474 [2024-10-14 14:42:50.953852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:10.474 qpair failed and we were unable to recover it. 
00:29:10.474 [2024-10-14 14:42:50.963834] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.474 [2024-10-14 14:42:50.963884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.474 [2024-10-14 14:42:50.963898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.474 [2024-10-14 14:42:50.963903] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.474 [2024-10-14 14:42:50.963908] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:10.474 [2024-10-14 14:42:50.963918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:10.474 qpair failed and we were unable to recover it. 
00:29:10.474 [2024-10-14 14:42:50.973809] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.474 [2024-10-14 14:42:50.973873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.474 [2024-10-14 14:42:50.973891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.474 [2024-10-14 14:42:50.973897] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.474 [2024-10-14 14:42:50.973902] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:10.474 [2024-10-14 14:42:50.973916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:10.474 qpair failed and we were unable to recover it. 
00:29:10.474 [2024-10-14 14:42:50.983861] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.474 [2024-10-14 14:42:50.983909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.474 [2024-10-14 14:42:50.983928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.474 [2024-10-14 14:42:50.983934] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.474 [2024-10-14 14:42:50.983939] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:10.474 [2024-10-14 14:42:50.983952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:10.474 qpair failed and we were unable to recover it. 
00:29:10.474 [2024-10-14 14:42:50.993876] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.474 [2024-10-14 14:42:50.993925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.474 [2024-10-14 14:42:50.993937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.474 [2024-10-14 14:42:50.993942] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.474 [2024-10-14 14:42:50.993947] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:10.474 [2024-10-14 14:42:50.993957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:10.474 qpair failed and we were unable to recover it. 
00:29:10.474 [2024-10-14 14:42:51.003972] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.474 [2024-10-14 14:42:51.004077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.474 [2024-10-14 14:42:51.004087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.474 [2024-10-14 14:42:51.004092] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.474 [2024-10-14 14:42:51.004100] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:10.474 [2024-10-14 14:42:51.004110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:10.474 qpair failed and we were unable to recover it. 
00:29:10.474 [2024-10-14 14:42:51.013947] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.474 [2024-10-14 14:42:51.014005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.474 [2024-10-14 14:42:51.014015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.474 [2024-10-14 14:42:51.014019] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.474 [2024-10-14 14:42:51.014024] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:10.474 [2024-10-14 14:42:51.014033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:10.475 qpair failed and we were unable to recover it. 
00:29:10.475 [2024-10-14 14:42:51.023819] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.475 [2024-10-14 14:42:51.023862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.475 [2024-10-14 14:42:51.023872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.475 [2024-10-14 14:42:51.023877] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.475 [2024-10-14 14:42:51.023881] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:10.475 [2024-10-14 14:42:51.023891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:10.475 qpair failed and we were unable to recover it. 
00:29:10.475 [2024-10-14 14:42:51.033997] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.475 [2024-10-14 14:42:51.034042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.475 [2024-10-14 14:42:51.034052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.475 [2024-10-14 14:42:51.034057] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.475 [2024-10-14 14:42:51.034065] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:10.475 [2024-10-14 14:42:51.034076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:10.475 qpair failed and we were unable to recover it. 
00:29:10.475 [2024-10-14 14:42:51.044069] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.475 [2024-10-14 14:42:51.044136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.475 [2024-10-14 14:42:51.044145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.475 [2024-10-14 14:42:51.044150] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.475 [2024-10-14 14:42:51.044154] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:10.475 [2024-10-14 14:42:51.044164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:10.475 qpair failed and we were unable to recover it. 
00:29:10.475 [2024-10-14 14:42:51.053920] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.475 [2024-10-14 14:42:51.053968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.475 [2024-10-14 14:42:51.053977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.475 [2024-10-14 14:42:51.053982] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.475 [2024-10-14 14:42:51.053987] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:10.475 [2024-10-14 14:42:51.053996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:10.475 qpair failed and we were unable to recover it. 
00:29:10.475 [2024-10-14 14:42:51.064073] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.475 [2024-10-14 14:42:51.064117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.475 [2024-10-14 14:42:51.064127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.475 [2024-10-14 14:42:51.064132] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.475 [2024-10-14 14:42:51.064137] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:10.475 [2024-10-14 14:42:51.064147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:10.475 qpair failed and we were unable to recover it. 
00:29:10.475 [2024-10-14 14:42:51.074104] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.475 [2024-10-14 14:42:51.074148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.475 [2024-10-14 14:42:51.074158] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.475 [2024-10-14 14:42:51.074163] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.475 [2024-10-14 14:42:51.074168] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:10.475 [2024-10-14 14:42:51.074178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:10.475 qpair failed and we were unable to recover it. 
00:29:10.475 [2024-10-14 14:42:51.084138] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.475 [2024-10-14 14:42:51.084189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.475 [2024-10-14 14:42:51.084199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.475 [2024-10-14 14:42:51.084204] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.475 [2024-10-14 14:42:51.084208] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:10.475 [2024-10-14 14:42:51.084218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:10.475 qpair failed and we were unable to recover it. 
00:29:10.475 [2024-10-14 14:42:51.094155] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.475 [2024-10-14 14:42:51.094206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.475 [2024-10-14 14:42:51.094215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.475 [2024-10-14 14:42:51.094220] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.475 [2024-10-14 14:42:51.094230] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:10.475 [2024-10-14 14:42:51.094240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:10.475 qpair failed and we were unable to recover it. 
00:29:10.475 [2024-10-14 14:42:51.104176] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.475 [2024-10-14 14:42:51.104221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.475 [2024-10-14 14:42:51.104232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.475 [2024-10-14 14:42:51.104236] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.475 [2024-10-14 14:42:51.104241] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:10.475 [2024-10-14 14:42:51.104251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:10.475 qpair failed and we were unable to recover it. 
00:29:10.475 [2024-10-14 14:42:51.114188] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.475 [2024-10-14 14:42:51.114233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.475 [2024-10-14 14:42:51.114243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.475 [2024-10-14 14:42:51.114248] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.475 [2024-10-14 14:42:51.114253] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:10.475 [2024-10-14 14:42:51.114263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:10.475 qpair failed and we were unable to recover it. 
00:29:10.475 [2024-10-14 14:42:51.124271] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.475 [2024-10-14 14:42:51.124320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.475 [2024-10-14 14:42:51.124330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.475 [2024-10-14 14:42:51.124335] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.475 [2024-10-14 14:42:51.124339] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:10.475 [2024-10-14 14:42:51.124349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:10.475 qpair failed and we were unable to recover it. 
00:29:10.475 [2024-10-14 14:42:51.134262] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.475 [2024-10-14 14:42:51.134305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.475 [2024-10-14 14:42:51.134315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.475 [2024-10-14 14:42:51.134320] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.475 [2024-10-14 14:42:51.134324] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:10.475 [2024-10-14 14:42:51.134335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:10.475 qpair failed and we were unable to recover it. 
00:29:10.475 [2024-10-14 14:42:51.144285] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.476 [2024-10-14 14:42:51.144329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.476 [2024-10-14 14:42:51.144339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.476 [2024-10-14 14:42:51.144343] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.476 [2024-10-14 14:42:51.144348] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:10.476 [2024-10-14 14:42:51.144358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:10.476 qpair failed and we were unable to recover it. 
00:29:10.476 [2024-10-14 14:42:51.154320] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.476 [2024-10-14 14:42:51.154362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.476 [2024-10-14 14:42:51.154372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.476 [2024-10-14 14:42:51.154377] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.476 [2024-10-14 14:42:51.154381] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:10.476 [2024-10-14 14:42:51.154391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:10.476 qpair failed and we were unable to recover it. 
00:29:10.476 [2024-10-14 14:42:51.164371] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.476 [2024-10-14 14:42:51.164421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.476 [2024-10-14 14:42:51.164431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.476 [2024-10-14 14:42:51.164436] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.476 [2024-10-14 14:42:51.164441] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:10.476 [2024-10-14 14:42:51.164451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:10.476 qpair failed and we were unable to recover it. 
00:29:10.476 [2024-10-14 14:42:51.174354] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.476 [2024-10-14 14:42:51.174402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.476 [2024-10-14 14:42:51.174412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.476 [2024-10-14 14:42:51.174417] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.476 [2024-10-14 14:42:51.174422] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:10.476 [2024-10-14 14:42:51.174432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:10.476 qpair failed and we were unable to recover it. 
00:29:10.476 [2024-10-14 14:42:51.184401] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.476 [2024-10-14 14:42:51.184445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.476 [2024-10-14 14:42:51.184454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.476 [2024-10-14 14:42:51.184462] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.476 [2024-10-14 14:42:51.184466] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:10.476 [2024-10-14 14:42:51.184476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:10.476 qpair failed and we were unable to recover it. 
00:29:10.476 [2024-10-14 14:42:51.194422] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.476 [2024-10-14 14:42:51.194465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.476 [2024-10-14 14:42:51.194476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.476 [2024-10-14 14:42:51.194481] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.476 [2024-10-14 14:42:51.194485] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:10.476 [2024-10-14 14:42:51.194496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:10.476 qpair failed and we were unable to recover it. 
00:29:10.739 [2024-10-14 14:42:51.204493] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.739 [2024-10-14 14:42:51.204547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.739 [2024-10-14 14:42:51.204558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.739 [2024-10-14 14:42:51.204563] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.739 [2024-10-14 14:42:51.204567] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:10.739 [2024-10-14 14:42:51.204577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:10.739 qpair failed and we were unable to recover it. 
00:29:10.739 [2024-10-14 14:42:51.214484] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.739 [2024-10-14 14:42:51.214531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.739 [2024-10-14 14:42:51.214541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.739 [2024-10-14 14:42:51.214546] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.739 [2024-10-14 14:42:51.214550] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:10.739 [2024-10-14 14:42:51.214560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:10.739 qpair failed and we were unable to recover it. 
00:29:10.739 [2024-10-14 14:42:51.224478] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.739 [2024-10-14 14:42:51.224520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.739 [2024-10-14 14:42:51.224530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.739 [2024-10-14 14:42:51.224534] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.739 [2024-10-14 14:42:51.224539] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:10.739 [2024-10-14 14:42:51.224548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:10.739 qpair failed and we were unable to recover it. 
00:29:10.739 [2024-10-14 14:42:51.234526] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.739 [2024-10-14 14:42:51.234609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.739 [2024-10-14 14:42:51.234619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.739 [2024-10-14 14:42:51.234623] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.739 [2024-10-14 14:42:51.234628] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:10.739 [2024-10-14 14:42:51.234638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:10.739 qpair failed and we were unable to recover it. 
00:29:10.739 [2024-10-14 14:42:51.244595] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.739 [2024-10-14 14:42:51.244643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.739 [2024-10-14 14:42:51.244653] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.740 [2024-10-14 14:42:51.244657] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.740 [2024-10-14 14:42:51.244662] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:10.740 [2024-10-14 14:42:51.244672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:10.740 qpair failed and we were unable to recover it. 
00:29:10.740 [2024-10-14 14:42:51.254603] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.740 [2024-10-14 14:42:51.254683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.740 [2024-10-14 14:42:51.254692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.740 [2024-10-14 14:42:51.254697] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.740 [2024-10-14 14:42:51.254702] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:10.740 [2024-10-14 14:42:51.254711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:10.740 qpair failed and we were unable to recover it. 
00:29:10.740 [2024-10-14 14:42:51.264618] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.740 [2024-10-14 14:42:51.264662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.740 [2024-10-14 14:42:51.264672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.740 [2024-10-14 14:42:51.264676] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.740 [2024-10-14 14:42:51.264681] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:10.740 [2024-10-14 14:42:51.264691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:10.740 qpair failed and we were unable to recover it. 
00:29:10.740 [2024-10-14 14:42:51.274593] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.740 [2024-10-14 14:42:51.274636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.740 [2024-10-14 14:42:51.274649] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.740 [2024-10-14 14:42:51.274654] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.740 [2024-10-14 14:42:51.274658] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:10.740 [2024-10-14 14:42:51.274668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:10.740 qpair failed and we were unable to recover it. 
00:29:10.740 [2024-10-14 14:42:51.284719] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.740 [2024-10-14 14:42:51.284766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.740 [2024-10-14 14:42:51.284777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.740 [2024-10-14 14:42:51.284782] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.740 [2024-10-14 14:42:51.284786] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:10.740 [2024-10-14 14:42:51.284796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:10.740 qpair failed and we were unable to recover it. 
00:29:10.740 [2024-10-14 14:42:51.294696] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.740 [2024-10-14 14:42:51.294751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.740 [2024-10-14 14:42:51.294770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.740 [2024-10-14 14:42:51.294776] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.740 [2024-10-14 14:42:51.294781] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:10.740 [2024-10-14 14:42:51.294796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:10.740 qpair failed and we were unable to recover it. 
00:29:10.740 [2024-10-14 14:42:51.304779] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.740 [2024-10-14 14:42:51.304848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.740 [2024-10-14 14:42:51.304867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.740 [2024-10-14 14:42:51.304873] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.740 [2024-10-14 14:42:51.304878] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:10.740 [2024-10-14 14:42:51.304891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:10.740 qpair failed and we were unable to recover it. 
00:29:10.740 [2024-10-14 14:42:51.314723] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.740 [2024-10-14 14:42:51.314772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.740 [2024-10-14 14:42:51.314791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.740 [2024-10-14 14:42:51.314797] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.740 [2024-10-14 14:42:51.314802] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:10.740 [2024-10-14 14:42:51.314816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:10.740 qpair failed and we were unable to recover it. 
00:29:10.740 [2024-10-14 14:42:51.324869] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.740 [2024-10-14 14:42:51.324920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.740 [2024-10-14 14:42:51.324932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.740 [2024-10-14 14:42:51.324937] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.740 [2024-10-14 14:42:51.324941] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:10.740 [2024-10-14 14:42:51.324952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:10.740 qpair failed and we were unable to recover it. 
00:29:10.740 [2024-10-14 14:42:51.334777] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.740 [2024-10-14 14:42:51.334862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.740 [2024-10-14 14:42:51.334872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.740 [2024-10-14 14:42:51.334877] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.740 [2024-10-14 14:42:51.334882] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:10.740 [2024-10-14 14:42:51.334892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:10.740 qpair failed and we were unable to recover it. 
00:29:10.740 [2024-10-14 14:42:51.344833] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.740 [2024-10-14 14:42:51.344878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.740 [2024-10-14 14:42:51.344888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.740 [2024-10-14 14:42:51.344892] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.740 [2024-10-14 14:42:51.344897] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:10.740 [2024-10-14 14:42:51.344907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:10.740 qpair failed and we were unable to recover it. 
00:29:10.740 [2024-10-14 14:42:51.354766] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.740 [2024-10-14 14:42:51.354806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.740 [2024-10-14 14:42:51.354816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.740 [2024-10-14 14:42:51.354821] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.740 [2024-10-14 14:42:51.354825] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:10.740 [2024-10-14 14:42:51.354835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:10.740 qpair failed and we were unable to recover it. 
00:29:10.740 [2024-10-14 14:42:51.364798] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.740 [2024-10-14 14:42:51.364847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.740 [2024-10-14 14:42:51.364860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.740 [2024-10-14 14:42:51.364866] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.740 [2024-10-14 14:42:51.364871] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:10.740 [2024-10-14 14:42:51.364882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:10.740 qpair failed and we were unable to recover it. 
00:29:10.740 [2024-10-14 14:42:51.374933] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.740 [2024-10-14 14:42:51.374977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.740 [2024-10-14 14:42:51.374988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.740 [2024-10-14 14:42:51.374993] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.740 [2024-10-14 14:42:51.374997] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:10.740 [2024-10-14 14:42:51.375007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:10.740 qpair failed and we were unable to recover it. 
00:29:10.740 [2024-10-14 14:42:51.384955] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.740 [2024-10-14 14:42:51.385024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.740 [2024-10-14 14:42:51.385034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.740 [2024-10-14 14:42:51.385039] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.740 [2024-10-14 14:42:51.385043] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:10.741 [2024-10-14 14:42:51.385053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:10.741 qpair failed and we were unable to recover it. 
00:29:10.741 [2024-10-14 14:42:51.395009] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.741 [2024-10-14 14:42:51.395052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.741 [2024-10-14 14:42:51.395065] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.741 [2024-10-14 14:42:51.395070] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.741 [2024-10-14 14:42:51.395074] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:10.741 [2024-10-14 14:42:51.395084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:10.741 qpair failed and we were unable to recover it. 
00:29:10.741 [2024-10-14 14:42:51.405045] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.741 [2024-10-14 14:42:51.405100] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.741 [2024-10-14 14:42:51.405110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.741 [2024-10-14 14:42:51.405116] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.741 [2024-10-14 14:42:51.405120] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:10.741 [2024-10-14 14:42:51.405133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:10.741 qpair failed and we were unable to recover it. 
00:29:10.741 [2024-10-14 14:42:51.415023] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.741 [2024-10-14 14:42:51.415086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.741 [2024-10-14 14:42:51.415096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.741 [2024-10-14 14:42:51.415101] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.741 [2024-10-14 14:42:51.415106] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:10.741 [2024-10-14 14:42:51.415116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:10.741 qpair failed and we were unable to recover it. 
00:29:10.741 [2024-10-14 14:42:51.425042] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.741 [2024-10-14 14:42:51.425086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.741 [2024-10-14 14:42:51.425097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.741 [2024-10-14 14:42:51.425102] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.741 [2024-10-14 14:42:51.425106] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:10.741 [2024-10-14 14:42:51.425116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:10.741 qpair failed and we were unable to recover it. 
00:29:10.741 [2024-10-14 14:42:51.435035] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.741 [2024-10-14 14:42:51.435080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.741 [2024-10-14 14:42:51.435090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.741 [2024-10-14 14:42:51.435095] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.741 [2024-10-14 14:42:51.435099] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:10.741 [2024-10-14 14:42:51.435109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:10.741 qpair failed and we were unable to recover it. 
00:29:10.741 [2024-10-14 14:42:51.445169] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.741 [2024-10-14 14:42:51.445250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.741 [2024-10-14 14:42:51.445260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.741 [2024-10-14 14:42:51.445265] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.741 [2024-10-14 14:42:51.445269] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:10.741 [2024-10-14 14:42:51.445279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:10.741 qpair failed and we were unable to recover it. 
00:29:10.741 [2024-10-14 14:42:51.455158] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.741 [2024-10-14 14:42:51.455244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.741 [2024-10-14 14:42:51.455258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.741 [2024-10-14 14:42:51.455263] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.741 [2024-10-14 14:42:51.455267] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:10.741 [2024-10-14 14:42:51.455277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:10.741 qpair failed and we were unable to recover it. 
00:29:10.741 [2024-10-14 14:42:51.465060] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:10.741 [2024-10-14 14:42:51.465107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:10.741 [2024-10-14 14:42:51.465118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:10.741 [2024-10-14 14:42:51.465123] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:10.741 [2024-10-14 14:42:51.465127] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:10.741 [2024-10-14 14:42:51.465137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:10.741 qpair failed and we were unable to recover it. 
00:29:11.004 [2024-10-14 14:42:51.475195] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.004 [2024-10-14 14:42:51.475237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.004 [2024-10-14 14:42:51.475247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.004 [2024-10-14 14:42:51.475252] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.004 [2024-10-14 14:42:51.475256] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:11.004 [2024-10-14 14:42:51.475266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.004 qpair failed and we were unable to recover it. 
00:29:11.004 [2024-10-14 14:42:51.485268] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.004 [2024-10-14 14:42:51.485316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.004 [2024-10-14 14:42:51.485326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.004 [2024-10-14 14:42:51.485331] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.004 [2024-10-14 14:42:51.485335] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:11.004 [2024-10-14 14:42:51.485345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.004 qpair failed and we were unable to recover it. 
00:29:11.004 [2024-10-14 14:42:51.495275] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.004 [2024-10-14 14:42:51.495324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.004 [2024-10-14 14:42:51.495334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.004 [2024-10-14 14:42:51.495339] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.004 [2024-10-14 14:42:51.495346] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:11.004 [2024-10-14 14:42:51.495356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.004 qpair failed and we were unable to recover it. 
00:29:11.004 [2024-10-14 14:42:51.505281] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.004 [2024-10-14 14:42:51.505320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.004 [2024-10-14 14:42:51.505329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.004 [2024-10-14 14:42:51.505334] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.004 [2024-10-14 14:42:51.505339] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:11.004 [2024-10-14 14:42:51.505348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.004 qpair failed and we were unable to recover it. 
00:29:11.004 [2024-10-14 14:42:51.515284] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.004 [2024-10-14 14:42:51.515329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.004 [2024-10-14 14:42:51.515339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.004 [2024-10-14 14:42:51.515344] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.004 [2024-10-14 14:42:51.515349] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:11.004 [2024-10-14 14:42:51.515358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.004 qpair failed and we were unable to recover it. 
00:29:11.004 [2024-10-14 14:42:51.525372] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.004 [2024-10-14 14:42:51.525423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.004 [2024-10-14 14:42:51.525432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.004 [2024-10-14 14:42:51.525437] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.004 [2024-10-14 14:42:51.525441] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:11.004 [2024-10-14 14:42:51.525451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.004 qpair failed and we were unable to recover it. 
00:29:11.004 [2024-10-14 14:42:51.535349] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.005 [2024-10-14 14:42:51.535399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.005 [2024-10-14 14:42:51.535408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.005 [2024-10-14 14:42:51.535413] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.005 [2024-10-14 14:42:51.535417] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:11.005 [2024-10-14 14:42:51.535427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.005 qpair failed and we were unable to recover it. 
00:29:11.005 [2024-10-14 14:42:51.545375] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.005 [2024-10-14 14:42:51.545424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.005 [2024-10-14 14:42:51.545433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.005 [2024-10-14 14:42:51.545438] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.005 [2024-10-14 14:42:51.545443] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:11.005 [2024-10-14 14:42:51.545452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.005 qpair failed and we were unable to recover it. 
00:29:11.005 [2024-10-14 14:42:51.555425] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.005 [2024-10-14 14:42:51.555466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.005 [2024-10-14 14:42:51.555476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.005 [2024-10-14 14:42:51.555481] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.005 [2024-10-14 14:42:51.555485] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:11.005 [2024-10-14 14:42:51.555494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.005 qpair failed and we were unable to recover it. 
00:29:11.005 [2024-10-14 14:42:51.565478] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.005 [2024-10-14 14:42:51.565561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.005 [2024-10-14 14:42:51.565571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.005 [2024-10-14 14:42:51.565576] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.005 [2024-10-14 14:42:51.565580] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:11.005 [2024-10-14 14:42:51.565590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.005 qpair failed and we were unable to recover it. 
00:29:11.005 [2024-10-14 14:42:51.575525] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.005 [2024-10-14 14:42:51.575600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.005 [2024-10-14 14:42:51.575610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.005 [2024-10-14 14:42:51.575615] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.005 [2024-10-14 14:42:51.575619] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:11.005 [2024-10-14 14:42:51.575629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.005 qpair failed and we were unable to recover it. 
00:29:11.005 [2024-10-14 14:42:51.585357] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.005 [2024-10-14 14:42:51.585399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.005 [2024-10-14 14:42:51.585409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.005 [2024-10-14 14:42:51.585414] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.005 [2024-10-14 14:42:51.585421] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:11.005 [2024-10-14 14:42:51.585431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.005 qpair failed and we were unable to recover it. 
00:29:11.005 [2024-10-14 14:42:51.595380] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.005 [2024-10-14 14:42:51.595424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.005 [2024-10-14 14:42:51.595434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.005 [2024-10-14 14:42:51.595439] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.005 [2024-10-14 14:42:51.595443] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:11.005 [2024-10-14 14:42:51.595453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.005 qpair failed and we were unable to recover it. 
00:29:11.005 [2024-10-14 14:42:51.605602] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.005 [2024-10-14 14:42:51.605649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.005 [2024-10-14 14:42:51.605658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.005 [2024-10-14 14:42:51.605663] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.005 [2024-10-14 14:42:51.605668] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:11.005 [2024-10-14 14:42:51.605677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.005 qpair failed and we were unable to recover it. 
00:29:11.005 [2024-10-14 14:42:51.615601] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.005 [2024-10-14 14:42:51.615649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.005 [2024-10-14 14:42:51.615658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.005 [2024-10-14 14:42:51.615663] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.005 [2024-10-14 14:42:51.615668] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:11.005 [2024-10-14 14:42:51.615678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.005 qpair failed and we were unable to recover it. 
00:29:11.005 [2024-10-14 14:42:51.625603] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.005 [2024-10-14 14:42:51.625645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.005 [2024-10-14 14:42:51.625655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.005 [2024-10-14 14:42:51.625660] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.005 [2024-10-14 14:42:51.625664] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:11.005 [2024-10-14 14:42:51.625674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.005 qpair failed and we were unable to recover it. 
00:29:11.005 [2024-10-14 14:42:51.635620] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.005 [2024-10-14 14:42:51.635675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.005 [2024-10-14 14:42:51.635685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.005 [2024-10-14 14:42:51.635690] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.005 [2024-10-14 14:42:51.635694] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:11.005 [2024-10-14 14:42:51.635704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.005 qpair failed and we were unable to recover it. 
00:29:11.005 [2024-10-14 14:42:51.645710] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.005 [2024-10-14 14:42:51.645762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.005 [2024-10-14 14:42:51.645771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.005 [2024-10-14 14:42:51.645776] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.005 [2024-10-14 14:42:51.645780] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:11.005 [2024-10-14 14:42:51.645790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.005 qpair failed and we were unable to recover it. 
00:29:11.005 [2024-10-14 14:42:51.655572] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.005 [2024-10-14 14:42:51.655621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.005 [2024-10-14 14:42:51.655631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.005 [2024-10-14 14:42:51.655636] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.005 [2024-10-14 14:42:51.655640] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:11.005 [2024-10-14 14:42:51.655649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.005 qpair failed and we were unable to recover it. 
00:29:11.005 [2024-10-14 14:42:51.665721] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.005 [2024-10-14 14:42:51.665766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.005 [2024-10-14 14:42:51.665776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.005 [2024-10-14 14:42:51.665781] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.005 [2024-10-14 14:42:51.665785] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:11.005 [2024-10-14 14:42:51.665795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.005 qpair failed and we were unable to recover it. 
00:29:11.005 [2024-10-14 14:42:51.675739] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.005 [2024-10-14 14:42:51.675781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.006 [2024-10-14 14:42:51.675791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.006 [2024-10-14 14:42:51.675798] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.006 [2024-10-14 14:42:51.675803] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:11.006 [2024-10-14 14:42:51.675812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.006 qpair failed and we were unable to recover it. 
00:29:11.006 [2024-10-14 14:42:51.685812] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.006 [2024-10-14 14:42:51.685906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.006 [2024-10-14 14:42:51.685922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.006 [2024-10-14 14:42:51.685927] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.006 [2024-10-14 14:42:51.685932] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:11.006 [2024-10-14 14:42:51.685945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.006 qpair failed and we were unable to recover it. 
00:29:11.006 [2024-10-14 14:42:51.695825] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.006 [2024-10-14 14:42:51.695869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.006 [2024-10-14 14:42:51.695879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.006 [2024-10-14 14:42:51.695884] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.006 [2024-10-14 14:42:51.695888] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:11.006 [2024-10-14 14:42:51.695898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.006 qpair failed and we were unable to recover it. 
00:29:11.006 [2024-10-14 14:42:51.705849] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.006 [2024-10-14 14:42:51.705939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.006 [2024-10-14 14:42:51.705949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.006 [2024-10-14 14:42:51.705954] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.006 [2024-10-14 14:42:51.705958] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:11.006 [2024-10-14 14:42:51.705968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.006 qpair failed and we were unable to recover it. 
00:29:11.006 [2024-10-14 14:42:51.715819] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.006 [2024-10-14 14:42:51.715860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.006 [2024-10-14 14:42:51.715870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.006 [2024-10-14 14:42:51.715875] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.006 [2024-10-14 14:42:51.715879] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:11.006 [2024-10-14 14:42:51.715889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.006 qpair failed and we were unable to recover it. 
00:29:11.006 [2024-10-14 14:42:51.725925] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.006 [2024-10-14 14:42:51.725976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.006 [2024-10-14 14:42:51.725985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.006 [2024-10-14 14:42:51.725990] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.006 [2024-10-14 14:42:51.725994] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:11.006 [2024-10-14 14:42:51.726004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.006 qpair failed and we were unable to recover it. 
00:29:11.268 [2024-10-14 14:42:51.735929] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.268 [2024-10-14 14:42:51.736008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.268 [2024-10-14 14:42:51.736017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.268 [2024-10-14 14:42:51.736022] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.268 [2024-10-14 14:42:51.736027] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:11.268 [2024-10-14 14:42:51.736036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.268 qpair failed and we were unable to recover it. 
00:29:11.269 [2024-10-14 14:42:51.745944] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.269 [2024-10-14 14:42:51.745985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.269 [2024-10-14 14:42:51.745995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.269 [2024-10-14 14:42:51.746000] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.269 [2024-10-14 14:42:51.746004] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:11.269 [2024-10-14 14:42:51.746014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.269 qpair failed and we were unable to recover it. 
00:29:11.269 [2024-10-14 14:42:51.755968] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.269 [2024-10-14 14:42:51.756015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.269 [2024-10-14 14:42:51.756026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.269 [2024-10-14 14:42:51.756031] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.269 [2024-10-14 14:42:51.756035] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:11.269 [2024-10-14 14:42:51.756045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.269 qpair failed and we were unable to recover it. 
00:29:11.269 [2024-10-14 14:42:51.766043] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.269 [2024-10-14 14:42:51.766099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.269 [2024-10-14 14:42:51.766109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.269 [2024-10-14 14:42:51.766116] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.269 [2024-10-14 14:42:51.766120] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:11.269 [2024-10-14 14:42:51.766130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.269 qpair failed and we were unable to recover it. 
00:29:11.269 [2024-10-14 14:42:51.775902] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.269 [2024-10-14 14:42:51.775954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.269 [2024-10-14 14:42:51.775964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.269 [2024-10-14 14:42:51.775969] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.269 [2024-10-14 14:42:51.775974] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:11.269 [2024-10-14 14:42:51.775983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:11.269 qpair failed and we were unable to recover it.
00:29:11.269 [2024-10-14 14:42:51.786049] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.269 [2024-10-14 14:42:51.786140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.269 [2024-10-14 14:42:51.786150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.269 [2024-10-14 14:42:51.786155] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.269 [2024-10-14 14:42:51.786159] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:11.269 [2024-10-14 14:42:51.786169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:11.269 qpair failed and we were unable to recover it.
00:29:11.269 [2024-10-14 14:42:51.795936] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.269 [2024-10-14 14:42:51.795978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.269 [2024-10-14 14:42:51.795988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.269 [2024-10-14 14:42:51.795993] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.269 [2024-10-14 14:42:51.795997] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:11.269 [2024-10-14 14:42:51.796007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:11.269 qpair failed and we were unable to recover it.
00:29:11.269 [2024-10-14 14:42:51.806149] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.269 [2024-10-14 14:42:51.806197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.269 [2024-10-14 14:42:51.806207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.269 [2024-10-14 14:42:51.806212] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.269 [2024-10-14 14:42:51.806217] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:11.269 [2024-10-14 14:42:51.806227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:11.269 qpair failed and we were unable to recover it.
00:29:11.269 [2024-10-14 14:42:51.816114] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.269 [2024-10-14 14:42:51.816157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.269 [2024-10-14 14:42:51.816167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.269 [2024-10-14 14:42:51.816172] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.269 [2024-10-14 14:42:51.816176] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:11.269 [2024-10-14 14:42:51.816186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:11.269 qpair failed and we were unable to recover it.
00:29:11.269 [2024-10-14 14:42:51.826163] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.269 [2024-10-14 14:42:51.826205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.269 [2024-10-14 14:42:51.826214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.269 [2024-10-14 14:42:51.826220] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.269 [2024-10-14 14:42:51.826224] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:11.269 [2024-10-14 14:42:51.826234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:11.269 qpair failed and we were unable to recover it.
00:29:11.269 [2024-10-14 14:42:51.836161] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.269 [2024-10-14 14:42:51.836204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.269 [2024-10-14 14:42:51.836213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.269 [2024-10-14 14:42:51.836218] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.269 [2024-10-14 14:42:51.836222] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:11.269 [2024-10-14 14:42:51.836232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:11.269 qpair failed and we were unable to recover it.
00:29:11.269 [2024-10-14 14:42:51.846214] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.269 [2024-10-14 14:42:51.846301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.269 [2024-10-14 14:42:51.846310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.269 [2024-10-14 14:42:51.846315] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.269 [2024-10-14 14:42:51.846320] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:11.269 [2024-10-14 14:42:51.846329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:11.269 qpair failed and we were unable to recover it.
00:29:11.269 [2024-10-14 14:42:51.856252] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.269 [2024-10-14 14:42:51.856301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.269 [2024-10-14 14:42:51.856313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.269 [2024-10-14 14:42:51.856318] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.269 [2024-10-14 14:42:51.856323] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:11.269 [2024-10-14 14:42:51.856332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:11.269 qpair failed and we were unable to recover it.
00:29:11.269 [2024-10-14 14:42:51.866263] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.269 [2024-10-14 14:42:51.866303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.269 [2024-10-14 14:42:51.866314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.269 [2024-10-14 14:42:51.866319] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.269 [2024-10-14 14:42:51.866325] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:11.269 [2024-10-14 14:42:51.866335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:11.269 qpair failed and we were unable to recover it.
00:29:11.269 [2024-10-14 14:42:51.876265] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.269 [2024-10-14 14:42:51.876305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.269 [2024-10-14 14:42:51.876314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.269 [2024-10-14 14:42:51.876319] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.269 [2024-10-14 14:42:51.876323] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:11.270 [2024-10-14 14:42:51.876333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:11.270 qpair failed and we were unable to recover it.
00:29:11.270 [2024-10-14 14:42:51.886279] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.270 [2024-10-14 14:42:51.886328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.270 [2024-10-14 14:42:51.886337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.270 [2024-10-14 14:42:51.886342] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.270 [2024-10-14 14:42:51.886346] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:11.270 [2024-10-14 14:42:51.886356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:11.270 qpair failed and we were unable to recover it.
00:29:11.270 [2024-10-14 14:42:51.896334] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.270 [2024-10-14 14:42:51.896382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.270 [2024-10-14 14:42:51.896391] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.270 [2024-10-14 14:42:51.896396] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.270 [2024-10-14 14:42:51.896400] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:11.270 [2024-10-14 14:42:51.896413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:11.270 qpair failed and we were unable to recover it.
00:29:11.270 [2024-10-14 14:42:51.906357] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.270 [2024-10-14 14:42:51.906405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.270 [2024-10-14 14:42:51.906414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.270 [2024-10-14 14:42:51.906419] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.270 [2024-10-14 14:42:51.906424] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:11.270 [2024-10-14 14:42:51.906433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:11.270 qpair failed and we were unable to recover it.
00:29:11.270 [2024-10-14 14:42:51.916397] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.270 [2024-10-14 14:42:51.916441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.270 [2024-10-14 14:42:51.916450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.270 [2024-10-14 14:42:51.916455] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.270 [2024-10-14 14:42:51.916460] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:11.270 [2024-10-14 14:42:51.916469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:11.270 qpair failed and we were unable to recover it.
00:29:11.270 [2024-10-14 14:42:51.926483] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.270 [2024-10-14 14:42:51.926549] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.270 [2024-10-14 14:42:51.926558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.270 [2024-10-14 14:42:51.926563] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.270 [2024-10-14 14:42:51.926568] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:11.270 [2024-10-14 14:42:51.926577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:11.270 qpair failed and we were unable to recover it.
00:29:11.270 [2024-10-14 14:42:51.936486] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.270 [2024-10-14 14:42:51.936533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.270 [2024-10-14 14:42:51.936543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.270 [2024-10-14 14:42:51.936548] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.270 [2024-10-14 14:42:51.936552] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:11.270 [2024-10-14 14:42:51.936561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:11.270 qpair failed and we were unable to recover it.
00:29:11.270 [2024-10-14 14:42:51.946497] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.270 [2024-10-14 14:42:51.946540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.270 [2024-10-14 14:42:51.946551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.270 [2024-10-14 14:42:51.946556] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.270 [2024-10-14 14:42:51.946561] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:11.270 [2024-10-14 14:42:51.946570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:11.270 qpair failed and we were unable to recover it.
00:29:11.270 [2024-10-14 14:42:51.956523] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.270 [2024-10-14 14:42:51.956580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.270 [2024-10-14 14:42:51.956590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.270 [2024-10-14 14:42:51.956595] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.270 [2024-10-14 14:42:51.956599] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:11.270 [2024-10-14 14:42:51.956609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:11.270 qpair failed and we were unable to recover it.
00:29:11.270 [2024-10-14 14:42:51.966575] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.270 [2024-10-14 14:42:51.966625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.270 [2024-10-14 14:42:51.966635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.270 [2024-10-14 14:42:51.966640] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.270 [2024-10-14 14:42:51.966644] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:11.270 [2024-10-14 14:42:51.966654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:11.270 qpair failed and we were unable to recover it.
00:29:11.270 [2024-10-14 14:42:51.976577] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.270 [2024-10-14 14:42:51.976621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.270 [2024-10-14 14:42:51.976631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.270 [2024-10-14 14:42:51.976636] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.270 [2024-10-14 14:42:51.976640] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:11.270 [2024-10-14 14:42:51.976650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:11.270 qpair failed and we were unable to recover it.
00:29:11.270 [2024-10-14 14:42:51.986595] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.270 [2024-10-14 14:42:51.986676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.270 [2024-10-14 14:42:51.986686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.270 [2024-10-14 14:42:51.986691] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.270 [2024-10-14 14:42:51.986696] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:11.270 [2024-10-14 14:42:51.986708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:11.270 qpair failed and we were unable to recover it.
00:29:11.270 [2024-10-14 14:42:51.996639] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.270 [2024-10-14 14:42:51.996681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.270 [2024-10-14 14:42:51.996691] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.270 [2024-10-14 14:42:51.996696] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.270 [2024-10-14 14:42:51.996701] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:11.270 [2024-10-14 14:42:51.996710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:11.270 qpair failed and we were unable to recover it.
00:29:11.533 [2024-10-14 14:42:52.006705] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.533 [2024-10-14 14:42:52.006753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.533 [2024-10-14 14:42:52.006763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.533 [2024-10-14 14:42:52.006768] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.533 [2024-10-14 14:42:52.006772] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:11.533 [2024-10-14 14:42:52.006782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:11.533 qpair failed and we were unable to recover it.
00:29:11.533 [2024-10-14 14:42:52.016692] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.533 [2024-10-14 14:42:52.016752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.533 [2024-10-14 14:42:52.016763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.533 [2024-10-14 14:42:52.016768] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.533 [2024-10-14 14:42:52.016773] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:11.533 [2024-10-14 14:42:52.016783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:11.533 qpair failed and we were unable to recover it.
00:29:11.533 [2024-10-14 14:42:52.026596] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.533 [2024-10-14 14:42:52.026640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.533 [2024-10-14 14:42:52.026651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.533 [2024-10-14 14:42:52.026656] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.533 [2024-10-14 14:42:52.026660] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:11.533 [2024-10-14 14:42:52.026671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:11.533 qpair failed and we were unable to recover it.
00:29:11.533 [2024-10-14 14:42:52.036750] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.533 [2024-10-14 14:42:52.036798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.533 [2024-10-14 14:42:52.036808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.533 [2024-10-14 14:42:52.036813] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.533 [2024-10-14 14:42:52.036817] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:11.533 [2024-10-14 14:42:52.036827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:11.533 qpair failed and we were unable to recover it.
00:29:11.533 [2024-10-14 14:42:52.046825] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.533 [2024-10-14 14:42:52.046877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.533 [2024-10-14 14:42:52.046887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.533 [2024-10-14 14:42:52.046892] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.533 [2024-10-14 14:42:52.046896] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:11.533 [2024-10-14 14:42:52.046905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:11.533 qpair failed and we were unable to recover it.
00:29:11.533 [2024-10-14 14:42:52.056825] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.533 [2024-10-14 14:42:52.056872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.533 [2024-10-14 14:42:52.056882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.533 [2024-10-14 14:42:52.056886] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.533 [2024-10-14 14:42:52.056891] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:11.533 [2024-10-14 14:42:52.056900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:11.533 qpair failed and we were unable to recover it.
00:29:11.533 [2024-10-14 14:42:52.066823] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.533 [2024-10-14 14:42:52.066867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.533 [2024-10-14 14:42:52.066877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.533 [2024-10-14 14:42:52.066882] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.533 [2024-10-14 14:42:52.066886] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:11.533 [2024-10-14 14:42:52.066896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:11.533 qpair failed and we were unable to recover it.
00:29:11.533 [2024-10-14 14:42:52.076895] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.533 [2024-10-14 14:42:52.076936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.533 [2024-10-14 14:42:52.076946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.533 [2024-10-14 14:42:52.076950] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.533 [2024-10-14 14:42:52.076958] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:11.533 [2024-10-14 14:42:52.076968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:11.533 qpair failed and we were unable to recover it.
00:29:11.533 [2024-10-14 14:42:52.086916] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.533 [2024-10-14 14:42:52.086966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.533 [2024-10-14 14:42:52.086975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.533 [2024-10-14 14:42:52.086980] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.534 [2024-10-14 14:42:52.086984] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:11.534 [2024-10-14 14:42:52.086994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:11.534 qpair failed and we were unable to recover it.
00:29:11.534 [2024-10-14 14:42:52.096911] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.534 [2024-10-14 14:42:52.096960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.534 [2024-10-14 14:42:52.096969] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.534 [2024-10-14 14:42:52.096974] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.534 [2024-10-14 14:42:52.096978] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:11.534 [2024-10-14 14:42:52.096988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:11.534 qpair failed and we were unable to recover it.
00:29:11.534 [2024-10-14 14:42:52.106951] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.534 [2024-10-14 14:42:52.106995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.534 [2024-10-14 14:42:52.107005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.534 [2024-10-14 14:42:52.107010] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.534 [2024-10-14 14:42:52.107014] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:11.534 [2024-10-14 14:42:52.107023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:11.534 qpair failed and we were unable to recover it.
00:29:11.534 [2024-10-14 14:42:52.116968] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.534 [2024-10-14 14:42:52.117012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.534 [2024-10-14 14:42:52.117021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.534 [2024-10-14 14:42:52.117026] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.534 [2024-10-14 14:42:52.117031] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:11.534 [2024-10-14 14:42:52.117040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:11.534 qpair failed and we were unable to recover it.
00:29:11.534 [2024-10-14 14:42:52.127090] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:11.534 [2024-10-14 14:42:52.127146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:11.534 [2024-10-14 14:42:52.127156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:11.534 [2024-10-14 14:42:52.127161] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:11.534 [2024-10-14 14:42:52.127165] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:11.534 [2024-10-14 14:42:52.127175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:11.534 qpair failed and we were unable to recover it.
00:29:11.534 [2024-10-14 14:42:52.137015] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.534 [2024-10-14 14:42:52.137059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.534 [2024-10-14 14:42:52.137071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.534 [2024-10-14 14:42:52.137076] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.534 [2024-10-14 14:42:52.137080] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:11.534 [2024-10-14 14:42:52.137090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.534 qpair failed and we were unable to recover it. 
00:29:11.534 [2024-10-14 14:42:52.147044] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.534 [2024-10-14 14:42:52.147088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.534 [2024-10-14 14:42:52.147097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.534 [2024-10-14 14:42:52.147102] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.534 [2024-10-14 14:42:52.147107] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:11.534 [2024-10-14 14:42:52.147116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.534 qpair failed and we were unable to recover it. 
00:29:11.534 [2024-10-14 14:42:52.157044] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.534 [2024-10-14 14:42:52.157088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.534 [2024-10-14 14:42:52.157098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.534 [2024-10-14 14:42:52.157103] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.534 [2024-10-14 14:42:52.157108] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:11.534 [2024-10-14 14:42:52.157117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.534 qpair failed and we were unable to recover it. 
00:29:11.534 [2024-10-14 14:42:52.167012] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.534 [2024-10-14 14:42:52.167059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.534 [2024-10-14 14:42:52.167071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.534 [2024-10-14 14:42:52.167079] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.534 [2024-10-14 14:42:52.167084] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:11.534 [2024-10-14 14:42:52.167093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.534 qpair failed and we were unable to recover it. 
00:29:11.534 [2024-10-14 14:42:52.177000] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.534 [2024-10-14 14:42:52.177045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.534 [2024-10-14 14:42:52.177055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.534 [2024-10-14 14:42:52.177060] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.534 [2024-10-14 14:42:52.177066] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:11.534 [2024-10-14 14:42:52.177076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.534 qpair failed and we were unable to recover it. 
00:29:11.534 [2024-10-14 14:42:52.187155] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.534 [2024-10-14 14:42:52.187198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.534 [2024-10-14 14:42:52.187208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.534 [2024-10-14 14:42:52.187212] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.534 [2024-10-14 14:42:52.187217] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:11.534 [2024-10-14 14:42:52.187227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.534 qpair failed and we were unable to recover it. 
00:29:11.534 [2024-10-14 14:42:52.197202] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.534 [2024-10-14 14:42:52.197246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.534 [2024-10-14 14:42:52.197256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.534 [2024-10-14 14:42:52.197261] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.534 [2024-10-14 14:42:52.197265] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:11.534 [2024-10-14 14:42:52.197274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.534 qpair failed and we were unable to recover it. 
00:29:11.534 [2024-10-14 14:42:52.207268] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.534 [2024-10-14 14:42:52.207324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.534 [2024-10-14 14:42:52.207333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.534 [2024-10-14 14:42:52.207338] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.534 [2024-10-14 14:42:52.207342] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:11.534 [2024-10-14 14:42:52.207351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.534 qpair failed and we were unable to recover it. 
00:29:11.534 [2024-10-14 14:42:52.217273] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.534 [2024-10-14 14:42:52.217318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.534 [2024-10-14 14:42:52.217327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.534 [2024-10-14 14:42:52.217332] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.534 [2024-10-14 14:42:52.217337] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:11.534 [2024-10-14 14:42:52.217346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.534 qpair failed and we were unable to recover it. 
00:29:11.534 [2024-10-14 14:42:52.227269] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.534 [2024-10-14 14:42:52.227311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.534 [2024-10-14 14:42:52.227321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.534 [2024-10-14 14:42:52.227326] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.534 [2024-10-14 14:42:52.227331] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:11.534 [2024-10-14 14:42:52.227340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.534 qpair failed and we were unable to recover it. 
00:29:11.535 [2024-10-14 14:42:52.237149] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.535 [2024-10-14 14:42:52.237192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.535 [2024-10-14 14:42:52.237202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.535 [2024-10-14 14:42:52.237207] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.535 [2024-10-14 14:42:52.237212] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:11.535 [2024-10-14 14:42:52.237222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.535 qpair failed and we were unable to recover it. 
00:29:11.535 [2024-10-14 14:42:52.247346] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.535 [2024-10-14 14:42:52.247394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.535 [2024-10-14 14:42:52.247404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.535 [2024-10-14 14:42:52.247409] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.535 [2024-10-14 14:42:52.247413] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:11.535 [2024-10-14 14:42:52.247423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.535 qpair failed and we were unable to recover it. 
00:29:11.535 [2024-10-14 14:42:52.257358] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.535 [2024-10-14 14:42:52.257404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.535 [2024-10-14 14:42:52.257415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.535 [2024-10-14 14:42:52.257422] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.535 [2024-10-14 14:42:52.257426] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:11.535 [2024-10-14 14:42:52.257436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.535 qpair failed and we were unable to recover it. 
00:29:11.797 [2024-10-14 14:42:52.267346] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.797 [2024-10-14 14:42:52.267387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.797 [2024-10-14 14:42:52.267396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.797 [2024-10-14 14:42:52.267401] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.797 [2024-10-14 14:42:52.267405] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:11.797 [2024-10-14 14:42:52.267415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.797 qpair failed and we were unable to recover it. 
00:29:11.797 [2024-10-14 14:42:52.277263] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.797 [2024-10-14 14:42:52.277303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.797 [2024-10-14 14:42:52.277313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.797 [2024-10-14 14:42:52.277318] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.797 [2024-10-14 14:42:52.277322] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:11.797 [2024-10-14 14:42:52.277331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.797 qpair failed and we were unable to recover it. 
00:29:11.797 [2024-10-14 14:42:52.287480] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.797 [2024-10-14 14:42:52.287527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.797 [2024-10-14 14:42:52.287536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.797 [2024-10-14 14:42:52.287541] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.797 [2024-10-14 14:42:52.287545] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:11.797 [2024-10-14 14:42:52.287554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.797 qpair failed and we were unable to recover it. 
00:29:11.797 [2024-10-14 14:42:52.297435] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.797 [2024-10-14 14:42:52.297503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.797 [2024-10-14 14:42:52.297512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.797 [2024-10-14 14:42:52.297517] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.797 [2024-10-14 14:42:52.297521] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:11.797 [2024-10-14 14:42:52.297531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.797 qpair failed and we were unable to recover it. 
00:29:11.797 [2024-10-14 14:42:52.307454] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.798 [2024-10-14 14:42:52.307543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.798 [2024-10-14 14:42:52.307553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.798 [2024-10-14 14:42:52.307558] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.798 [2024-10-14 14:42:52.307562] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:11.798 [2024-10-14 14:42:52.307571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.798 qpair failed and we were unable to recover it. 
00:29:11.798 [2024-10-14 14:42:52.317476] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.798 [2024-10-14 14:42:52.317522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.798 [2024-10-14 14:42:52.317531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.798 [2024-10-14 14:42:52.317536] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.798 [2024-10-14 14:42:52.317540] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:11.798 [2024-10-14 14:42:52.317550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.798 qpair failed and we were unable to recover it. 
00:29:11.798 [2024-10-14 14:42:52.327572] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.798 [2024-10-14 14:42:52.327621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.798 [2024-10-14 14:42:52.327630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.798 [2024-10-14 14:42:52.327635] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.798 [2024-10-14 14:42:52.327640] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:11.798 [2024-10-14 14:42:52.327649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.798 qpair failed and we were unable to recover it. 
00:29:11.798 [2024-10-14 14:42:52.337569] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.798 [2024-10-14 14:42:52.337619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.798 [2024-10-14 14:42:52.337629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.798 [2024-10-14 14:42:52.337634] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.798 [2024-10-14 14:42:52.337638] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:11.798 [2024-10-14 14:42:52.337648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.798 qpair failed and we were unable to recover it. 
00:29:11.798 [2024-10-14 14:42:52.347578] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.798 [2024-10-14 14:42:52.347620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.798 [2024-10-14 14:42:52.347632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.798 [2024-10-14 14:42:52.347637] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.798 [2024-10-14 14:42:52.347641] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:11.798 [2024-10-14 14:42:52.347651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.798 qpair failed and we were unable to recover it. 
00:29:11.798 [2024-10-14 14:42:52.357620] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.798 [2024-10-14 14:42:52.357660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.798 [2024-10-14 14:42:52.357670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.798 [2024-10-14 14:42:52.357674] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.798 [2024-10-14 14:42:52.357679] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:11.798 [2024-10-14 14:42:52.357688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.798 qpair failed and we were unable to recover it. 
00:29:11.798 [2024-10-14 14:42:52.367688] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.798 [2024-10-14 14:42:52.367734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.798 [2024-10-14 14:42:52.367744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.798 [2024-10-14 14:42:52.367749] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.798 [2024-10-14 14:42:52.367753] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:11.798 [2024-10-14 14:42:52.367763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.798 qpair failed and we were unable to recover it. 
00:29:11.798 [2024-10-14 14:42:52.377687] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.798 [2024-10-14 14:42:52.377741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.798 [2024-10-14 14:42:52.377751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.798 [2024-10-14 14:42:52.377756] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.798 [2024-10-14 14:42:52.377760] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:11.798 [2024-10-14 14:42:52.377769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.798 qpair failed and we were unable to recover it. 
00:29:11.798 [2024-10-14 14:42:52.387665] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.798 [2024-10-14 14:42:52.387707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.798 [2024-10-14 14:42:52.387717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.798 [2024-10-14 14:42:52.387722] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.798 [2024-10-14 14:42:52.387726] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:11.798 [2024-10-14 14:42:52.387738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.798 qpair failed and we were unable to recover it. 
00:29:11.798 [2024-10-14 14:42:52.397740] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.798 [2024-10-14 14:42:52.397782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.798 [2024-10-14 14:42:52.397792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.798 [2024-10-14 14:42:52.397797] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.798 [2024-10-14 14:42:52.397801] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:11.798 [2024-10-14 14:42:52.397811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.798 qpair failed and we were unable to recover it. 
00:29:11.798 [2024-10-14 14:42:52.407775] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.798 [2024-10-14 14:42:52.407842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.798 [2024-10-14 14:42:52.407851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.798 [2024-10-14 14:42:52.407856] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.798 [2024-10-14 14:42:52.407860] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:11.798 [2024-10-14 14:42:52.407870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.798 qpair failed and we were unable to recover it. 
00:29:11.798 [2024-10-14 14:42:52.417807] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.798 [2024-10-14 14:42:52.417851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.798 [2024-10-14 14:42:52.417860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.798 [2024-10-14 14:42:52.417865] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.798 [2024-10-14 14:42:52.417870] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:11.798 [2024-10-14 14:42:52.417879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.798 qpair failed and we were unable to recover it. 
00:29:11.798 [2024-10-14 14:42:52.427828] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.798 [2024-10-14 14:42:52.427875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.798 [2024-10-14 14:42:52.427885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.798 [2024-10-14 14:42:52.427889] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.798 [2024-10-14 14:42:52.427894] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:11.798 [2024-10-14 14:42:52.427903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.798 qpair failed and we were unable to recover it. 
00:29:11.798 [2024-10-14 14:42:52.437851] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.798 [2024-10-14 14:42:52.437898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.798 [2024-10-14 14:42:52.437920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.798 [2024-10-14 14:42:52.437927] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.798 [2024-10-14 14:42:52.437931] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:11.798 [2024-10-14 14:42:52.437945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.798 qpair failed and we were unable to recover it. 
00:29:11.798 [2024-10-14 14:42:52.447909] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.798 [2024-10-14 14:42:52.447959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.798 [2024-10-14 14:42:52.447970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.799 [2024-10-14 14:42:52.447975] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.799 [2024-10-14 14:42:52.447979] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:11.799 [2024-10-14 14:42:52.447990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.799 qpair failed and we were unable to recover it. 
00:29:11.799 [2024-10-14 14:42:52.457923] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.799 [2024-10-14 14:42:52.457973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.799 [2024-10-14 14:42:52.457984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.799 [2024-10-14 14:42:52.457989] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.799 [2024-10-14 14:42:52.457993] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:11.799 [2024-10-14 14:42:52.458004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.799 qpair failed and we were unable to recover it. 
00:29:11.799 [2024-10-14 14:42:52.467844] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.799 [2024-10-14 14:42:52.467889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.799 [2024-10-14 14:42:52.467898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.799 [2024-10-14 14:42:52.467903] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.799 [2024-10-14 14:42:52.467907] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:11.799 [2024-10-14 14:42:52.467917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.799 qpair failed and we were unable to recover it. 
00:29:11.799 [2024-10-14 14:42:52.477924] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.799 [2024-10-14 14:42:52.477966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.799 [2024-10-14 14:42:52.477976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.799 [2024-10-14 14:42:52.477981] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.799 [2024-10-14 14:42:52.477985] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:11.799 [2024-10-14 14:42:52.477998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.799 qpair failed and we were unable to recover it. 
00:29:11.799 [2024-10-14 14:42:52.487997] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.799 [2024-10-14 14:42:52.488047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.799 [2024-10-14 14:42:52.488057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.799 [2024-10-14 14:42:52.488064] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.799 [2024-10-14 14:42:52.488069] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:11.799 [2024-10-14 14:42:52.488079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.799 qpair failed and we were unable to recover it. 
00:29:11.799 [2024-10-14 14:42:52.497990] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.799 [2024-10-14 14:42:52.498036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.799 [2024-10-14 14:42:52.498045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.799 [2024-10-14 14:42:52.498050] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.799 [2024-10-14 14:42:52.498055] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:11.799 [2024-10-14 14:42:52.498067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.799 qpair failed and we were unable to recover it. 
00:29:11.799 [2024-10-14 14:42:52.508056] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.799 [2024-10-14 14:42:52.508107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.799 [2024-10-14 14:42:52.508117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.799 [2024-10-14 14:42:52.508122] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.799 [2024-10-14 14:42:52.508126] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:11.799 [2024-10-14 14:42:52.508136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.799 qpair failed and we were unable to recover it. 
00:29:11.799 [2024-10-14 14:42:52.518067] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:11.799 [2024-10-14 14:42:52.518106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:11.799 [2024-10-14 14:42:52.518116] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:11.799 [2024-10-14 14:42:52.518120] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:11.799 [2024-10-14 14:42:52.518125] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:11.799 [2024-10-14 14:42:52.518135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:11.799 qpair failed and we were unable to recover it. 
00:29:12.062 [2024-10-14 14:42:52.528133] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.062 [2024-10-14 14:42:52.528184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.062 [2024-10-14 14:42:52.528197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.062 [2024-10-14 14:42:52.528202] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.062 [2024-10-14 14:42:52.528207] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:12.062 [2024-10-14 14:42:52.528217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.062 qpair failed and we were unable to recover it. 
00:29:12.062 [2024-10-14 14:42:52.538130] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.062 [2024-10-14 14:42:52.538175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.062 [2024-10-14 14:42:52.538185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.062 [2024-10-14 14:42:52.538191] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.062 [2024-10-14 14:42:52.538197] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:12.062 [2024-10-14 14:42:52.538208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.062 qpair failed and we were unable to recover it. 
00:29:12.062 [2024-10-14 14:42:52.548159] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.062 [2024-10-14 14:42:52.548203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.062 [2024-10-14 14:42:52.548212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.062 [2024-10-14 14:42:52.548217] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.062 [2024-10-14 14:42:52.548222] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:12.062 [2024-10-14 14:42:52.548231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.062 qpair failed and we were unable to recover it. 
00:29:12.062 [2024-10-14 14:42:52.558167] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.062 [2024-10-14 14:42:52.558212] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.062 [2024-10-14 14:42:52.558222] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.062 [2024-10-14 14:42:52.558227] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.062 [2024-10-14 14:42:52.558231] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:12.062 [2024-10-14 14:42:52.558241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.062 qpair failed and we were unable to recover it. 
00:29:12.062 [2024-10-14 14:42:52.568249] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.062 [2024-10-14 14:42:52.568300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.062 [2024-10-14 14:42:52.568310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.062 [2024-10-14 14:42:52.568315] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.062 [2024-10-14 14:42:52.568325] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:12.062 [2024-10-14 14:42:52.568335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.062 qpair failed and we were unable to recover it. 
00:29:12.062 [2024-10-14 14:42:52.578238] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.062 [2024-10-14 14:42:52.578310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.062 [2024-10-14 14:42:52.578321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.062 [2024-10-14 14:42:52.578325] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.062 [2024-10-14 14:42:52.578330] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:12.062 [2024-10-14 14:42:52.578340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.062 qpair failed and we were unable to recover it. 
00:29:12.062 [2024-10-14 14:42:52.588253] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.062 [2024-10-14 14:42:52.588295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.062 [2024-10-14 14:42:52.588305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.062 [2024-10-14 14:42:52.588309] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.062 [2024-10-14 14:42:52.588314] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:12.062 [2024-10-14 14:42:52.588324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.062 qpair failed and we were unable to recover it. 
00:29:12.062 [2024-10-14 14:42:52.598141] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.062 [2024-10-14 14:42:52.598186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.062 [2024-10-14 14:42:52.598197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.062 [2024-10-14 14:42:52.598202] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.062 [2024-10-14 14:42:52.598206] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:12.062 [2024-10-14 14:42:52.598216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.062 qpair failed and we were unable to recover it. 
00:29:12.062 [2024-10-14 14:42:52.608361] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.062 [2024-10-14 14:42:52.608408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.062 [2024-10-14 14:42:52.608418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.062 [2024-10-14 14:42:52.608423] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.062 [2024-10-14 14:42:52.608427] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:12.062 [2024-10-14 14:42:52.608437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.062 qpair failed and we were unable to recover it. 
00:29:12.062 [2024-10-14 14:42:52.618213] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.062 [2024-10-14 14:42:52.618263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.062 [2024-10-14 14:42:52.618274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.062 [2024-10-14 14:42:52.618281] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.062 [2024-10-14 14:42:52.618288] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:12.062 [2024-10-14 14:42:52.618298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.062 qpair failed and we were unable to recover it. 
00:29:12.062 [2024-10-14 14:42:52.628239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.063 [2024-10-14 14:42:52.628285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.063 [2024-10-14 14:42:52.628295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.063 [2024-10-14 14:42:52.628300] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.063 [2024-10-14 14:42:52.628304] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:12.063 [2024-10-14 14:42:52.628314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.063 qpair failed and we were unable to recover it. 
00:29:12.063 [2024-10-14 14:42:52.638417] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.063 [2024-10-14 14:42:52.638508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.063 [2024-10-14 14:42:52.638518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.063 [2024-10-14 14:42:52.638523] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.063 [2024-10-14 14:42:52.638527] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:12.063 [2024-10-14 14:42:52.638537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.063 qpair failed and we were unable to recover it. 
00:29:12.063 [2024-10-14 14:42:52.648468] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.063 [2024-10-14 14:42:52.648515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.063 [2024-10-14 14:42:52.648525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.063 [2024-10-14 14:42:52.648530] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.063 [2024-10-14 14:42:52.648534] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:12.063 [2024-10-14 14:42:52.648544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.063 qpair failed and we were unable to recover it. 
00:29:12.063 [2024-10-14 14:42:52.658454] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.063 [2024-10-14 14:42:52.658524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.063 [2024-10-14 14:42:52.658534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.063 [2024-10-14 14:42:52.658539] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.063 [2024-10-14 14:42:52.658546] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:12.063 [2024-10-14 14:42:52.658555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.063 qpair failed and we were unable to recover it. 
00:29:12.063 [2024-10-14 14:42:52.668475] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.063 [2024-10-14 14:42:52.668515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.063 [2024-10-14 14:42:52.668525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.063 [2024-10-14 14:42:52.668530] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.063 [2024-10-14 14:42:52.668534] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:12.063 [2024-10-14 14:42:52.668544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.063 qpair failed and we were unable to recover it. 
00:29:12.063 [2024-10-14 14:42:52.678502] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.063 [2024-10-14 14:42:52.678544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.063 [2024-10-14 14:42:52.678554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.063 [2024-10-14 14:42:52.678559] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.063 [2024-10-14 14:42:52.678563] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:12.063 [2024-10-14 14:42:52.678573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.063 qpair failed and we were unable to recover it. 
00:29:12.063 [2024-10-14 14:42:52.688554] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.063 [2024-10-14 14:42:52.688605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.063 [2024-10-14 14:42:52.688615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.063 [2024-10-14 14:42:52.688619] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.063 [2024-10-14 14:42:52.688624] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:12.063 [2024-10-14 14:42:52.688634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.063 qpair failed and we were unable to recover it. 
00:29:12.063 [2024-10-14 14:42:52.698444] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.063 [2024-10-14 14:42:52.698491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.063 [2024-10-14 14:42:52.698501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.063 [2024-10-14 14:42:52.698506] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.063 [2024-10-14 14:42:52.698510] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:12.063 [2024-10-14 14:42:52.698519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.063 qpair failed and we were unable to recover it. 
00:29:12.063 [2024-10-14 14:42:52.708635] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.063 [2024-10-14 14:42:52.708680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.063 [2024-10-14 14:42:52.708690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.063 [2024-10-14 14:42:52.708694] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.063 [2024-10-14 14:42:52.708699] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:12.063 [2024-10-14 14:42:52.708708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.063 qpair failed and we were unable to recover it. 
00:29:12.063 [2024-10-14 14:42:52.718568] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.063 [2024-10-14 14:42:52.718613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.063 [2024-10-14 14:42:52.718622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.063 [2024-10-14 14:42:52.718627] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.063 [2024-10-14 14:42:52.718632] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:12.063 [2024-10-14 14:42:52.718641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.063 qpair failed and we were unable to recover it. 
00:29:12.063 [2024-10-14 14:42:52.728681] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.063 [2024-10-14 14:42:52.728729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.063 [2024-10-14 14:42:52.728738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.063 [2024-10-14 14:42:52.728743] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.063 [2024-10-14 14:42:52.728747] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:12.063 [2024-10-14 14:42:52.728757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.063 qpair failed and we were unable to recover it. 
00:29:12.063 [2024-10-14 14:42:52.738673] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.063 [2024-10-14 14:42:52.738715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.063 [2024-10-14 14:42:52.738725] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.063 [2024-10-14 14:42:52.738729] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.063 [2024-10-14 14:42:52.738734] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:12.063 [2024-10-14 14:42:52.738743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:12.063 qpair failed and we were unable to recover it.
00:29:12.063 [2024-10-14 14:42:52.748675] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.063 [2024-10-14 14:42:52.748719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.063 [2024-10-14 14:42:52.748729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.063 [2024-10-14 14:42:52.748736] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.063 [2024-10-14 14:42:52.748740] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:12.063 [2024-10-14 14:42:52.748750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:12.063 qpair failed and we were unable to recover it.
00:29:12.063 [2024-10-14 14:42:52.758567] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.063 [2024-10-14 14:42:52.758612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.063 [2024-10-14 14:42:52.758622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.063 [2024-10-14 14:42:52.758627] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.063 [2024-10-14 14:42:52.758631] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:12.063 [2024-10-14 14:42:52.758641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:12.063 qpair failed and we were unable to recover it.
00:29:12.063 [2024-10-14 14:42:52.768788] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.063 [2024-10-14 14:42:52.768836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.064 [2024-10-14 14:42:52.768846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.064 [2024-10-14 14:42:52.768850] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.064 [2024-10-14 14:42:52.768855] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:12.064 [2024-10-14 14:42:52.768864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:12.064 qpair failed and we were unable to recover it.
00:29:12.064 [2024-10-14 14:42:52.778643] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.064 [2024-10-14 14:42:52.778730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.064 [2024-10-14 14:42:52.778740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.064 [2024-10-14 14:42:52.778745] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.064 [2024-10-14 14:42:52.778749] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:12.064 [2024-10-14 14:42:52.778759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:12.064 qpair failed and we were unable to recover it.
00:29:12.064 [2024-10-14 14:42:52.788824] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.064 [2024-10-14 14:42:52.788865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.064 [2024-10-14 14:42:52.788875] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.064 [2024-10-14 14:42:52.788880] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.064 [2024-10-14 14:42:52.788884] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:12.064 [2024-10-14 14:42:52.788893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:12.064 qpair failed and we were unable to recover it.
00:29:12.327 [2024-10-14 14:42:52.798805] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.327 [2024-10-14 14:42:52.798894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.327 [2024-10-14 14:42:52.798913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.327 [2024-10-14 14:42:52.798919] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.327 [2024-10-14 14:42:52.798924] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:12.327 [2024-10-14 14:42:52.798938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:12.327 qpair failed and we were unable to recover it.
00:29:12.327 [2024-10-14 14:42:52.808935] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.327 [2024-10-14 14:42:52.808988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.327 [2024-10-14 14:42:52.809007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.327 [2024-10-14 14:42:52.809012] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.327 [2024-10-14 14:42:52.809017] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:12.327 [2024-10-14 14:42:52.809031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:12.327 qpair failed and we were unable to recover it.
00:29:12.327 [2024-10-14 14:42:52.818859] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.327 [2024-10-14 14:42:52.818905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.327 [2024-10-14 14:42:52.818916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.327 [2024-10-14 14:42:52.818921] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.327 [2024-10-14 14:42:52.818926] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:12.327 [2024-10-14 14:42:52.818936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:12.327 qpair failed and we were unable to recover it.
00:29:12.327 [2024-10-14 14:42:52.828902] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.327 [2024-10-14 14:42:52.828945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.327 [2024-10-14 14:42:52.828954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.327 [2024-10-14 14:42:52.828959] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.327 [2024-10-14 14:42:52.828963] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:12.327 [2024-10-14 14:42:52.828973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:12.327 qpair failed and we were unable to recover it.
00:29:12.327 [2024-10-14 14:42:52.838899] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.327 [2024-10-14 14:42:52.838942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.327 [2024-10-14 14:42:52.838955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.327 [2024-10-14 14:42:52.838960] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.327 [2024-10-14 14:42:52.838964] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:12.327 [2024-10-14 14:42:52.838975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:12.327 qpair failed and we were unable to recover it.
00:29:12.327 [2024-10-14 14:42:52.848996] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.327 [2024-10-14 14:42:52.849047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.327 [2024-10-14 14:42:52.849057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.327 [2024-10-14 14:42:52.849065] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.327 [2024-10-14 14:42:52.849070] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:12.327 [2024-10-14 14:42:52.849080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:12.327 qpair failed and we were unable to recover it.
00:29:12.327 [2024-10-14 14:42:52.858995] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.327 [2024-10-14 14:42:52.859041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.327 [2024-10-14 14:42:52.859052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.327 [2024-10-14 14:42:52.859057] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.327 [2024-10-14 14:42:52.859061] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:12.327 [2024-10-14 14:42:52.859074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:12.327 qpair failed and we were unable to recover it.
00:29:12.327 [2024-10-14 14:42:52.869003] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.327 [2024-10-14 14:42:52.869051] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.327 [2024-10-14 14:42:52.869061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.327 [2024-10-14 14:42:52.869069] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.327 [2024-10-14 14:42:52.869074] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:12.327 [2024-10-14 14:42:52.869084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:12.327 qpair failed and we were unable to recover it.
00:29:12.327 [2024-10-14 14:42:52.879040] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.327 [2024-10-14 14:42:52.879088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.327 [2024-10-14 14:42:52.879105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.328 [2024-10-14 14:42:52.879110] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.328 [2024-10-14 14:42:52.879114] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:12.328 [2024-10-14 14:42:52.879124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:12.328 qpair failed and we were unable to recover it.
00:29:12.328 [2024-10-14 14:42:52.889106] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.328 [2024-10-14 14:42:52.889156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.328 [2024-10-14 14:42:52.889166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.328 [2024-10-14 14:42:52.889171] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.328 [2024-10-14 14:42:52.889175] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:12.328 [2024-10-14 14:42:52.889185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:12.328 qpair failed and we were unable to recover it.
00:29:12.328 [2024-10-14 14:42:52.898970] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.328 [2024-10-14 14:42:52.899016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.328 [2024-10-14 14:42:52.899025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.328 [2024-10-14 14:42:52.899030] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.328 [2024-10-14 14:42:52.899034] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:12.328 [2024-10-14 14:42:52.899045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:12.328 qpair failed and we were unable to recover it.
00:29:12.328 [2024-10-14 14:42:52.909202] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.328 [2024-10-14 14:42:52.909254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.328 [2024-10-14 14:42:52.909264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.328 [2024-10-14 14:42:52.909268] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.328 [2024-10-14 14:42:52.909273] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:12.328 [2024-10-14 14:42:52.909283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:12.328 qpair failed and we were unable to recover it.
00:29:12.328 [2024-10-14 14:42:52.919141] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.328 [2024-10-14 14:42:52.919186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.328 [2024-10-14 14:42:52.919196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.328 [2024-10-14 14:42:52.919200] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.328 [2024-10-14 14:42:52.919205] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:12.328 [2024-10-14 14:42:52.919215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:12.328 qpair failed and we were unable to recover it.
00:29:12.328 [2024-10-14 14:42:52.929272] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.328 [2024-10-14 14:42:52.929320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.328 [2024-10-14 14:42:52.929332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.328 [2024-10-14 14:42:52.929337] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.328 [2024-10-14 14:42:52.929341] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:12.328 [2024-10-14 14:42:52.929351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:12.328 qpair failed and we were unable to recover it.
00:29:12.328 [2024-10-14 14:42:52.939202] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.328 [2024-10-14 14:42:52.939279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.328 [2024-10-14 14:42:52.939289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.328 [2024-10-14 14:42:52.939294] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.328 [2024-10-14 14:42:52.939298] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:12.328 [2024-10-14 14:42:52.939308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:12.328 qpair failed and we were unable to recover it.
00:29:12.328 [2024-10-14 14:42:52.949226] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.328 [2024-10-14 14:42:52.949274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.328 [2024-10-14 14:42:52.949283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.328 [2024-10-14 14:42:52.949288] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.328 [2024-10-14 14:42:52.949292] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:12.328 [2024-10-14 14:42:52.949302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:12.328 qpair failed and we were unable to recover it.
00:29:12.328 [2024-10-14 14:42:52.959293] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.328 [2024-10-14 14:42:52.959364] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.328 [2024-10-14 14:42:52.959374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.328 [2024-10-14 14:42:52.959379] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.328 [2024-10-14 14:42:52.959384] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:12.328 [2024-10-14 14:42:52.959393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:12.328 qpair failed and we were unable to recover it.
00:29:12.328 [2024-10-14 14:42:52.969339] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.328 [2024-10-14 14:42:52.969391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.328 [2024-10-14 14:42:52.969401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.328 [2024-10-14 14:42:52.969406] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.328 [2024-10-14 14:42:52.969411] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:12.328 [2024-10-14 14:42:52.969423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:12.328 qpair failed and we were unable to recover it.
00:29:12.328 [2024-10-14 14:42:52.979287] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.328 [2024-10-14 14:42:52.979333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.328 [2024-10-14 14:42:52.979343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.328 [2024-10-14 14:42:52.979348] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.328 [2024-10-14 14:42:52.979352] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:12.328 [2024-10-14 14:42:52.979362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:12.328 qpair failed and we were unable to recover it.
00:29:12.328 [2024-10-14 14:42:52.989331] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.328 [2024-10-14 14:42:52.989378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.328 [2024-10-14 14:42:52.989387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.328 [2024-10-14 14:42:52.989392] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.328 [2024-10-14 14:42:52.989396] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:12.328 [2024-10-14 14:42:52.989406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:12.328 qpair failed and we were unable to recover it.
00:29:12.328 [2024-10-14 14:42:52.999360] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.328 [2024-10-14 14:42:52.999441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.328 [2024-10-14 14:42:52.999450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.328 [2024-10-14 14:42:52.999455] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.328 [2024-10-14 14:42:52.999460] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:12.328 [2024-10-14 14:42:52.999469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:12.328 qpair failed and we were unable to recover it.
00:29:12.328 [2024-10-14 14:42:53.009308] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.328 [2024-10-14 14:42:53.009359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.328 [2024-10-14 14:42:53.009368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.328 [2024-10-14 14:42:53.009373] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.328 [2024-10-14 14:42:53.009377] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:12.328 [2024-10-14 14:42:53.009387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:12.328 qpair failed and we were unable to recover it.
00:29:12.328 [2024-10-14 14:42:53.019397] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.328 [2024-10-14 14:42:53.019443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.328 [2024-10-14 14:42:53.019455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.328 [2024-10-14 14:42:53.019460] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.328 [2024-10-14 14:42:53.019464] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:12.329 [2024-10-14 14:42:53.019474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:12.329 qpair failed and we were unable to recover it.
00:29:12.329 [2024-10-14 14:42:53.029419] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.329 [2024-10-14 14:42:53.029462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.329 [2024-10-14 14:42:53.029472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.329 [2024-10-14 14:42:53.029476] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.329 [2024-10-14 14:42:53.029481] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:12.329 [2024-10-14 14:42:53.029490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:12.329 qpair failed and we were unable to recover it.
00:29:12.329 [2024-10-14 14:42:53.039381] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.329 [2024-10-14 14:42:53.039426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.329 [2024-10-14 14:42:53.039435] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.329 [2024-10-14 14:42:53.039442] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.329 [2024-10-14 14:42:53.039446] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:12.329 [2024-10-14 14:42:53.039456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:12.329 qpair failed and we were unable to recover it.
00:29:12.329 [2024-10-14 14:42:53.049537] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.329 [2024-10-14 14:42:53.049585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.329 [2024-10-14 14:42:53.049594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.329 [2024-10-14 14:42:53.049599] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.329 [2024-10-14 14:42:53.049603] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:12.329 [2024-10-14 14:42:53.049613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:12.329 qpair failed and we were unable to recover it.
00:29:12.592 [2024-10-14 14:42:53.059506] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.592 [2024-10-14 14:42:53.059559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.592 [2024-10-14 14:42:53.059569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.592 [2024-10-14 14:42:53.059574] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.592 [2024-10-14 14:42:53.059582] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:12.592 [2024-10-14 14:42:53.059593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:12.592 qpair failed and we were unable to recover it.
00:29:12.592 [2024-10-14 14:42:53.069505] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.592 [2024-10-14 14:42:53.069548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.592 [2024-10-14 14:42:53.069557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.592 [2024-10-14 14:42:53.069562] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.592 [2024-10-14 14:42:53.069567] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:12.592 [2024-10-14 14:42:53.069576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:12.592 qpair failed and we were unable to recover it.
00:29:12.592 [2024-10-14 14:42:53.079435] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.592 [2024-10-14 14:42:53.079477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.592 [2024-10-14 14:42:53.079487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.592 [2024-10-14 14:42:53.079492] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.592 [2024-10-14 14:42:53.079496] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:12.592 [2024-10-14 14:42:53.079506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:12.592 qpair failed and we were unable to recover it.
00:29:12.592 [2024-10-14 14:42:53.089652] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:12.592 [2024-10-14 14:42:53.089718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:12.592 [2024-10-14 14:42:53.089728] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:12.592 [2024-10-14 14:42:53.089732] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:12.592 [2024-10-14 14:42:53.089737] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90
00:29:12.592 [2024-10-14 14:42:53.089746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:12.592 qpair failed and we were unable to recover it.
00:29:12.592 [2024-10-14 14:42:53.099656] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.592 [2024-10-14 14:42:53.099732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.592 [2024-10-14 14:42:53.099742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.592 [2024-10-14 14:42:53.099746] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.592 [2024-10-14 14:42:53.099751] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:12.592 [2024-10-14 14:42:53.099760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.592 qpair failed and we were unable to recover it. 
00:29:12.592 [2024-10-14 14:42:53.109640] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.592 [2024-10-14 14:42:53.109689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.592 [2024-10-14 14:42:53.109699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.592 [2024-10-14 14:42:53.109704] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.592 [2024-10-14 14:42:53.109708] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:12.592 [2024-10-14 14:42:53.109718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.592 qpair failed and we were unable to recover it. 
00:29:12.592 [2024-10-14 14:42:53.119670] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.592 [2024-10-14 14:42:53.119755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.592 [2024-10-14 14:42:53.119765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.592 [2024-10-14 14:42:53.119771] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.592 [2024-10-14 14:42:53.119776] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:12.592 [2024-10-14 14:42:53.119786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.592 qpair failed and we were unable to recover it. 
00:29:12.592 [2024-10-14 14:42:53.129646] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.592 [2024-10-14 14:42:53.129699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.592 [2024-10-14 14:42:53.129708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.592 [2024-10-14 14:42:53.129713] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.592 [2024-10-14 14:42:53.129717] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:12.592 [2024-10-14 14:42:53.129727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.592 qpair failed and we were unable to recover it. 
00:29:12.592 [2024-10-14 14:42:53.139739] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.592 [2024-10-14 14:42:53.139787] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.592 [2024-10-14 14:42:53.139797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.592 [2024-10-14 14:42:53.139802] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.592 [2024-10-14 14:42:53.139806] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:12.592 [2024-10-14 14:42:53.139815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.592 qpair failed and we were unable to recover it. 
00:29:12.592 [2024-10-14 14:42:53.149753] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.592 [2024-10-14 14:42:53.149800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.592 [2024-10-14 14:42:53.149810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.592 [2024-10-14 14:42:53.149815] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.592 [2024-10-14 14:42:53.149822] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:12.592 [2024-10-14 14:42:53.149831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.592 qpair failed and we were unable to recover it. 
00:29:12.592 [2024-10-14 14:42:53.159785] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.592 [2024-10-14 14:42:53.159826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.592 [2024-10-14 14:42:53.159836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.592 [2024-10-14 14:42:53.159841] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.592 [2024-10-14 14:42:53.159846] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:12.592 [2024-10-14 14:42:53.159855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.592 qpair failed and we were unable to recover it. 
00:29:12.592 [2024-10-14 14:42:53.169855] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.592 [2024-10-14 14:42:53.169908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.592 [2024-10-14 14:42:53.169927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.592 [2024-10-14 14:42:53.169932] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.592 [2024-10-14 14:42:53.169938] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:12.592 [2024-10-14 14:42:53.169952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.592 qpair failed and we were unable to recover it. 
00:29:12.593 [2024-10-14 14:42:53.179847] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.593 [2024-10-14 14:42:53.179897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.593 [2024-10-14 14:42:53.179916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.593 [2024-10-14 14:42:53.179922] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.593 [2024-10-14 14:42:53.179927] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:12.593 [2024-10-14 14:42:53.179940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.593 qpair failed and we were unable to recover it. 
00:29:12.593 [2024-10-14 14:42:53.189819] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.593 [2024-10-14 14:42:53.189865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.593 [2024-10-14 14:42:53.189876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.593 [2024-10-14 14:42:53.189881] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.593 [2024-10-14 14:42:53.189885] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:12.593 [2024-10-14 14:42:53.189896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.593 qpair failed and we were unable to recover it. 
00:29:12.593 [2024-10-14 14:42:53.199892] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.593 [2024-10-14 14:42:53.199949] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.593 [2024-10-14 14:42:53.199960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.593 [2024-10-14 14:42:53.199965] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.593 [2024-10-14 14:42:53.199969] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:12.593 [2024-10-14 14:42:53.199979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.593 qpair failed and we were unable to recover it. 
00:29:12.593 [2024-10-14 14:42:53.209974] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.593 [2024-10-14 14:42:53.210059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.593 [2024-10-14 14:42:53.210072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.593 [2024-10-14 14:42:53.210077] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.593 [2024-10-14 14:42:53.210081] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:12.593 [2024-10-14 14:42:53.210092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.593 qpair failed and we were unable to recover it. 
00:29:12.593 [2024-10-14 14:42:53.219957] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.593 [2024-10-14 14:42:53.220000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.593 [2024-10-14 14:42:53.220010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.593 [2024-10-14 14:42:53.220015] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.593 [2024-10-14 14:42:53.220019] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:12.593 [2024-10-14 14:42:53.220029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.593 qpair failed and we were unable to recover it. 
00:29:12.593 [2024-10-14 14:42:53.229986] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.593 [2024-10-14 14:42:53.230030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.593 [2024-10-14 14:42:53.230040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.593 [2024-10-14 14:42:53.230045] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.593 [2024-10-14 14:42:53.230049] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:12.593 [2024-10-14 14:42:53.230059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.593 qpair failed and we were unable to recover it. 
00:29:12.593 [2024-10-14 14:42:53.239997] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.593 [2024-10-14 14:42:53.240039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.593 [2024-10-14 14:42:53.240049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.593 [2024-10-14 14:42:53.240057] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.593 [2024-10-14 14:42:53.240061] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:12.593 [2024-10-14 14:42:53.240075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.593 qpair failed and we were unable to recover it. 
00:29:12.593 [2024-10-14 14:42:53.250082] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.593 [2024-10-14 14:42:53.250130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.593 [2024-10-14 14:42:53.250139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.593 [2024-10-14 14:42:53.250144] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.593 [2024-10-14 14:42:53.250148] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:12.593 [2024-10-14 14:42:53.250158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.593 qpair failed and we were unable to recover it. 
00:29:12.593 [2024-10-14 14:42:53.260073] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.593 [2024-10-14 14:42:53.260117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.593 [2024-10-14 14:42:53.260127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.593 [2024-10-14 14:42:53.260132] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.593 [2024-10-14 14:42:53.260137] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:12.593 [2024-10-14 14:42:53.260147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.593 qpair failed and we were unable to recover it. 
00:29:12.593 [2024-10-14 14:42:53.270081] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.593 [2024-10-14 14:42:53.270125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.593 [2024-10-14 14:42:53.270135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.593 [2024-10-14 14:42:53.270140] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.593 [2024-10-14 14:42:53.270144] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:12.593 [2024-10-14 14:42:53.270154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.593 qpair failed and we were unable to recover it. 
00:29:12.593 [2024-10-14 14:42:53.280114] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.593 [2024-10-14 14:42:53.280158] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.593 [2024-10-14 14:42:53.280167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.593 [2024-10-14 14:42:53.280172] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.593 [2024-10-14 14:42:53.280176] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:12.593 [2024-10-14 14:42:53.280186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.593 qpair failed and we were unable to recover it. 
00:29:12.593 [2024-10-14 14:42:53.290040] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.593 [2024-10-14 14:42:53.290099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.593 [2024-10-14 14:42:53.290109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.593 [2024-10-14 14:42:53.290114] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.593 [2024-10-14 14:42:53.290119] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:12.593 [2024-10-14 14:42:53.290128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.593 qpair failed and we were unable to recover it. 
00:29:12.593 [2024-10-14 14:42:53.300154] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.593 [2024-10-14 14:42:53.300235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.593 [2024-10-14 14:42:53.300245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.593 [2024-10-14 14:42:53.300250] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.593 [2024-10-14 14:42:53.300254] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:12.593 [2024-10-14 14:42:53.300264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.593 qpair failed and we were unable to recover it. 
00:29:12.593 [2024-10-14 14:42:53.310173] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.593 [2024-10-14 14:42:53.310220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.593 [2024-10-14 14:42:53.310229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.593 [2024-10-14 14:42:53.310235] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.593 [2024-10-14 14:42:53.310240] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:12.593 [2024-10-14 14:42:53.310250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.593 qpair failed and we were unable to recover it. 
00:29:12.593 [2024-10-14 14:42:53.320080] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.594 [2024-10-14 14:42:53.320123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.594 [2024-10-14 14:42:53.320133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.594 [2024-10-14 14:42:53.320138] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.594 [2024-10-14 14:42:53.320142] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:12.594 [2024-10-14 14:42:53.320152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.594 qpair failed and we were unable to recover it. 
00:29:12.855 [2024-10-14 14:42:53.330287] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.855 [2024-10-14 14:42:53.330339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.855 [2024-10-14 14:42:53.330349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.855 [2024-10-14 14:42:53.330360] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.855 [2024-10-14 14:42:53.330364] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:12.855 [2024-10-14 14:42:53.330375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.855 qpair failed and we were unable to recover it. 
00:29:12.855 [2024-10-14 14:42:53.340271] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.855 [2024-10-14 14:42:53.340318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.855 [2024-10-14 14:42:53.340329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.855 [2024-10-14 14:42:53.340335] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.855 [2024-10-14 14:42:53.340339] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:12.855 [2024-10-14 14:42:53.340349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.855 qpair failed and we were unable to recover it. 
00:29:12.855 [2024-10-14 14:42:53.350302] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.856 [2024-10-14 14:42:53.350340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.856 [2024-10-14 14:42:53.350350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.856 [2024-10-14 14:42:53.350355] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.856 [2024-10-14 14:42:53.350360] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:12.856 [2024-10-14 14:42:53.350369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.856 qpair failed and we were unable to recover it. 
00:29:12.856 [2024-10-14 14:42:53.360195] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.856 [2024-10-14 14:42:53.360238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.856 [2024-10-14 14:42:53.360249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.856 [2024-10-14 14:42:53.360254] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.856 [2024-10-14 14:42:53.360258] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:12.856 [2024-10-14 14:42:53.360268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.856 qpair failed and we were unable to recover it. 
00:29:12.856 [2024-10-14 14:42:53.370409] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.856 [2024-10-14 14:42:53.370486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.856 [2024-10-14 14:42:53.370495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.856 [2024-10-14 14:42:53.370500] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.856 [2024-10-14 14:42:53.370505] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:12.856 [2024-10-14 14:42:53.370515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.856 qpair failed and we were unable to recover it. 
00:29:12.856 [2024-10-14 14:42:53.380302] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.856 [2024-10-14 14:42:53.380360] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.856 [2024-10-14 14:42:53.380370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.856 [2024-10-14 14:42:53.380375] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.856 [2024-10-14 14:42:53.380379] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:12.856 [2024-10-14 14:42:53.380388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.856 qpair failed and we were unable to recover it. 
00:29:12.856 [2024-10-14 14:42:53.390407] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.856 [2024-10-14 14:42:53.390448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.856 [2024-10-14 14:42:53.390458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.856 [2024-10-14 14:42:53.390462] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.856 [2024-10-14 14:42:53.390467] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:12.856 [2024-10-14 14:42:53.390476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.856 qpair failed and we were unable to recover it. 
00:29:12.856 [2024-10-14 14:42:53.400452] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.856 [2024-10-14 14:42:53.400495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.856 [2024-10-14 14:42:53.400505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.856 [2024-10-14 14:42:53.400510] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.856 [2024-10-14 14:42:53.400514] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:12.856 [2024-10-14 14:42:53.400524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.856 qpair failed and we were unable to recover it. 
00:29:12.856 [2024-10-14 14:42:53.410477] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.856 [2024-10-14 14:42:53.410532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.856 [2024-10-14 14:42:53.410542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.856 [2024-10-14 14:42:53.410547] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.856 [2024-10-14 14:42:53.410551] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:12.856 [2024-10-14 14:42:53.410561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.856 qpair failed and we were unable to recover it. 
00:29:12.856 [2024-10-14 14:42:53.420511] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.856 [2024-10-14 14:42:53.420555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.856 [2024-10-14 14:42:53.420567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.856 [2024-10-14 14:42:53.420572] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.856 [2024-10-14 14:42:53.420576] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:12.856 [2024-10-14 14:42:53.420586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.856 qpair failed and we were unable to recover it. 
00:29:12.856 [2024-10-14 14:42:53.430510] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.856 [2024-10-14 14:42:53.430553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.856 [2024-10-14 14:42:53.430563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.856 [2024-10-14 14:42:53.430568] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.856 [2024-10-14 14:42:53.430572] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:12.856 [2024-10-14 14:42:53.430582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.856 qpair failed and we were unable to recover it. 
00:29:12.856 [2024-10-14 14:42:53.440563] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.856 [2024-10-14 14:42:53.440608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.856 [2024-10-14 14:42:53.440618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.856 [2024-10-14 14:42:53.440623] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.856 [2024-10-14 14:42:53.440627] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:12.856 [2024-10-14 14:42:53.440637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.856 qpair failed and we were unable to recover it. 
00:29:12.856 [2024-10-14 14:42:53.450476] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.856 [2024-10-14 14:42:53.450522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.856 [2024-10-14 14:42:53.450532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.856 [2024-10-14 14:42:53.450537] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.856 [2024-10-14 14:42:53.450541] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:12.856 [2024-10-14 14:42:53.450551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.856 qpair failed and we were unable to recover it. 
00:29:12.856 [2024-10-14 14:42:53.460609] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.856 [2024-10-14 14:42:53.460656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.856 [2024-10-14 14:42:53.460666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.856 [2024-10-14 14:42:53.460671] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.856 [2024-10-14 14:42:53.460675] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:12.856 [2024-10-14 14:42:53.460688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.856 qpair failed and we were unable to recover it. 
00:29:12.856 [2024-10-14 14:42:53.470595] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.856 [2024-10-14 14:42:53.470638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.856 [2024-10-14 14:42:53.470648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.856 [2024-10-14 14:42:53.470653] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.856 [2024-10-14 14:42:53.470657] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:12.856 [2024-10-14 14:42:53.470667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.856 qpair failed and we were unable to recover it. 
00:29:12.856 [2024-10-14 14:42:53.480647] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.856 [2024-10-14 14:42:53.480720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.856 [2024-10-14 14:42:53.480730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.856 [2024-10-14 14:42:53.480735] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.856 [2024-10-14 14:42:53.480739] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:12.856 [2024-10-14 14:42:53.480748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.856 qpair failed and we were unable to recover it. 
00:29:12.857 [2024-10-14 14:42:53.490716] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.857 [2024-10-14 14:42:53.490763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.857 [2024-10-14 14:42:53.490773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.857 [2024-10-14 14:42:53.490778] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.857 [2024-10-14 14:42:53.490782] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:12.857 [2024-10-14 14:42:53.490791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.857 qpair failed and we were unable to recover it. 
00:29:12.857 [2024-10-14 14:42:53.500716] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.857 [2024-10-14 14:42:53.500762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.857 [2024-10-14 14:42:53.500771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.857 [2024-10-14 14:42:53.500776] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.857 [2024-10-14 14:42:53.500780] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:12.857 [2024-10-14 14:42:53.500790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.857 qpair failed and we were unable to recover it. 
00:29:12.857 [2024-10-14 14:42:53.510721] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.857 [2024-10-14 14:42:53.510766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.857 [2024-10-14 14:42:53.510778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.857 [2024-10-14 14:42:53.510783] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.857 [2024-10-14 14:42:53.510787] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:12.857 [2024-10-14 14:42:53.510797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.857 qpair failed and we were unable to recover it. 
00:29:12.857 [2024-10-14 14:42:53.520755] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.857 [2024-10-14 14:42:53.520797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.857 [2024-10-14 14:42:53.520807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.857 [2024-10-14 14:42:53.520812] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.857 [2024-10-14 14:42:53.520816] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:12.857 [2024-10-14 14:42:53.520825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.857 qpair failed and we were unable to recover it. 
00:29:12.857 [2024-10-14 14:42:53.530816] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.857 [2024-10-14 14:42:53.530894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.857 [2024-10-14 14:42:53.530903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.857 [2024-10-14 14:42:53.530908] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.857 [2024-10-14 14:42:53.530912] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:12.857 [2024-10-14 14:42:53.530922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.857 qpair failed and we were unable to recover it. 
00:29:12.857 [2024-10-14 14:42:53.540785] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.857 [2024-10-14 14:42:53.540829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.857 [2024-10-14 14:42:53.540839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.857 [2024-10-14 14:42:53.540844] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.857 [2024-10-14 14:42:53.540848] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:12.857 [2024-10-14 14:42:53.540858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.857 qpair failed and we were unable to recover it. 
00:29:12.857 [2024-10-14 14:42:53.550832] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.857 [2024-10-14 14:42:53.550876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.857 [2024-10-14 14:42:53.550886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.857 [2024-10-14 14:42:53.550891] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.857 [2024-10-14 14:42:53.550898] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:12.857 [2024-10-14 14:42:53.550908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.857 qpair failed and we were unable to recover it. 
00:29:12.857 [2024-10-14 14:42:53.560821] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.857 [2024-10-14 14:42:53.560880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.857 [2024-10-14 14:42:53.560890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.857 [2024-10-14 14:42:53.560895] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.857 [2024-10-14 14:42:53.560899] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:12.857 [2024-10-14 14:42:53.560909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.857 qpair failed and we were unable to recover it. 
00:29:12.857 [2024-10-14 14:42:53.570945] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.857 [2024-10-14 14:42:53.571040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.857 [2024-10-14 14:42:53.571049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.857 [2024-10-14 14:42:53.571054] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.857 [2024-10-14 14:42:53.571058] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:12.857 [2024-10-14 14:42:53.571072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.857 qpair failed and we were unable to recover it. 
00:29:12.857 [2024-10-14 14:42:53.580806] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:12.857 [2024-10-14 14:42:53.580852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:12.857 [2024-10-14 14:42:53.580861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:12.857 [2024-10-14 14:42:53.580866] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:12.857 [2024-10-14 14:42:53.580870] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:12.857 [2024-10-14 14:42:53.580880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.857 qpair failed and we were unable to recover it. 
00:29:13.120 [2024-10-14 14:42:53.590914] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.120 [2024-10-14 14:42:53.590958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.120 [2024-10-14 14:42:53.590967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.120 [2024-10-14 14:42:53.590972] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.120 [2024-10-14 14:42:53.590976] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:13.120 [2024-10-14 14:42:53.590986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.120 qpair failed and we were unable to recover it. 
00:29:13.120 [2024-10-14 14:42:53.600930] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.120 [2024-10-14 14:42:53.600973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.120 [2024-10-14 14:42:53.600983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.120 [2024-10-14 14:42:53.600988] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.120 [2024-10-14 14:42:53.600993] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:13.120 [2024-10-14 14:42:53.601002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.120 qpair failed and we were unable to recover it. 
00:29:13.120 [2024-10-14 14:42:53.611030] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.120 [2024-10-14 14:42:53.611085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.120 [2024-10-14 14:42:53.611095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.120 [2024-10-14 14:42:53.611100] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.120 [2024-10-14 14:42:53.611104] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:13.120 [2024-10-14 14:42:53.611114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.120 qpair failed and we were unable to recover it. 
00:29:13.120 [2024-10-14 14:42:53.621034] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.120 [2024-10-14 14:42:53.621077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.120 [2024-10-14 14:42:53.621087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.120 [2024-10-14 14:42:53.621092] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.120 [2024-10-14 14:42:53.621097] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:13.120 [2024-10-14 14:42:53.621108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.120 qpair failed and we were unable to recover it. 
00:29:13.120 [2024-10-14 14:42:53.631041] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.120 [2024-10-14 14:42:53.631086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.120 [2024-10-14 14:42:53.631096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.120 [2024-10-14 14:42:53.631101] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.120 [2024-10-14 14:42:53.631105] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:13.120 [2024-10-14 14:42:53.631115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.120 qpair failed and we were unable to recover it. 
00:29:13.120 [2024-10-14 14:42:53.640937] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.120 [2024-10-14 14:42:53.640979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.120 [2024-10-14 14:42:53.640988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.120 [2024-10-14 14:42:53.640993] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.120 [2024-10-14 14:42:53.641000] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:13.120 [2024-10-14 14:42:53.641009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.120 qpair failed and we were unable to recover it. 
00:29:13.120 [2024-10-14 14:42:53.651145] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.120 [2024-10-14 14:42:53.651193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.120 [2024-10-14 14:42:53.651203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.120 [2024-10-14 14:42:53.651208] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.120 [2024-10-14 14:42:53.651213] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:13.120 [2024-10-14 14:42:53.651222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.120 qpair failed and we were unable to recover it. 
00:29:13.120 [2024-10-14 14:42:53.661142] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.120 [2024-10-14 14:42:53.661191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.120 [2024-10-14 14:42:53.661201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.120 [2024-10-14 14:42:53.661206] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.120 [2024-10-14 14:42:53.661210] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:13.120 [2024-10-14 14:42:53.661220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.120 qpair failed and we were unable to recover it. 
00:29:13.120 [2024-10-14 14:42:53.671027] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.120 [2024-10-14 14:42:53.671078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.120 [2024-10-14 14:42:53.671090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.120 [2024-10-14 14:42:53.671095] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.120 [2024-10-14 14:42:53.671100] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:13.120 [2024-10-14 14:42:53.671116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.120 qpair failed and we were unable to recover it. 
00:29:13.120 [2024-10-14 14:42:53.681158] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.120 [2024-10-14 14:42:53.681202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.120 [2024-10-14 14:42:53.681212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.120 [2024-10-14 14:42:53.681217] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.120 [2024-10-14 14:42:53.681221] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:13.120 [2024-10-14 14:42:53.681231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.120 qpair failed and we were unable to recover it. 
00:29:13.120 [2024-10-14 14:42:53.691263] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.120 [2024-10-14 14:42:53.691315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.120 [2024-10-14 14:42:53.691325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.121 [2024-10-14 14:42:53.691329] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.121 [2024-10-14 14:42:53.691334] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:13.121 [2024-10-14 14:42:53.691344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.121 qpair failed and we were unable to recover it. 
00:29:13.121 [2024-10-14 14:42:53.701254] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.121 [2024-10-14 14:42:53.701305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.121 [2024-10-14 14:42:53.701316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.121 [2024-10-14 14:42:53.701321] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.121 [2024-10-14 14:42:53.701325] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:13.121 [2024-10-14 14:42:53.701336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.121 qpair failed and we were unable to recover it. 
00:29:13.121 [2024-10-14 14:42:53.711144] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.121 [2024-10-14 14:42:53.711213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.121 [2024-10-14 14:42:53.711223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.121 [2024-10-14 14:42:53.711228] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.121 [2024-10-14 14:42:53.711233] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:13.121 [2024-10-14 14:42:53.711243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.121 qpair failed and we were unable to recover it. 
00:29:13.121 [2024-10-14 14:42:53.721297] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.121 [2024-10-14 14:42:53.721341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.121 [2024-10-14 14:42:53.721351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.121 [2024-10-14 14:42:53.721356] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.121 [2024-10-14 14:42:53.721361] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:13.121 [2024-10-14 14:42:53.721370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.121 qpair failed and we were unable to recover it. 
00:29:13.121 [2024-10-14 14:42:53.731358] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.121 [2024-10-14 14:42:53.731436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.121 [2024-10-14 14:42:53.731445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.121 [2024-10-14 14:42:53.731453] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.121 [2024-10-14 14:42:53.731457] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:13.121 [2024-10-14 14:42:53.731466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.121 qpair failed and we were unable to recover it. 
00:29:13.121 [2024-10-14 14:42:53.741379] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.121 [2024-10-14 14:42:53.741424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.121 [2024-10-14 14:42:53.741434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.121 [2024-10-14 14:42:53.741439] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.121 [2024-10-14 14:42:53.741444] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe63c000b90 00:29:13.121 [2024-10-14 14:42:53.741453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.121 qpair failed and we were unable to recover it. 
00:29:13.121 [2024-10-14 14:42:53.751377] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.121 [2024-10-14 14:42:53.751473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.121 [2024-10-14 14:42:53.751538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.121 [2024-10-14 14:42:53.751563] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.121 [2024-10-14 14:42:53.751584] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe638000b90 00:29:13.121 [2024-10-14 14:42:53.751639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.121 qpair failed and we were unable to recover it. 
00:29:13.121 [2024-10-14 14:42:53.761405] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.121 [2024-10-14 14:42:53.761471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.121 [2024-10-14 14:42:53.761502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.121 [2024-10-14 14:42:53.761517] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.121 [2024-10-14 14:42:53.761531] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe638000b90 00:29:13.121 [2024-10-14 14:42:53.761563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.121 qpair failed and we were unable to recover it. 00:29:13.121 [2024-10-14 14:42:53.761741] nvme_ctrlr.c:4505:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:29:13.121 A controller has encountered a failure and is being reset. 00:29:13.121 Controller properly reset. 00:29:13.121 Initializing NVMe Controllers 00:29:13.121 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:13.121 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:13.121 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:29:13.121 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:29:13.121 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:29:13.121 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:29:13.121 Initialization complete. Launching workers. 
00:29:13.121 Starting thread on core 1 00:29:13.121 Starting thread on core 2 00:29:13.121 Starting thread on core 3 00:29:13.121 Starting thread on core 0 00:29:13.121 14:42:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:29:13.121 00:29:13.121 real 0m11.365s 00:29:13.121 user 0m21.761s 00:29:13.121 sys 0m3.708s 00:29:13.121 14:42:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:13.121 14:42:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:13.121 ************************************ 00:29:13.121 END TEST nvmf_target_disconnect_tc2 00:29:13.121 ************************************ 00:29:13.121 14:42:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:29:13.121 14:42:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:29:13.395 14:42:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:29:13.395 14:42:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:13.395 14:42:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:29:13.395 14:42:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:13.395 14:42:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:29:13.395 14:42:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:13.395 14:42:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:13.395 rmmod nvme_tcp 00:29:13.395 rmmod nvme_fabrics 00:29:13.395 rmmod nvme_keyring 00:29:13.395 14:42:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:29:13.395 14:42:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:29:13.395 14:42:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:29:13.395 14:42:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@515 -- # '[' -n 3578479 ']' 00:29:13.396 14:42:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # killprocess 3578479 00:29:13.396 14:42:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 3578479 ']' 00:29:13.396 14:42:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 3578479 00:29:13.396 14:42:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:29:13.396 14:42:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:13.396 14:42:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3578479 00:29:13.396 14:42:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:29:13.396 14:42:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 00:29:13.396 14:42:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3578479' 00:29:13.396 killing process with pid 3578479 00:29:13.396 14:42:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@969 -- # kill 3578479 00:29:13.396 14:42:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 3578479 00:29:13.396 14:42:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:13.396 14:42:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:13.396 14:42:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:13.396 14:42:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:29:13.396 14:42:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:13.396 14:42:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # iptables-save 00:29:13.396 14:42:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # iptables-restore 00:29:13.396 14:42:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:13.396 14:42:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:13.396 14:42:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:13.396 14:42:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:13.396 14:42:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:15.946 14:42:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:15.946 00:29:15.946 real 0m21.776s 00:29:15.946 user 0m49.399s 00:29:15.946 sys 0m9.862s 00:29:15.946 14:42:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:15.946 14:42:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:15.946 ************************************ 00:29:15.946 END TEST nvmf_target_disconnect 00:29:15.946 ************************************ 00:29:15.946 14:42:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:29:15.946 00:29:15.946 real 6m28.333s 00:29:15.946 user 11m13.785s 00:29:15.946 sys 2m12.343s 00:29:15.946 14:42:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:15.946 14:42:56 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.946 ************************************ 00:29:15.946 END TEST nvmf_host 00:29:15.946 ************************************ 00:29:15.946 14:42:56 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:29:15.946 14:42:56 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:29:15.946 14:42:56 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:29:15.946 14:42:56 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:29:15.946 14:42:56 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:15.946 14:42:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:15.946 ************************************ 00:29:15.946 START TEST nvmf_target_core_interrupt_mode 00:29:15.946 ************************************ 00:29:15.946 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:29:15.946 * Looking for test storage... 
00:29:15.946 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:29:15.946 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:15.946 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lcov --version 00:29:15.946 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:15.946 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:15.946 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:15.946 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:15.946 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:15.946 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:29:15.946 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:29:15.946 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:29:15.946 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:29:15.946 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:29:15.946 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:29:15.946 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:29:15.946 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:15.946 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:29:15.946 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:29:15.946 14:42:56 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:15.946 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:15.946 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:29:15.946 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:29:15.946 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:15.946 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:29:15.946 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:29:15.946 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:29:15.946 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:29:15.946 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:15.946 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:29:15.946 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:29:15.946 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:15.946 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:15.946 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:29:15.946 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:15.946 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:15.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:15.946 --rc 
genhtml_branch_coverage=1 00:29:15.946 --rc genhtml_function_coverage=1 00:29:15.946 --rc genhtml_legend=1 00:29:15.946 --rc geninfo_all_blocks=1 00:29:15.946 --rc geninfo_unexecuted_blocks=1 00:29:15.946 00:29:15.946 ' 00:29:15.946 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:15.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:15.946 --rc genhtml_branch_coverage=1 00:29:15.946 --rc genhtml_function_coverage=1 00:29:15.946 --rc genhtml_legend=1 00:29:15.946 --rc geninfo_all_blocks=1 00:29:15.946 --rc geninfo_unexecuted_blocks=1 00:29:15.946 00:29:15.946 ' 00:29:15.946 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:15.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:15.946 --rc genhtml_branch_coverage=1 00:29:15.946 --rc genhtml_function_coverage=1 00:29:15.946 --rc genhtml_legend=1 00:29:15.946 --rc geninfo_all_blocks=1 00:29:15.946 --rc geninfo_unexecuted_blocks=1 00:29:15.946 00:29:15.946 ' 00:29:15.946 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:15.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:15.946 --rc genhtml_branch_coverage=1 00:29:15.946 --rc genhtml_function_coverage=1 00:29:15.946 --rc genhtml_legend=1 00:29:15.946 --rc geninfo_all_blocks=1 00:29:15.946 --rc geninfo_unexecuted_blocks=1 00:29:15.946 00:29:15.946 ' 00:29:15.946 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:29:15.946 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:29:15.946 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:15.946 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:29:15.946 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:15.946 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:15.947 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:15.947 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:15.947 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:15.947 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:15.947 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:15.947 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:15.947 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:15.947 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:15.947 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:15.947 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:15.947 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:15.947 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:15.947 
14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:15.947 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:15.947 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:15.947 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:29:15.947 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:15.947 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:15.947 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:15.947 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.947 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.947 14:42:56 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.947 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:29:15.947 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.947 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:29:15.947 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:15.947 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:15.947 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:15.947 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:15.947 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:15.947 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:15.947 
14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:15.947 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:15.947 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:15.947 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:15.947 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:29:15.947 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:29:15.947 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:29:15.947 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:29:15.947 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:29:15.947 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:15.947 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:15.947 ************************************ 00:29:15.947 START TEST nvmf_abort 00:29:15.947 ************************************ 00:29:15.947 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:29:16.208 * Looking for test storage... 
00:29:16.208 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:16.208 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:16.208 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:29:16.208 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:16.208 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:16.209 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:16.209 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:16.209 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:16.209 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:29:16.209 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:29:16.209 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:29:16.209 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:29:16.209 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:29:16.209 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:29:16.209 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:29:16.209 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:16.209 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:29:16.209 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:29:16.209 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:16.209 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:16.209 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:29:16.209 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:29:16.209 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:16.209 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:29:16.209 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:29:16.209 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:29:16.209 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:29:16.209 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:16.209 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:29:16.209 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:29:16.209 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:16.209 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:16.209 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:29:16.209 14:42:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:16.209 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:16.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.209 --rc genhtml_branch_coverage=1 00:29:16.209 --rc genhtml_function_coverage=1 00:29:16.209 --rc genhtml_legend=1 00:29:16.209 --rc geninfo_all_blocks=1 00:29:16.209 --rc geninfo_unexecuted_blocks=1 00:29:16.209 00:29:16.209 ' 00:29:16.209 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:16.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.209 --rc genhtml_branch_coverage=1 00:29:16.209 --rc genhtml_function_coverage=1 00:29:16.209 --rc genhtml_legend=1 00:29:16.209 --rc geninfo_all_blocks=1 00:29:16.209 --rc geninfo_unexecuted_blocks=1 00:29:16.209 00:29:16.209 ' 00:29:16.209 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:16.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.209 --rc genhtml_branch_coverage=1 00:29:16.209 --rc genhtml_function_coverage=1 00:29:16.209 --rc genhtml_legend=1 00:29:16.209 --rc geninfo_all_blocks=1 00:29:16.209 --rc geninfo_unexecuted_blocks=1 00:29:16.209 00:29:16.209 ' 00:29:16.209 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:16.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.209 --rc genhtml_branch_coverage=1 00:29:16.209 --rc genhtml_function_coverage=1 00:29:16.209 --rc genhtml_legend=1 00:29:16.209 --rc geninfo_all_blocks=1 00:29:16.209 --rc geninfo_unexecuted_blocks=1 00:29:16.209 00:29:16.209 ' 00:29:16.209 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:16.209 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:29:16.209 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:16.209 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:16.209 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:16.209 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:16.209 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:16.209 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:16.209 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:16.209 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:16.209 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:16.209 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:16.209 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:16.209 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:16.209 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:16.209 14:42:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:16.209 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:16.209 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:16.209 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:16.209 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:29:16.209 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:16.209 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:16.209 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:16.209 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.209 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.209 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.209 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:29:16.209 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.209 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:29:16.209 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:16.209 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:16.209 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:16.209 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:16.209 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:16.209 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:16.209 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:16.209 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:16.209 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:16.209 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:16.209 14:42:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:16.209 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:29:16.209 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:29:16.209 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:16.209 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:16.209 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:16.209 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:16.209 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:16.209 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:16.210 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:16.210 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:16.210 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:16.210 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:29:16.210 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:29:16.210 14:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:24.354 14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:29:24.355 14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:29:24.355 14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:24.355 14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:24.355 14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:24.355 14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:24.355 14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:24.355 14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:29:24.355 14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:24.355 14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:29:24.355 14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:29:24.355 14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:29:24.355 14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:29:24.355 14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:29:24.355 14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:29:24.355 14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:24.355 14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:24.355 14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:24.355 14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:24.355 14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:24.355 14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:24.355 14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:24.355 14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:24.355 14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:24.355 14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:24.355 14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:24.355 14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:24.355 14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:24.355 14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:24.355 14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:24.355 14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:24.355 14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:24.355 14:43:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:24.355 14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:24.355 14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:24.355 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:24.355 14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:24.355 14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:24.355 14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:24.355 14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:24.355 14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:24.355 14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:24.355 14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:24.355 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:24.355 14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:24.355 14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:24.355 14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:24.355 14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:24.355 14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:24.355 
14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:24.355 14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:24.355 14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:24.355 14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:24.355 14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:24.355 14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:24.355 14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:24.355 14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:24.355 14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:24.355 14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:24.355 14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:24.355 Found net devices under 0000:31:00.0: cvl_0_0 00:29:24.355 14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:24.355 14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:24.355 14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:24.355 14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 
00:29:24.355 14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:24.355 14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:24.355 14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:24.355 14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:24.355 14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:24.355 Found net devices under 0000:31:00.1: cvl_0_1 00:29:24.355 14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:24.355 14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:24.355 14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # is_hw=yes 00:29:24.355 14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:24.355 14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:24.355 14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:29:24.355 14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:24.355 14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:24.355 14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:24.355 14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:24.355 14:43:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:24.355 14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:24.355 14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:24.355 14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:24.355 14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:24.355 14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:24.355 14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:24.355 14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:24.355 14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:24.355 14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:24.355 14:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:24.356 14:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:24.356 14:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:24.356 14:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:24.356 14:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:29:24.356 14:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:24.356 14:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:24.356 14:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:24.356 14:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:24.356 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:24.356 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.607 ms 00:29:24.356 00:29:24.356 --- 10.0.0.2 ping statistics --- 00:29:24.356 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:24.356 rtt min/avg/max/mdev = 0.607/0.607/0.607/0.000 ms 00:29:24.356 14:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:24.356 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:24.356 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.335 ms 00:29:24.356 00:29:24.356 --- 10.0.0.1 ping statistics --- 00:29:24.356 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:24.356 rtt min/avg/max/mdev = 0.335/0.335/0.335/0.000 ms 00:29:24.356 14:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:24.356 14:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@448 -- # return 0 00:29:24.356 14:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:24.356 14:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:24.356 14:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:24.356 14:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:24.356 14:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:24.356 14:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:24.356 14:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:24.356 14:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:29:24.356 14:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:24.356 14:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:24.356 14:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:24.356 14:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # 
nvmfpid=3584123 00:29:24.356 14:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # waitforlisten 3584123 00:29:24.356 14:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:29:24.356 14:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 3584123 ']' 00:29:24.356 14:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:24.356 14:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:24.356 14:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:24.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:24.356 14:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:24.356 14:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:24.356 [2024-10-14 14:43:04.360068] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:24.356 [2024-10-14 14:43:04.361523] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 
00:29:24.356 [2024-10-14 14:43:04.361581] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:24.356 [2024-10-14 14:43:04.454363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:24.356 [2024-10-14 14:43:04.506076] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:24.356 [2024-10-14 14:43:04.506124] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:24.356 [2024-10-14 14:43:04.506133] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:24.356 [2024-10-14 14:43:04.506140] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:24.356 [2024-10-14 14:43:04.506146] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:24.356 [2024-10-14 14:43:04.507973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:24.356 [2024-10-14 14:43:04.508133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:24.356 [2024-10-14 14:43:04.508154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:24.356 [2024-10-14 14:43:04.583308] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:24.356 [2024-10-14 14:43:04.583376] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:24.356 [2024-10-14 14:43:04.584077] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:29:24.356 [2024-10-14 14:43:04.584367] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:24.618 14:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:24.618 14:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:29:24.618 14:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:24.618 14:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:24.618 14:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:24.618 14:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:24.618 14:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:29:24.618 14:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.618 14:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:24.618 [2024-10-14 14:43:05.213167] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:24.618 14:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.618 14:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:29:24.618 14:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.618 14:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:29:24.618 Malloc0 00:29:24.618 14:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.618 14:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:24.618 14:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.618 14:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:24.618 Delay0 00:29:24.618 14:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.618 14:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:24.618 14:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.618 14:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:24.618 14:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.618 14:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:29:24.618 14:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.618 14:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:24.618 14:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.618 14:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:29:24.618 14:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.618 14:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:24.618 [2024-10-14 14:43:05.321103] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:24.618 14:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.618 14:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:24.618 14:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.618 14:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:24.618 14:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.618 14:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:29:24.879 [2024-10-14 14:43:05.436698] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:29:26.792 Initializing NVMe Controllers 00:29:26.792 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:29:26.792 controller IO queue size 128 less than required 00:29:26.792 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:29:26.792 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:29:26.792 Initialization complete. Launching workers. 
00:29:26.793 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28973 00:29:26.793 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 29030, failed to submit 66 00:29:26.793 success 28973, unsuccessful 57, failed 0 00:29:26.793 14:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:26.793 14:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.793 14:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:27.053 14:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:27.053 14:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:29:27.053 14:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:29:27.053 14:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:27.053 14:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:29:27.053 14:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:27.053 14:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:29:27.053 14:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:27.053 14:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:27.053 rmmod nvme_tcp 00:29:27.053 rmmod nvme_fabrics 00:29:27.053 rmmod nvme_keyring 00:29:27.053 14:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:27.053 14:43:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:29:27.053 14:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:29:27.053 14:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@515 -- # '[' -n 3584123 ']' 00:29:27.053 14:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # killprocess 3584123 00:29:27.053 14:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 3584123 ']' 00:29:27.053 14:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 3584123 00:29:27.053 14:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:29:27.053 14:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:27.053 14:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3584123 00:29:27.053 14:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:27.053 14:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:27.053 14:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3584123' 00:29:27.053 killing process with pid 3584123 00:29:27.053 14:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@969 -- # kill 3584123 00:29:27.053 14:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@974 -- # wait 3584123 00:29:27.315 14:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:27.315 14:43:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:27.315 14:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:27.315 14:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:29:27.315 14:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # iptables-save 00:29:27.315 14:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:27.315 14:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # iptables-restore 00:29:27.315 14:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:27.315 14:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:27.315 14:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:27.315 14:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:27.315 14:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:29.228 14:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:29.228 00:29:29.228 real 0m13.340s 00:29:29.228 user 0m10.926s 00:29:29.228 sys 0m6.859s 00:29:29.228 14:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:29.228 14:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:29.228 ************************************ 00:29:29.228 END TEST nvmf_abort 00:29:29.228 ************************************ 00:29:29.490 14:43:09 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:29:29.490 14:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:29:29.490 14:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:29.490 14:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:29.490 ************************************ 00:29:29.490 START TEST nvmf_ns_hotplug_stress 00:29:29.490 ************************************ 00:29:29.490 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:29:29.490 * Looking for test storage... 
00:29:29.490 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:29.490 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:29.490 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:29:29.490 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:29.490 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:29.490 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:29.490 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:29.490 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:29.490 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:29:29.490 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:29:29.490 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:29:29.490 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:29:29.490 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:29:29.490 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:29:29.490 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:29:29.490 14:43:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:29.490 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:29:29.490 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:29:29.490 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:29.490 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:29.490 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:29:29.490 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:29:29.490 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:29.490 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:29:29.490 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:29:29.490 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:29:29.490 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:29:29.490 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:29.490 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:29:29.490 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:29:29.490 14:43:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:29.490 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:29.490 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:29:29.490 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:29.491 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:29.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.491 --rc genhtml_branch_coverage=1 00:29:29.491 --rc genhtml_function_coverage=1 00:29:29.491 --rc genhtml_legend=1 00:29:29.491 --rc geninfo_all_blocks=1 00:29:29.491 --rc geninfo_unexecuted_blocks=1 00:29:29.491 00:29:29.491 ' 00:29:29.491 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:29.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.491 --rc genhtml_branch_coverage=1 00:29:29.491 --rc genhtml_function_coverage=1 00:29:29.491 --rc genhtml_legend=1 00:29:29.491 --rc geninfo_all_blocks=1 00:29:29.491 --rc geninfo_unexecuted_blocks=1 00:29:29.491 00:29:29.491 ' 00:29:29.491 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:29.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.491 --rc genhtml_branch_coverage=1 00:29:29.491 --rc genhtml_function_coverage=1 00:29:29.491 --rc genhtml_legend=1 00:29:29.491 --rc geninfo_all_blocks=1 00:29:29.491 --rc geninfo_unexecuted_blocks=1 00:29:29.491 00:29:29.491 ' 00:29:29.491 14:43:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:29.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.491 --rc genhtml_branch_coverage=1 00:29:29.491 --rc genhtml_function_coverage=1 00:29:29.491 --rc genhtml_legend=1 00:29:29.491 --rc geninfo_all_blocks=1 00:29:29.491 --rc geninfo_unexecuted_blocks=1 00:29:29.491 00:29:29.491 ' 00:29:29.491 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:29.491 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:29:29.491 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:29.491 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:29.491 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:29.491 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:29.491 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:29.491 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:29.491 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:29.491 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:29.491 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:29.491 14:43:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:29.491 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:29.491 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:29.491 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:29.491 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:29.491 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:29.491 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:29.491 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:29.491 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:29:29.491 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:29.491 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:29.491 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:29.491 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.491 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.491 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.491 
14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:29:29.491 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.491 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:29:29.491 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:29.491 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:29.753 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:29.753 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:29.753 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:29.753 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:29.753 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:29.753 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:29.753 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:29.753 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:29.753 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:29.753 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:29:29.753 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:29.753 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:29.753 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:29.753 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:29.753 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:29.753 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:29.753 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:29.753 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:29.753 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:29.753 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # 
gather_supported_nvmf_pci_devs 00:29:29.753 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:29:29.753 14:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:37.901 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:37.901 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:29:37.901 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:37.901 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:37.901 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:37.901 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:37.901 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:37.901 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:29:37.901 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:37.901 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:29:37.901 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:29:37.901 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:29:37.901 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:29:37.901 
14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:29:37.901 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:29:37.901 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:37.901 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:37.901 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:37.901 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:37.901 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:37.901 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:37.901 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:37.901 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:37.901 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:37.901 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:37.901 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:37.901 14:43:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:37.901 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:37.901 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:37.901 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:37.901 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:37.901 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:37.901 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:37.901 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:37.901 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:37.901 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:37.901 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:37.901 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:37.901 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:37.901 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:37.901 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:37.901 14:43:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:37.901 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:37.901 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:37.901 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:37.901 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:37.902 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:37.902 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:37.902 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:37.902 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:37.902 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:37.902 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:37.902 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:37.902 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:37.902 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:37.902 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:37.902 
14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:37.902 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:37.902 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:37.902 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:37.902 Found net devices under 0000:31:00.0: cvl_0_0 00:29:37.902 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:37.902 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:37.902 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:37.902 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:37.902 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:37.902 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:37.902 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:37.902 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:37.902 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:37.902 Found net devices under 0000:31:00.1: cvl_0_1 00:29:37.902 
14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:37.902 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:37.902 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:29:37.902 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:37.902 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:37.902 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:29:37.902 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:37.902 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:37.902 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:37.902 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:37.902 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:37.902 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:37.902 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:37.902 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:37.902 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:37.902 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:37.902 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:37.902 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:37.902 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:37.902 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:37.902 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:37.902 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:37.902 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:37.902 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:37.902 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:37.902 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:37.902 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:37.902 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:37.902 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:37.902 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:37.902 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.634 ms 00:29:37.902 00:29:37.902 --- 10.0.0.2 ping statistics --- 00:29:37.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:37.902 rtt min/avg/max/mdev = 0.634/0.634/0.634/0.000 ms 00:29:37.902 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:37.902 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:37.902 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.336 ms 00:29:37.902 00:29:37.902 --- 10.0.0.1 ping statistics --- 00:29:37.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:37.902 rtt min/avg/max/mdev = 0.336/0.336/0.336/0.000 ms 00:29:37.902 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:37.902 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # return 0 00:29:37.902 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:37.902 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:37.902 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:37.902 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:37.902 14:43:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:37.902 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:37.902 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:37.902 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:29:37.902 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:37.902 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:37.902 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:37.902 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # nvmfpid=3588960 00:29:37.902 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # waitforlisten 3588960 00:29:37.902 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:29:37.902 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 3588960 ']' 00:29:37.902 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:37.902 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:37.902 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:37.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:37.902 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:37.902 14:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:37.902 [2024-10-14 14:43:17.892954] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:37.902 [2024-10-14 14:43:17.894070] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:29:37.902 [2024-10-14 14:43:17.894120] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:37.902 [2024-10-14 14:43:17.985795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:37.902 [2024-10-14 14:43:18.037652] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:37.902 [2024-10-14 14:43:18.037700] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:37.902 [2024-10-14 14:43:18.037708] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:37.902 [2024-10-14 14:43:18.037715] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:37.902 [2024-10-14 14:43:18.037722] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
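The `nvmf_tcp_init` trace above (nvmf/common.sh@250-291) sets up the two-port TCP test bed: one `cvl` interface is moved into a fresh network namespace to act as the target, the other stays in the root namespace as the initiator, each gets a 10.0.0.x address, and TCP port 4420 is opened before a ping sanity check. A side-effect-free sketch of that sequence, with the interface names and addresses taken from the log (the `run` wrapper echoes instead of executing, which is an addition here for dry-running):

```shell
# Dry-run sketch of the nvmf_tcp_init steps recorded in the log.
# "run" only echoes each command so the sketch has no side effects;
# drop it (or set run to "sudo") to perform the real setup.
run() { echo "$@"; }

netns_setup() {
    ns=cvl_0_0_ns_spdk
    # Create the target namespace and move the target port into it
    run ip netns add "$ns"
    run ip link set cvl_0_0 netns "$ns"
    # Initiator gets 10.0.0.1 in the root namespace, target 10.0.0.2
    run ip addr add 10.0.0.1/24 dev cvl_0_1
    run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0
    # Bring both ends (and loopback in the namespace) up
    run ip link set cvl_0_1 up
    run ip netns exec "$ns" ip link set cvl_0_0 up
    run ip netns exec "$ns" ip link set lo up
    # Open the NVMe/TCP listener port, then verify reachability
    run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    run ping -c 1 10.0.0.2
}

netns_setup
```

With the real commands enabled, the final `ping` mirrors the two ping checks in the trace before `nvmf_tgt` is started inside the namespace via `NVMF_TARGET_NS_CMD`.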
00:29:37.902 [2024-10-14 14:43:18.039584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:37.902 [2024-10-14 14:43:18.039753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:37.902 [2024-10-14 14:43:18.039753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:37.902 [2024-10-14 14:43:18.115448] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:37.902 [2024-10-14 14:43:18.115501] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:37.903 [2024-10-14 14:43:18.116149] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:37.903 [2024-10-14 14:43:18.116445] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:38.164 14:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:38.164 14:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:29:38.164 14:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:38.164 14:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:38.164 14:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:38.164 14:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:38.164 14:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
00:29:38.164 14:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:38.425 [2024-10-14 14:43:18.916643] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:38.425 14:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:38.425 14:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:38.686 [2024-10-14 14:43:19.269220] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:38.686 14:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:38.947 14:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:29:38.947 Malloc0 00:29:39.209 14:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:39.209 Delay0 00:29:39.209 14:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:39.471 14:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:29:39.732 NULL1 00:29:39.732 14:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:29:39.732 14:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3589565 00:29:39.732 14:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3589565 00:29:39.732 14:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:29:39.732 14:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:39.992 14:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:40.253 14:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:29:40.253 14:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:29:40.253 true 00:29:40.253 14:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3589565 00:29:40.253 14:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:40.513 14:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:40.774 14:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:29:40.774 14:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:29:40.774 true 00:29:41.035 14:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3589565 00:29:41.035 14:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:41.035 14:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:41.296 14:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:29:41.296 14:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:29:41.557 true 00:29:41.557 14:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3589565 00:29:41.557 14:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:41.557 14:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:41.817 14:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:29:41.817 14:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:29:42.078 true 00:29:42.078 14:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3589565 00:29:42.078 14:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:42.340 14:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:42.340 14:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:29:42.340 14:43:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:29:42.602 true 00:29:42.602 14:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3589565 00:29:42.602 14:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:42.863 14:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:43.124 14:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:29:43.124 14:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:29:43.124 true 00:29:43.124 14:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3589565 00:29:43.124 14:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:43.385 14:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:43.645 14:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 
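The repeating RPC pattern in the trace (ns_hotplug_stress.sh@44-50, iterating `null_size` 1001, 1002, ...) is the hotplug stress loop itself: while `spdk_nvme_perf` runs, namespace 1 is hot-removed, the `Delay0` bdev is re-attached, and the `NULL1` bdev is grown by one block per pass. A standalone dry-run sketch of that loop, with the NQN, bdev names, and starting size taken from the log (the iteration count and the `echo` stand-in for `scripts/rpc.py` are assumptions here):

```shell
# Sketch of the namespace hotplug stress loop seen in the trace.
# Pass "echo" as $1 for a dry run, or the path to scripts/rpc.py
# to drive a live target; $2 is the number of iterations.
hotplug_stress() {
    rpc="${1:-echo}"
    iters="${2:-3}"
    nqn=nqn.2016-06.io.spdk:cnode1
    null_size=1000   # matches null_size=1000 at the loop start in the log
    i=0
    while [ "$i" -lt "$iters" ]; do
        # Hot-remove namespace 1, then re-attach the Delay0 bdev
        $rpc nvmf_subsystem_remove_ns "$nqn" 1
        $rpc nvmf_subsystem_add_ns "$nqn" Delay0
        # Grow the null bdev by one block each pass, as the log shows
        null_size=$((null_size + 1))
        $rpc bdev_null_resize NULL1 "$null_size"
        i=$((i + 1))
    done
}

hotplug_stress echo 2
```

The real script interleaves each iteration with `kill -0 $PERF_PID` (visible throughout the trace) to confirm the perf workload survived the hotplug churn.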
00:29:43.645 14:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:29:43.645 true 00:29:43.645 14:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3589565 00:29:43.645 14:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:43.905 14:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:44.164 14:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:29:44.164 14:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:29:44.164 true 00:29:44.425 14:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3589565 00:29:44.425 14:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:44.425 14:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:44.686 14:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1009 00:29:44.686 14:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:29:44.947 true 00:29:44.947 14:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3589565 00:29:44.947 14:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:44.947 14:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:45.208 14:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:29:45.208 14:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:29:45.469 true 00:29:45.469 14:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3589565 00:29:45.469 14:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:45.730 14:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:45.730 14:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:29:45.730 14:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:29:45.991 true 00:29:45.991 14:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3589565 00:29:45.991 14:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:46.251 14:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:46.251 14:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:29:46.251 14:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:29:46.512 true 00:29:46.512 14:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3589565 00:29:46.512 14:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:46.772 14:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:46.772 14:43:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:29:46.772 14:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:29:47.032 true 00:29:47.032 14:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3589565 00:29:47.032 14:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:47.293 14:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:47.552 14:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:29:47.552 14:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:29:47.552 true 00:29:47.552 14:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3589565 00:29:47.552 14:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:47.812 14:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
00:29:48.072 14:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:29:48.072 14:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:29:48.072 true 00:29:48.072 14:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3589565 00:29:48.072 14:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:48.332 14:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:48.592 14:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:29:48.592 14:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:29:48.592 true 00:29:48.852 14:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3589565 00:29:48.852 14:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:48.852 14:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:29:49.112 14:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:29:49.112 14:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:29:49.374 true 00:29:49.374 14:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3589565 00:29:49.374 14:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:49.374 14:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:49.634 14:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:29:49.634 14:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:29:49.896 true 00:29:49.896 14:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3589565 00:29:49.896 14:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:50.158 14:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:50.158 14:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:29:50.158 14:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:29:50.418 true 00:29:50.418 14:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3589565 00:29:50.418 14:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:50.680 14:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:50.680 14:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:29:50.680 14:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:29:50.941 true 00:29:50.941 14:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3589565 00:29:50.941 14:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:51.226 14:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:51.226 14:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:29:51.226 14:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:29:51.520 true 00:29:51.520 14:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3589565 00:29:51.520 14:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:51.842 14:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:51.842 14:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:29:51.842 14:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:29:52.121 true 00:29:52.121 14:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3589565 00:29:52.121 14:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:52.381 14:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:52.381 14:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:29:52.381 14:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:29:52.640 true 00:29:52.640 14:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3589565 00:29:52.640 14:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:52.901 14:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:52.901 14:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:29:52.901 14:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:29:53.163 true 00:29:53.163 14:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3589565 00:29:53.163 14:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:53.424 14:43:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:53.425 14:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:29:53.425 14:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:29:53.686 true 00:29:53.686 14:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3589565 00:29:53.686 14:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:53.947 14:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:54.208 14:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:29:54.208 14:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:29:54.208 true 00:29:54.208 14:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3589565 00:29:54.208 14:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:29:54.469 14:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:54.730 14:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:29:54.730 14:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:29:54.730 true 00:29:54.730 14:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3589565 00:29:54.730 14:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:54.990 14:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:55.250 14:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:29:55.250 14:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:29:55.250 true 00:29:55.250 14:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3589565 00:29:55.251 14:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:29:55.512 14:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:55.773 14:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:29:55.773 14:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:29:56.033 true 00:29:56.033 14:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3589565 00:29:56.033 14:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:56.033 14:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:56.291 14:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:29:56.291 14:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:29:56.550 true 00:29:56.550 14:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3589565 00:29:56.550 14:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:56.810 14:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:56.810 14:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:29:56.810 14:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:29:57.071 true 00:29:57.071 14:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3589565 00:29:57.071 14:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:57.331 14:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:57.331 14:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:29:57.331 14:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:29:57.591 true 00:29:57.591 14:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3589565 00:29:57.591 14:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:57.851 14:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:58.112 14:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:29:58.112 14:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:29:58.112 true 00:29:58.112 14:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3589565 00:29:58.112 14:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:58.372 14:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:58.633 14:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:29:58.633 14:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:29:58.633 true 00:29:58.633 14:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3589565 00:29:58.633 14:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:58.893 14:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:59.153 14:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:29:59.153 14:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:29:59.153 true 00:29:59.414 14:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3589565 00:29:59.414 14:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:59.414 14:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:59.675 14:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:29:59.675 14:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:29:59.936 true 00:29:59.936 14:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3589565 00:29:59.936 14:43:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:59.936 14:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:00.197 14:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:30:00.197 14:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:30:00.458 true 00:30:00.458 14:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3589565 00:30:00.458 14:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:00.717 14:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:00.717 14:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:30:00.717 14:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:30:00.977 true 00:30:00.977 14:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3589565 
00:30:00.977 14:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:01.237 14:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:01.237 14:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:30:01.237 14:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:30:01.497 true 00:30:01.497 14:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3589565 00:30:01.497 14:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:01.757 14:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:01.757 14:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:30:01.757 14:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:30:02.018 true 00:30:02.018 14:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # 
kill -0 3589565 00:30:02.018 14:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:02.278 14:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:02.539 14:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:30:02.539 14:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:30:02.539 true 00:30:02.539 14:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3589565 00:30:02.539 14:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:02.799 14:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:03.059 14:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:30:03.059 14:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:30:03.059 true 00:30:03.059 14:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 3589565 00:30:03.059 14:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:03.320 14:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:03.581 14:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:30:03.581 14:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:30:03.581 true 00:30:03.842 14:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3589565 00:30:03.842 14:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:03.842 14:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:04.104 14:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:30:04.104 14:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:30:04.104 true 00:30:04.365 14:43:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3589565 00:30:04.365 14:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:04.365 14:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:04.626 14:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:30:04.626 14:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:30:04.626 true 00:30:04.886 14:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3589565 00:30:04.886 14:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:04.886 14:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:05.146 14:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:30:05.146 14:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:30:05.406 true 
00:30:05.406 14:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3589565 00:30:05.406 14:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:05.406 14:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:05.667 14:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:30:05.667 14:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:30:05.928 true 00:30:05.928 14:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3589565 00:30:05.928 14:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:05.928 14:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:06.189 14:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:30:06.189 14:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 
00:30:06.449 true 00:30:06.449 14:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3589565 00:30:06.449 14:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:06.709 14:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:06.709 14:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:30:06.709 14:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:30:06.969 true 00:30:06.969 14:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3589565 00:30:06.969 14:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:07.230 14:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:07.230 14:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:30:07.230 14:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1050 00:30:07.490 true 00:30:07.490 14:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3589565 00:30:07.490 14:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:07.749 14:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:08.010 14:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:30:08.010 14:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:30:08.010 true 00:30:08.010 14:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3589565 00:30:08.010 14:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:08.270 14:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:08.532 14:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:30:08.532 14:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:30:08.532 true 00:30:08.532 14:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3589565 00:30:08.532 14:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:08.793 14:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:09.055 14:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:30:09.055 14:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:30:09.055 true 00:30:09.315 14:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3589565 00:30:09.315 14:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:09.315 14:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:09.577 14:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:30:09.577 14:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:30:09.839 true 00:30:09.839 14:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3589565 00:30:09.839 14:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:09.839 14:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:10.100 Initializing NVMe Controllers 00:30:10.100 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:10.100 Controller IO queue size 128, less than required. 00:30:10.100 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:10.100 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:10.100 Initialization complete. Launching workers. 
00:30:10.100 ========================================================
00:30:10.100 Latency(us)
00:30:10.100 Device Information : IOPS MiB/s Average min max
00:30:10.100 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 30236.30 14.76 4233.09 1485.83 10952.03
00:30:10.100 ========================================================
00:30:10.100 Total : 30236.30 14.76 4233.09 1485.83 10952.03
00:30:10.100
00:30:10.100 14:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055 00:30:10.100 14:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:30:10.361 true 00:30:10.361 14:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3589565 00:30:10.361 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3589565) - No such process 00:30:10.361 14:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3589565 00:30:10.361 14:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:10.361 14:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:10.622 14:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:30:10.622 14:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:30:10.622
14:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:30:10.622 14:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:10.622 14:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:30:10.883 null0 00:30:10.883 14:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:10.883 14:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:10.883 14:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:30:10.883 null1 00:30:10.883 14:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:10.883 14:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:10.883 14:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:30:11.145 null2 00:30:11.145 14:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:11.145 14:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:11.145 14:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:30:11.406 null3 00:30:11.406 14:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:11.406 14:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:11.406 14:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:30:11.406 null4 00:30:11.668 14:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:11.668 14:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:11.668 14:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:30:11.668 null5 00:30:11.668 14:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:11.668 14:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:11.668 14:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:30:11.928 null6 00:30:11.928 14:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:11.928 14:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:11.928 14:43:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:30:12.191 null7 00:30:12.191 14:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:12.191 14:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:12.191 14:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:30:12.191 14:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:12.191 14:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:12.191 14:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:12.191 14:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:12.191 14:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:30:12.191 14:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:30:12.191 14:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:12.191 14:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:12.191 14:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 
nqn.2016-06.io.spdk:cnode1 null0 00:30:12.191 14:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:12.191 14:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:12.191 14:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:12.191 14:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:30:12.191 14:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:30:12.191 14:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:12.191 14:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:12.191 14:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:12.191 14:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:30:12.191 14:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:12.191 14:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:12.191 14:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:30:12.191 14:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:30:12.191 14:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:12.191 14:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:12.191 14:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:12.191 14:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:30:12.191 14:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:12.191 14:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:12.191 14:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:30:12.191 14:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:30:12.191 14:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:12.191 14:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:12.191 14:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:12.191 14:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:30:12.191 14:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:12.191 14:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:30:12.191 14:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:12.191 14:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:12.191 14:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:12.191 14:43:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:12.191 14:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:12.191 14:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:12.191 14:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:12.191 14:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:12.191 14:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:30:12.191 14:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:30:12.191 14:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:12.191 14:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:12.191 14:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:12.191 14:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:30:12.191 14:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:12.191 14:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:12.191 14:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:30:12.191 14:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:30:12.191 14:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:12.191 14:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:12.191 14:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:12.191 14:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:30:12.191 14:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:12.191 14:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:12.191 14:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:30:12.191 14:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3595760 3595761 3595763 3595765 3595767 3595770 3595771 3595773 00:30:12.191 14:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:30:12.191 14:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:12.191 14:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:12.191 14:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:12.191 14:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:12.453 14:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:12.453 14:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 6 00:30:12.453 14:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:12.453 14:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:12.453 14:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:12.453 14:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:12.453 14:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:12.453 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:12.453 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:12.453 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:12.453 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:12.453 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:12.453 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:12.453 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:12.453 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:12.453 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:12.453 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:12.453 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:12.453 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:12.453 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:12.453 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:12.453 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:12.715 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:30:12.715 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:12.715 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:12.715 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:12.715 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:12.715 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:12.715 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:12.715 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:12.715 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:12.715 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:12.715 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:12.715 14:43:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:12.715 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:12.715 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:12.715 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:12.715 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:12.715 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:12.976 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:12.976 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:12.976 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:12.976 14:43:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:12.976 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:12.976 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:30:12.976 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:12.976 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:12.976 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:30:12.976 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:12.976 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:12.976 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:30:12.976 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:12.976 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:12.976 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:30:12.976 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:12.976 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:12.976 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:30:12.976 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:12.976 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:12.976 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:30:12.976 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:12.976 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:12.976 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:30:12.976 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:12.976 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:30:12.976 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:30:12.976 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:30:12.977 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:30:12.977 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:30:13.237 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:30:13.237 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:30:13.237 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:13.237 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:13.237 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:30:13.237 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:13.237 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:13.237 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:30:13.237 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:13.237 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:13.237 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:30:13.237 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:13.237 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:13.237 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:30:13.237 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:13.237 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:13.237 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:30:13.237 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:13.237 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:13.237 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:30:13.237 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:13.237 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:13.237 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:30:13.237 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:13.237 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:13.237 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:30:13.498 14:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:13.499 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:30:13.499 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:30:13.499 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:30:13.499 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:30:13.499 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:30:13.499 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:30:13.499 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:30:13.499 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:13.499 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:13.499 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:30:13.499 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:13.499 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:13.499 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:30:13.499 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:13.499 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:13.499 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:30:13.760 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:13.760 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:13.760 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:30:13.760 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:13.760 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:13.760 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:30:13.760 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:13.760 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:13.760 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:13.760 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:30:13.760 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:13.760 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:30:13.760 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:13.760 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:13.760 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:30:13.760 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:13.760 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:30:13.760 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:30:13.760 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:30:13.760 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:30:14.021 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:30:14.021 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:14.021 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:14.021 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:30:14.021 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:30:14.021 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:14.021 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:14.021 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:30:14.021 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:14.021 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:14.021 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:30:14.021 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:30:14.021 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:14.021 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:14.021 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:14.021 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:14.021 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:30:14.021 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:30:14.021 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:14.021 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:14.021 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:14.021 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:30:14.021 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:30:14.021 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:14.021 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:14.021 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:30:14.282 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:30:14.282 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:14.282 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:14.283 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:30:14.283 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:30:14.283 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:30:14.283 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:14.283 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:14.283 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:30:14.283 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:30:14.283 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:14.283 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:14.283 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:30:14.283 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:14.283 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:14.283 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:30:14.283 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:30:14.283 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:30:14.283 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:14.283 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:14.283 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:30:14.283 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:14.283 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:14.283 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:30:14.283 14:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:14.543 14:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:14.544 14:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:14.544 14:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:30:14.544 14:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:30:14.544 14:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:14.544 14:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:14.544 14:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:30:14.544 14:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:30:14.544 14:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:14.544 14:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:14.544 14:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:30:14.544 14:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:30:14.544 14:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:14.544 14:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:14.544 14:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:30:14.544 14:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:30:14.544 14:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:30:14.805 14:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:14.805 14:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:14.805 14:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:30:14.805 14:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:14.805 14:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:14.805 14:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:30:14.805 14:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:30:14.805 14:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:30:14.805 14:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:14.805 14:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:14.805 14:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:30:14.805 14:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:14.805 14:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:14.805 14:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:14.805 14:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:30:14.805 14:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:14.805 14:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:14.805 14:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:30:14.805 14:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:30:14.805 14:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:30:14.805 14:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:30:14.805 14:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:14.805 14:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:14.805 14:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:30:14.805 14:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:14.805 14:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:14.805 14:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:30:15.066 14:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:30:15.066 14:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:15.066 14:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:15.066 14:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:30:15.066 14:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:15.066 14:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:15.066 14:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:30:15.066 14:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:15.066 14:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:15.066 14:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:30:15.066 14:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:15.066 14:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:15.066 14:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:30:15.066 14:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:30:15.066 14:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:30:15.066 14:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:15.066 14:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:30:15.066 14:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:30:15.066 14:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:30:15.066 14:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:30:15.328 14:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:30:15.328 14:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i
)) 00:30:15.328 14:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:15.328 14:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:15.328 14:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:15.328 14:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:15.328 14:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:15.328 14:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:15.328 14:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:15.328 14:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:15.328 14:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:15.328 14:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:15.328 14:43:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:15.328 14:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:15.328 14:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:15.328 14:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:15.328 14:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:15.328 14:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:15.328 14:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:15.328 14:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:15.328 14:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:15.328 14:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:15.328 14:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:15.328 14:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:15.328 14:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:15.328 14:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:15.328 14:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:15.589 14:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:15.589 14:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:15.589 14:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:15.589 14:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:15.589 14:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:15.589 14:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:15.589 14:43:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:15.589 14:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:15.589 14:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:15.589 14:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:15.589 14:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:15.589 14:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:15.589 14:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:15.589 14:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:15.589 14:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:15.589 14:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:15.589 14:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:15.850 14:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:15.850 14:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:15.850 14:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:15.850 14:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:15.850 14:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:15.850 14:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:15.850 14:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:15.850 14:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:15.850 14:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:15.850 14:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:15.850 14:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:15.850 
14:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:15.850 14:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:15.850 14:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:15.850 14:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:15.850 14:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:15.850 14:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:15.850 14:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:16.111 14:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:16.111 14:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:16.111 14:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:30:16.111 14:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:30:16.111 14:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:30:16.111 14:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:30:16.111 14:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
00:30:16.111 14:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:30:16.111 14:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:16.111 14:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:16.111 rmmod nvme_tcp 00:30:16.111 rmmod nvme_fabrics 00:30:16.111 rmmod nvme_keyring 00:30:16.111 14:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:16.111 14:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:30:16.111 14:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:30:16.111 14:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@515 -- # '[' -n 3588960 ']' 00:30:16.111 14:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # killprocess 3588960 00:30:16.111 14:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 3588960 ']' 00:30:16.111 14:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 3588960 00:30:16.111 14:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:30:16.111 14:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:16.111 14:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3588960 00:30:16.111 14:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 
00:30:16.111 14:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:16.111 14:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3588960' 00:30:16.111 killing process with pid 3588960 00:30:16.111 14:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 3588960 00:30:16.111 14:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 3588960 00:30:16.372 14:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:30:16.372 14:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:30:16.372 14:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:30:16.372 14:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:30:16.372 14:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-save 00:30:16.372 14:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:30:16.372 14:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-restore 00:30:16.372 14:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:16.372 14:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:16.372 14:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:16.372 
14:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:16.372 14:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:18.920 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:18.920 00:30:18.920 real 0m49.066s 00:30:18.920 user 3m1.366s 00:30:18.920 sys 0m22.872s 00:30:18.920 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:18.920 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:18.920 ************************************ 00:30:18.920 END TEST nvmf_ns_hotplug_stress 00:30:18.920 ************************************ 00:30:18.920 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:30:18.920 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:30:18.920 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:18.920 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:18.920 ************************************ 00:30:18.920 START TEST nvmf_delete_subsystem 00:30:18.920 ************************************ 00:30:18.920 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:30:18.920 * Looking for test storage... 
00:30:18.920 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:18.920 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:18.920 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:30:18.920 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:18.920 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:18.920 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:18.920 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:18.920 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:18.920 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:30:18.920 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:30:18.920 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:30:18.920 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:30:18.920 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:30:18.920 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:30:18.920 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:30:18.920 14:43:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:18.920 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:30:18.920 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:30:18.920 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:18.920 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:18.920 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:30:18.920 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:30:18.920 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:18.920 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:30:18.920 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:30:18.920 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:30:18.920 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:30:18.921 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:18.921 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:30:18.921 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:30:18.921 14:43:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:18.921 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:18.921 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:30:18.921 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:18.921 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:18.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:18.921 --rc genhtml_branch_coverage=1 00:30:18.921 --rc genhtml_function_coverage=1 00:30:18.921 --rc genhtml_legend=1 00:30:18.921 --rc geninfo_all_blocks=1 00:30:18.921 --rc geninfo_unexecuted_blocks=1 00:30:18.921 00:30:18.921 ' 00:30:18.921 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:18.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:18.921 --rc genhtml_branch_coverage=1 00:30:18.921 --rc genhtml_function_coverage=1 00:30:18.921 --rc genhtml_legend=1 00:30:18.921 --rc geninfo_all_blocks=1 00:30:18.921 --rc geninfo_unexecuted_blocks=1 00:30:18.921 00:30:18.921 ' 00:30:18.921 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:18.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:18.921 --rc genhtml_branch_coverage=1 00:30:18.921 --rc genhtml_function_coverage=1 00:30:18.921 --rc genhtml_legend=1 00:30:18.921 --rc geninfo_all_blocks=1 00:30:18.921 --rc geninfo_unexecuted_blocks=1 00:30:18.921 00:30:18.921 ' 00:30:18.921 14:43:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:18.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:18.921 --rc genhtml_branch_coverage=1 00:30:18.921 --rc genhtml_function_coverage=1 00:30:18.921 --rc genhtml_legend=1 00:30:18.921 --rc geninfo_all_blocks=1 00:30:18.921 --rc geninfo_unexecuted_blocks=1 00:30:18.921 00:30:18.921 ' 00:30:18.921 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:18.921 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:30:18.921 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:18.921 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:18.921 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:18.921 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:18.921 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:18.921 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:18.921 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:18.921 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:18.921 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:18.921 14:43:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:18.921 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:18.921 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:18.921 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:18.921 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:18.921 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:18.921 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:18.921 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:18.921 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:30:18.921 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:18.921 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:18.921 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:18.921 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.921 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.921 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.921 
14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:30:18.921 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.921 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:30:18.921 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:18.921 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:18.921 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:18.921 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:18.921 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:18.921 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:18.921 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:18.921 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:18.921 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:18.921 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:18.921 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:30:18.921 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:18.921 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:18.921 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:30:18.921 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:30:18.921 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:18.921 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:18.921 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:18.921 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:18.921 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:30:18.921 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:30:18.921 14:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:30:18.921 14:43:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:27.065 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:27.065 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:30:27.065 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:27.065 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:27.065 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:27.065 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:27.065 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:27.065 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:30:27.065 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:27.065 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:30:27.065 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:30:27.065 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:30:27.065 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:30:27.065 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:30:27.065 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@322 -- # local -ga mlx 00:30:27.065 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:27.065 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:27.065 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:27.065 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:27.065 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:27.065 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:27.065 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:27.065 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:27.065 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:27.065 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:27.065 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:27.065 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:27.065 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:27.065 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:27.065 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:27.065 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:27.065 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:27.065 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:27.065 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:27.065 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:27.065 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:27.065 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:27.065 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:27.065 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:27.065 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:27.065 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:27.065 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:27.065 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 
0000:31:00.1 (0x8086 - 0x159b)' 00:30:27.065 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:27.065 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:27.065 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:27.065 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:27.065 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:27.065 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:27.065 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:27.065 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:27.065 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:27.065 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:27.065 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:27.065 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:27.065 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:27.065 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:27.065 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:27.065 14:44:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:27.065 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:27.065 Found net devices under 0000:31:00.0: cvl_0_0 00:30:27.065 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:27.065 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:27.065 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:27.065 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:27.065 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:27.066 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:27.066 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:27.066 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:27.066 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:27.066 Found net devices under 0000:31:00.1: cvl_0_1 00:30:27.066 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:27.066 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:30:27.066 14:44:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # is_hw=yes 00:30:27.066 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:30:27.066 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:30:27.066 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:30:27.066 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:27.066 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:27.066 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:27.066 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:27.066 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:27.066 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:27.066 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:27.066 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:27.066 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:27.066 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:27.066 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:27.066 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:27.066 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:27.066 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:27.066 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:27.066 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:27.066 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:27.066 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:27.066 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:27.066 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:27.066 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:27.066 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:27.066 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 
00:30:27.066 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:27.066 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.576 ms 00:30:27.066 00:30:27.066 --- 10.0.0.2 ping statistics --- 00:30:27.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:27.066 rtt min/avg/max/mdev = 0.576/0.576/0.576/0.000 ms 00:30:27.066 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:27.066 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:27.066 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:30:27.066 00:30:27.066 --- 10.0.0.1 ping statistics --- 00:30:27.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:27.066 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:30:27.066 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:27.066 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # return 0 00:30:27.066 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:30:27.066 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:27.066 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:30:27.066 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:30:27.066 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:27.066 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:30:27.066 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@500 -- # modprobe nvme-tcp 00:30:27.066 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:30:27.066 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:30:27.066 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:27.066 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:27.066 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # nvmfpid=3600994 00:30:27.066 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # waitforlisten 3600994 00:30:27.066 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:30:27.066 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 3600994 ']' 00:30:27.066 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:27.066 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:27.066 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:27.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:27.066 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:27.066 14:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:27.066 [2024-10-14 14:44:06.672830] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:27.066 [2024-10-14 14:44:06.674658] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:30:27.066 [2024-10-14 14:44:06.674729] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:27.066 [2024-10-14 14:44:06.745910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:27.066 [2024-10-14 14:44:06.780306] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:27.066 [2024-10-14 14:44:06.780342] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:27.066 [2024-10-14 14:44:06.780353] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:27.066 [2024-10-14 14:44:06.780361] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:27.066 [2024-10-14 14:44:06.780368] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:27.066 [2024-10-14 14:44:06.781540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:27.066 [2024-10-14 14:44:06.781542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:27.066 [2024-10-14 14:44:06.835842] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:30:27.066 [2024-10-14 14:44:06.836404] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:27.066 [2024-10-14 14:44:06.836736] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:27.066 14:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:27.066 14:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:30:27.066 14:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:30:27.066 14:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:27.066 14:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:27.066 14:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:27.066 14:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:27.066 14:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:27.066 14:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:27.066 [2024-10-14 14:44:07.482126] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:27.066 14:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:27.066 14:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:27.066 14:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:27.066 14:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:27.066 14:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:27.066 14:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:27.066 14:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:27.066 14:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:27.066 [2024-10-14 14:44:07.510403] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:27.066 14:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:27.066 14:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:30:27.067 14:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:27.067 14:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:27.067 NULL1 00:30:27.067 14:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:27.067 14:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd 
bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:27.067 14:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:27.067 14:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:27.067 Delay0 00:30:27.067 14:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:27.067 14:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:27.067 14:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:27.067 14:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:27.067 14:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:27.067 14:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3601056 00:30:27.067 14:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:30:27.067 14:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:30:27.067 [2024-10-14 14:44:07.600804] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:30:28.981 14:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:28.981 14:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:28.981 14:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:29.242 Read completed with error (sct=0, sc=8) 00:30:29.242 Read completed with error (sct=0, sc=8) 00:30:29.242 starting I/O failed: -6 00:30:29.242 Read completed with error (sct=0, sc=8) 00:30:29.242 Read completed with error (sct=0, sc=8) 00:30:29.242 Read completed with error (sct=0, sc=8) 00:30:29.242 Read completed with error (sct=0, sc=8) 00:30:29.242 starting I/O failed: -6 00:30:29.242 Read completed with error (sct=0, sc=8) 00:30:29.242 Read completed with error (sct=0, sc=8) 00:30:29.242 Read completed with error (sct=0, sc=8) 00:30:29.242 Write completed with error (sct=0, sc=8) 00:30:29.242 starting I/O failed: -6 00:30:29.242 Read completed with error (sct=0, sc=8) 00:30:29.242 Read completed with error (sct=0, sc=8) 00:30:29.242 Read completed with error (sct=0, sc=8) 00:30:29.242 Read completed with error (sct=0, sc=8) 00:30:29.242 starting I/O failed: -6 00:30:29.242 Write completed with error (sct=0, sc=8) 00:30:29.242 Read completed with error (sct=0, sc=8) 00:30:29.242 Write completed with error (sct=0, sc=8) 00:30:29.242 Read completed with error (sct=0, sc=8) 00:30:29.242 starting I/O failed: -6 00:30:29.242 Read completed with error (sct=0, sc=8) 00:30:29.242 Read completed with error (sct=0, sc=8) 00:30:29.242 Write completed with error (sct=0, sc=8) 00:30:29.242 Read completed with error (sct=0, sc=8) 00:30:29.242 starting I/O failed: -6 00:30:29.242 Read completed with error (sct=0, sc=8) 00:30:29.242 Read completed with error (sct=0, sc=8) 00:30:29.242 Read completed with error (sct=0, sc=8) 
00:30:29.242 Write completed with error (sct=0, sc=8)
00:30:29.242 starting I/O failed: -6
00:30:29.242 Read completed with error (sct=0, sc=8)
[log condensed: several hundred further "Read/Write completed with error (sct=0, sc=8)" completions and "starting I/O failed: -6" entries repeat between 00:30:29.242 and 00:30:30.186; the unique diagnostics from that span are kept below]
00:30:29.242 [2024-10-14 14:44:09.801510] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd06390 is same with the state(6) to be set
00:30:29.242 [2024-10-14 14:44:09.803341] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05fd0 is same with the state(6) to be set
00:30:29.242 [2024-10-14 14:44:09.807774] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f35c000d450 is same with the state(6) to be set
00:30:30.185 [2024-10-14 14:44:10.782174] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd076b0 is same with the state(6) to be set
00:30:30.186 [2024-10-14 14:44:10.804782] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd061b0 is same with the state(6) to be set
00:30:30.186 [2024-10-14 14:44:10.805088] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd066c0 is same with the state(6) to be set
00:30:30.186 [2024-10-14 14:44:10.810239] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f35c000cfe0 is same with the state(6) to be set
00:30:30.186 [2024-10-14 14:44:10.810398] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f35c000d780 is same with the state(6) to be set
00:30:30.186 Initializing NVMe Controllers
00:30:30.186 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:30:30.186 Controller IO queue size 128, less than required.
00:30:30.186 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:30.186 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:30:30.186 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:30:30.186 Initialization complete. Launching workers.
00:30:30.186 ========================================================
00:30:30.186 Latency(us)
00:30:30.186 Device Information                                                        :     IOPS   MiB/s    Average        min        max
00:30:30.186 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2 :   161.48    0.08  913519.55     643.46 1006440.55
00:30:30.186 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3 :   160.48    0.08  917711.62     293.28 1010464.95
00:30:30.186 ========================================================
00:30:30.186 Total                                                                     :   321.96    0.16  915609.10     293.28 1010464.95
00:30:30.186
00:30:30.186 [2024-10-14 14:44:10.810946] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd076b0 (9): Bad file descriptor
00:30:30.186 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:30:30.186 14:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:30.186 14:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:30:30.186 14:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3601056 00:30:30.186 14:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:30:30.758 14:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:30:30.758 14:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3601056 00:30:30.758 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3601056) - No such process 00:30:30.758 14:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3601056 00:30:30.758 14:44:11
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:30:30.758 14:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3601056 00:30:30.758 14:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:30:30.758 14:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:30.758 14:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:30:30.758 14:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:30.758 14:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 3601056 00:30:30.758 14:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:30:30.758 14:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:30.758 14:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:30.758 14:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:30.758 14:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:30.758 14:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:30.758 14:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 
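[editor's note: the latency table above reports a Total row for the two worker cores. As a side sketch (not part of the test output), total IOPS is the per-core sum and the Total average latency is the IOPS-weighted mean of the per-core averages; `total_row` is a hypothetical helper name, using awk for floating-point math:]

```shell
# Hypothetical cross-check of an spdk_nvme_perf "Total" row.
# Args: iops1 avg1 iops2 avg2 -> prints "total_iops weighted_avg_us".
total_row() {
    awk -v i1="$1" -v a1="$2" -v i2="$3" -v a2="$4" 'BEGIN {
        t = i1 + i2                                  # total IOPS = per-core sum
        printf "%.2f %.2f\n", t, (i1*a1 + i2*a2)/t   # IOPS-weighted mean latency
    }'
}

# Per-core numbers from the table above (cores 2 and 3):
total_row 161.48 913519.55 160.48 917711.62
```

[the result agrees with the table's Total row to within rounding, since the table's own Total is computed from unrounded samples]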
00:30:30.758 14:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:30.758 14:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:30.758 14:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:30.758 14:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:30.758 [2024-10-14 14:44:11.346592] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:30.758 14:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:30.758 14:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:30.758 14:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:30.758 14:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:30.758 14:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:30.758 14:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3601848 00:30:30.758 14:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:30:30.758 14:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 
trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:30:30.758 14:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3601848 00:30:30.758 14:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:30.758 [2024-10-14 14:44:11.414853] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:30:31.329 14:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:31.329 14:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3601848 00:30:31.329 14:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:31.900 14:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:31.900 14:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3601848 00:30:31.900 14:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:32.161 14:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:32.161 14:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3601848 00:30:32.161 14:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:32.733 14:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- 
# (( delay++ > 20 )) 00:30:32.733 14:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3601848 00:30:32.733 14:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:33.305 14:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:33.305 14:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3601848 00:30:33.305 14:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:33.877 14:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:33.877 14:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3601848 00:30:33.877 14:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:33.877 Initializing NVMe Controllers 00:30:33.877 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:33.877 Controller IO queue size 128, less than required. 00:30:33.877 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:33.877 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:30:33.877 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:30:33.877 Initialization complete. Launching workers. 
00:30:33.877 ========================================================
00:30:33.877 Latency(us)
00:30:33.877 Device Information                                                        :     IOPS   MiB/s     Average         min         max
00:30:33.877 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2 :   128.00    0.06  1003070.64  1000212.15  1042020.15
00:30:33.877 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3 :   128.00    0.06  1005448.61  1000192.22  1043000.95
00:30:33.877 ========================================================
00:30:33.877 Total                                                                     :   256.00    0.12  1004259.63  1000192.22  1043000.95
00:30:33.877
00:30:34.450 14:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:34.450 14:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3601848 00:30:34.450 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3601848) - No such process 00:30:34.450 14:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3601848 00:30:34.450 14:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:30:34.450 14:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:30:34.450 14:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:30:34.450 14:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:30:34.450 14:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:34.450 14:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:30:34.450 14:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem --
nvmf/common.sh@125 -- # for i in {1..20} 00:30:34.450 14:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:34.450 rmmod nvme_tcp 00:30:34.450 rmmod nvme_fabrics 00:30:34.450 rmmod nvme_keyring 00:30:34.450 14:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:34.450 14:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:30:34.450 14:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:30:34.450 14:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@515 -- # '[' -n 3600994 ']' 00:30:34.450 14:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # killprocess 3600994 00:30:34.450 14:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 3600994 ']' 00:30:34.450 14:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 3600994 00:30:34.450 14:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:30:34.450 14:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:34.450 14:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3600994 00:30:34.450 14:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:34.450 14:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:34.450 14:44:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3600994' 00:30:34.450 killing process with pid 3600994 00:30:34.450 14:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 3600994 00:30:34.450 14:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 3600994 00:30:34.450 14:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:30:34.450 14:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:30:34.450 14:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:30:34.451 14:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:30:34.451 14:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-save 00:30:34.451 14:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:30:34.451 14:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-restore 00:30:34.451 14:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:34.451 14:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:34.451 14:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:34.451 14:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:34.451 14:44:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:36.999 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:36.999 00:30:36.999 real 0m18.087s 00:30:36.999 user 0m26.437s 00:30:36.999 sys 0m7.335s 00:30:36.999 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:36.999 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:36.999 ************************************ 00:30:36.999 END TEST nvmf_delete_subsystem 00:30:36.999 ************************************ 00:30:36.999 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:30:36.999 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:30:36.999 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:36.999 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:36.999 ************************************ 00:30:36.999 START TEST nvmf_host_management 00:30:36.999 ************************************ 00:30:36.999 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:30:36.999 * Looking for test storage... 
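[editor's note: the delete_subsystem trace above repeatedly probes the perf PID with `kill -0` and `sleep 0.5` until the probe fails with `kill: (3601848) - No such process`. A minimal standalone sketch of that polling pattern, assuming a `wait_for_exit` helper name (not SPDK's actual function):]

```shell
# Poll a PID with `kill -0` (signal 0 = existence check only) until the
# process is gone, giving up after 20 retries (~10s at 0.5s per poll),
# mirroring the loop traced in delete_subsystem.sh above.
wait_for_exit() {
    pid=$1
    delay=0
    while kill -0 "$pid" 2>/dev/null; do
        delay=$((delay + 1))
        if [ "$delay" -gt 20 ]; then
            return 1            # still alive after the retry budget
        fi
        sleep 0.5
    done
    return 0                    # kill -0 failed: the process has exited
}
```

[the trace's "No such process" line is this probe failing once the perf process has terminated]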
00:30:36.999 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:36.999 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:36.999 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:30:36.999 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:36.999 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:36.999 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:36.999 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:36.999 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:36.999 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:30:36.999 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:30:36.999 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:30:36.999 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:30:36.999 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:30:36.999 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:30:36.999 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:30:36.999 14:44:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:36.999 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:30:36.999 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:30:36.999 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:36.999 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:36.999 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:30:36.999 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:30:36.999 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:36.999 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:30:36.999 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:30:36.999 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:30:36.999 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:30:36.999 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:36.999 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:30:36.999 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:30:36.999 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:36.999 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:36.999 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:30:36.999 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:36.999 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:36.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.999 --rc genhtml_branch_coverage=1 00:30:36.999 --rc genhtml_function_coverage=1 00:30:36.999 --rc genhtml_legend=1 00:30:36.999 --rc geninfo_all_blocks=1 00:30:36.999 --rc geninfo_unexecuted_blocks=1 00:30:36.999 00:30:36.999 ' 00:30:36.999 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:36.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.999 --rc genhtml_branch_coverage=1 00:30:36.999 --rc genhtml_function_coverage=1 00:30:36.999 --rc genhtml_legend=1 00:30:36.999 --rc geninfo_all_blocks=1 00:30:36.999 --rc geninfo_unexecuted_blocks=1 00:30:36.999 00:30:36.999 ' 00:30:36.999 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:36.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.999 --rc genhtml_branch_coverage=1 00:30:36.999 --rc genhtml_function_coverage=1 00:30:36.999 --rc genhtml_legend=1 00:30:36.999 --rc geninfo_all_blocks=1 00:30:36.999 --rc geninfo_unexecuted_blocks=1 00:30:36.999 00:30:36.999 ' 00:30:36.999 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:36.999 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:36.999 --rc genhtml_branch_coverage=1 00:30:36.999 --rc genhtml_function_coverage=1 00:30:36.999 --rc genhtml_legend=1 00:30:36.999 --rc geninfo_all_blocks=1 00:30:36.999 --rc geninfo_unexecuted_blocks=1 00:30:36.999 00:30:36.999 ' 00:30:36.999 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:36.999 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:30:36.999 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:36.999 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:36.999 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:36.999 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:36.999 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:36.999 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:36.999 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:36.999 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:36.999 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:36.999 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:36.999 14:44:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:36.999 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:36.999 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:36.999 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:36.999 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:36.999 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:36.999 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:36.999 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:30:37.000 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:37.000 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:37.000 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:37.000 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:37.000 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:37.000 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:37.000 
14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:30:37.000 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:37.000 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:30:37.000 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:37.000 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:37.000 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:37.000 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:37.000 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:37.000 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:37.000 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:37.000 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:30:37.000 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:37.000 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:37.000 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:37.000 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:37.000 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:30:37.000 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:37.000 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:37.000 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # prepare_net_devs 00:30:37.000 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@436 -- # local -g is_hw=no 00:30:37.000 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:37.000 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:37.000 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:37.000 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:37.000 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:30:37.000 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:30:37.000 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:30:37.000 14:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:45.148 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:45.148 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:30:45.148 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:45.148 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:45.148 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:45.148 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:45.148 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:45.148 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:30:45.148 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:45.148 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:30:45.148 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:30:45.148 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:30:45.148 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:30:45.148 
14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:30:45.148 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:30:45.148 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:45.148 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:45.148 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:45.148 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:45.148 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:45.148 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:45.148 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:45.148 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:45.148 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:45.148 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:45.148 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:45.148 14:44:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:45.148 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:45.148 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:45.148 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:45.148 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:45.149 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:45.149 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:45.149 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:45.149 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:45.149 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:45.149 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:45.149 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:45.149 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:45.149 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:45.149 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:45.149 14:44:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:45.149 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:45.149 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:45.149 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:45.149 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:45.149 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:45.149 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:45.149 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:45.149 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:45.149 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:45.149 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:45.149 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:45.149 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:45.149 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:45.149 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:45.149 14:44:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:45.149 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:45.149 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:45.149 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:45.149 Found net devices under 0000:31:00.0: cvl_0_0 00:30:45.149 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:45.149 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:45.149 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:45.149 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:45.149 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:45.149 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:45.149 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:45.149 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:45.149 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:45.149 Found net devices under 0000:31:00.1: cvl_0_1 00:30:45.149 14:44:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:45.149 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:30:45.149 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # is_hw=yes 00:30:45.149 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:30:45.149 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:30:45.149 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:30:45.149 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:45.149 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:45.149 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:45.149 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:45.149 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:45.149 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:45.149 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:45.149 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:45.149 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:30:45.149 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:45.149 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:45.149 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:45.149 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:45.149 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:45.149 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:45.149 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:45.149 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:45.149 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:45.149 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:45.149 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:45.149 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:45.149 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:45.149 14:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:45.149 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:45.149 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.684 ms 00:30:45.149 00:30:45.149 --- 10.0.0.2 ping statistics --- 00:30:45.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:45.149 rtt min/avg/max/mdev = 0.684/0.684/0.684/0.000 ms 00:30:45.149 14:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:45.149 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:45.149 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.334 ms 00:30:45.149 00:30:45.149 --- 10.0.0.1 ping statistics --- 00:30:45.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:45.149 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms 00:30:45.149 14:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:45.149 14:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@448 -- # return 0 00:30:45.149 14:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:30:45.149 14:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:45.149 14:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:30:45.149 14:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:30:45.149 14:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:30:45.149 14:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:30:45.149 14:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:30:45.149 14:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:30:45.149 14:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:30:45.149 14:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:30:45.149 14:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:30:45.149 14:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:45.149 14:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:45.149 14:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # nvmfpid=3606771 00:30:45.149 14:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # waitforlisten 3606771 00:30:45.149 14:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:30:45.149 14:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 3606771 ']' 00:30:45.149 14:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:45.149 14:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:30:45.149 14:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:45.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:45.149 14:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:45.149 14:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:45.149 [2024-10-14 14:44:25.136619] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:45.150 [2024-10-14 14:44:25.137793] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:30:45.150 [2024-10-14 14:44:25.137853] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:45.150 [2024-10-14 14:44:25.228642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:45.150 [2024-10-14 14:44:25.281310] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:45.150 [2024-10-14 14:44:25.281362] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:45.150 [2024-10-14 14:44:25.281371] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:45.150 [2024-10-14 14:44:25.281378] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:45.150 [2024-10-14 14:44:25.281384] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:45.150 [2024-10-14 14:44:25.283451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:45.150 [2024-10-14 14:44:25.283622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:45.150 [2024-10-14 14:44:25.283788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:45.150 [2024-10-14 14:44:25.283788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:45.150 [2024-10-14 14:44:25.358954] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:45.150 [2024-10-14 14:44:25.359691] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:45.150 [2024-10-14 14:44:25.360370] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:45.150 [2024-10-14 14:44:25.360637] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:30:45.150 [2024-10-14 14:44:25.360787] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:30:45.412 14:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:45.412 14:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:30:45.412 14:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:30:45.412 14:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:45.412 14:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:45.412 14:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:45.412 14:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:45.412 14:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:45.412 14:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:45.412 [2024-10-14 14:44:25.992666] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:45.412 14:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:45.412 14:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:30:45.412 14:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:45.412 14:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:45.412 14:44:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:45.412 14:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:30:45.412 14:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:30:45.412 14:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:45.412 14:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:45.412 Malloc0 00:30:45.412 [2024-10-14 14:44:26.088873] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:45.412 14:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:45.412 14:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:30:45.412 14:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:45.412 14:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:45.673 14:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3607113 00:30:45.673 14:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3607113 /var/tmp/bdevperf.sock 00:30:45.673 14:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 3607113 ']' 00:30:45.673 14:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:30:45.673 14:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:45.673 14:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:45.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:45.673 14:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:30:45.673 14:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:30:45.673 14:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:45.673 14:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:45.673 14:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:30:45.673 14:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:30:45.673 14:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:45.673 14:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:45.673 { 00:30:45.673 "params": { 00:30:45.673 "name": "Nvme$subsystem", 00:30:45.673 "trtype": "$TEST_TRANSPORT", 00:30:45.673 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:45.673 "adrfam": "ipv4", 00:30:45.673 "trsvcid": "$NVMF_PORT", 00:30:45.673 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:30:45.673 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:45.673 "hdgst": ${hdgst:-false}, 00:30:45.673 "ddgst": ${ddgst:-false} 00:30:45.673 }, 00:30:45.673 "method": "bdev_nvme_attach_controller" 00:30:45.673 } 00:30:45.673 EOF 00:30:45.673 )") 00:30:45.673 14:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:30:45.673 14:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:30:45.673 14:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:30:45.673 14:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:30:45.673 "params": { 00:30:45.673 "name": "Nvme0", 00:30:45.673 "trtype": "tcp", 00:30:45.673 "traddr": "10.0.0.2", 00:30:45.673 "adrfam": "ipv4", 00:30:45.673 "trsvcid": "4420", 00:30:45.673 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:45.673 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:45.673 "hdgst": false, 00:30:45.673 "ddgst": false 00:30:45.673 }, 00:30:45.673 "method": "bdev_nvme_attach_controller" 00:30:45.673 }' 00:30:45.673 [2024-10-14 14:44:26.203533] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:30:45.673 [2024-10-14 14:44:26.203600] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3607113 ] 00:30:45.673 [2024-10-14 14:44:26.266604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:45.673 [2024-10-14 14:44:26.303192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:45.934 Running I/O for 10 seconds... 
00:30:46.507 14:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:46.507 14:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:30:46.507 14:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:30:46.507 14:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:46.507 14:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:46.507 14:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:46.507 14:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:46.507 14:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:30:46.507 14:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:30:46.507 14:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:30:46.507 14:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:30:46.507 14:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:30:46.507 14:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:30:46.507 14:44:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:30:46.507 14:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:30:46.507 14:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:30:46.507 14:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:46.507 14:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:46.507 14:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:46.507 14:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:30:46.507 14:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:30:46.507 14:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:30:46.507 14:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:30:46.507 14:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:30:46.507 14:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:30:46.507 14:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:46.507 14:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:46.507 
[2024-10-14 14:44:27.064872] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181c8c0 is same with the state(6) to be set 00:30:46.508 [2024-10-14 14:44:27.065473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.508 [2024-10-14 14:44:27.065510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.508 [2024-10-14 14:44:27.065529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.508 [2024-10-14 14:44:27.065537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.508 [2024-10-14 14:44:27.065547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.508 [2024-10-14 14:44:27.065555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.508 [2024-10-14 14:44:27.065565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.508 [2024-10-14 14:44:27.065572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.508 [2024-10-14 14:44:27.065582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.508 [2024-10-14 14:44:27.065589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.508 [2024-10-14 14:44:27.065599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5
nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.508 [2024-10-14 14:44:27.065606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.508 [2024-10-14 14:44:27.065616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.508 [2024-10-14 14:44:27.065628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.508 [2024-10-14 14:44:27.065637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.508 [2024-10-14 14:44:27.065645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.508 [2024-10-14 14:44:27.065654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.508 [2024-10-14 14:44:27.065661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.508 [2024-10-14 14:44:27.065671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.508 [2024-10-14 14:44:27.065679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.508 [2024-10-14 14:44:27.065688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.508 [2024-10-14 14:44:27.065695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:46.508 [2024-10-14 14:44:27.065705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.508 [2024-10-14 14:44:27.065712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.508 [2024-10-14 14:44:27.065722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.508 [2024-10-14 14:44:27.065730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.508 [2024-10-14 14:44:27.065739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.508 [2024-10-14 14:44:27.065746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.508 [2024-10-14 14:44:27.065756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.508 [2024-10-14 14:44:27.065763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.508 [2024-10-14 14:44:27.065773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.508 [2024-10-14 14:44:27.065780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.508 [2024-10-14 14:44:27.065790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.508 [2024-10-14 14:44:27.065797] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.508 [2024-10-14 14:44:27.065807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.508 [2024-10-14 14:44:27.065814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.508 [2024-10-14 14:44:27.065824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.508 [2024-10-14 14:44:27.065831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.508 [2024-10-14 14:44:27.065842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.508 [2024-10-14 14:44:27.065849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.508 [2024-10-14 14:44:27.065858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.508 [2024-10-14 14:44:27.065865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.508 [2024-10-14 14:44:27.065875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.508 [2024-10-14 14:44:27.065882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.508 [2024-10-14 14:44:27.065892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.508 [2024-10-14 14:44:27.065899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.508 [2024-10-14 14:44:27.065908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.508 [2024-10-14 14:44:27.065915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.508 [2024-10-14 14:44:27.065925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.508 [2024-10-14 14:44:27.065932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.508 [2024-10-14 14:44:27.065942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.508 [2024-10-14 14:44:27.065949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.508 [2024-10-14 14:44:27.065959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.508 [2024-10-14 14:44:27.065966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.508 [2024-10-14 14:44:27.065975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.508 [2024-10-14 14:44:27.065983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:30:46.508 [2024-10-14 14:44:27.065992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.508 [2024-10-14 14:44:27.065999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.508 [2024-10-14 14:44:27.066008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.508 [2024-10-14 14:44:27.066016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.508 [2024-10-14 14:44:27.066025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.508 [2024-10-14 14:44:27.066032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.508 [2024-10-14 14:44:27.066042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.508 [2024-10-14 14:44:27.066051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.508 [2024-10-14 14:44:27.066060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.508 [2024-10-14 14:44:27.066074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.508 [2024-10-14 14:44:27.066084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.509 [2024-10-14 
14:44:27.066092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.509 [2024-10-14 14:44:27.066101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.509 [2024-10-14 14:44:27.066109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.509 [2024-10-14 14:44:27.066118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.509 [2024-10-14 14:44:27.066125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.509 [2024-10-14 14:44:27.066135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.509 [2024-10-14 14:44:27.066142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.509 [2024-10-14 14:44:27.066151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.509 [2024-10-14 14:44:27.066158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.509 [2024-10-14 14:44:27.066168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.509 [2024-10-14 14:44:27.066175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.509 [2024-10-14 14:44:27.066185] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.509 [2024-10-14 14:44:27.066192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.509 [2024-10-14 14:44:27.066202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.509 [2024-10-14 14:44:27.066209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.509 [2024-10-14 14:44:27.066219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.509 [2024-10-14 14:44:27.066226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.509 [2024-10-14 14:44:27.066235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.509 [2024-10-14 14:44:27.066243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.509 [2024-10-14 14:44:27.066253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.509 [2024-10-14 14:44:27.066260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.509 [2024-10-14 14:44:27.066271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.509 [2024-10-14 14:44:27.066278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.509 [2024-10-14 14:44:27.066288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.509 [2024-10-14 14:44:27.066296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.509 [2024-10-14 14:44:27.066305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.509 [2024-10-14 14:44:27.066313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.509 [2024-10-14 14:44:27.066322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.509 [2024-10-14 14:44:27.066329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.509 [2024-10-14 14:44:27.066339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.509 [2024-10-14 14:44:27.066346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.509 [2024-10-14 14:44:27.066356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.509 [2024-10-14 14:44:27.066363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.509 [2024-10-14 14:44:27.066372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.509 
[2024-10-14 14:44:27.066379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.509 [2024-10-14 14:44:27.066389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.509 [2024-10-14 14:44:27.066397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.509 [2024-10-14 14:44:27.066406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.509 [2024-10-14 14:44:27.066414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.509 [2024-10-14 14:44:27.066423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.509 [2024-10-14 14:44:27.066430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.509 [2024-10-14 14:44:27.066440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.509 [2024-10-14 14:44:27.066447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.509 [2024-10-14 14:44:27.066456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.509 [2024-10-14 14:44:27.066464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.509 [2024-10-14 14:44:27.066473] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.509 [2024-10-14 14:44:27.066482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.509 [2024-10-14 14:44:27.066491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.509 [2024-10-14 14:44:27.066498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.509 [2024-10-14 14:44:27.066508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.509 [2024-10-14 14:44:27.066515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.509 [2024-10-14 14:44:27.066524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.509 [2024-10-14 14:44:27.066531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.509 [2024-10-14 14:44:27.066541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.509 [2024-10-14 14:44:27.066548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.509 [2024-10-14 14:44:27.066557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.509 [2024-10-14 14:44:27.066565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.509 [2024-10-14 14:44:27.066574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.509 [2024-10-14 14:44:27.066581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.509 [2024-10-14 14:44:27.066590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.509 [2024-10-14 14:44:27.066597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.509 [2024-10-14 14:44:27.066606] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26e4270 is same with the state(6) to be set 00:30:46.509 [2024-10-14 14:44:27.066650] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x26e4270 was disconnected and freed. reset controller. 
00:30:46.509 [2024-10-14 14:44:27.067936] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:46.509 14:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:46.509 task offset: 81920 on job bdev=Nvme0n1 fails 00:30:46.509 00:30:46.509 Latency(us) 00:30:46.509 [2024-10-14T12:44:27.236Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:46.509 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:46.509 Job: Nvme0n1 ended in about 0.46 seconds with error 00:30:46.509 Verification LBA range: start 0x0 length 0x400 00:30:46.509 Nvme0n1 : 0.46 1396.59 87.29 139.66 0.00 40514.11 4068.69 36700.16 00:30:46.509 [2024-10-14T12:44:27.236Z] =================================================================================================================== 00:30:46.509 [2024-10-14T12:44:27.236Z] Total : 1396.59 87.29 139.66 0.00 40514.11 4068.69 36700.16 00:30:46.509 14:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:30:46.509 [2024-10-14 14:44:27.069972] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:46.509 [2024-10-14 14:44:27.069996] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24cb100 (9): Bad file descriptor 00:30:46.509 14:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:46.509 14:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:46.509 [2024-10-14 14:44:27.070947] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:30:46.509 [2024-10-14 14:44:27.071023] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:30:46.509 [2024-10-14 14:44:27.071044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.509 [2024-10-14 14:44:27.071059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:30:46.509 [2024-10-14 14:44:27.071076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:30:46.509 [2024-10-14 14:44:27.071083] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:46.509 [2024-10-14 14:44:27.071090] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24cb100 00:30:46.509 [2024-10-14 14:44:27.071109] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24cb100 (9): Bad file descriptor 00:30:46.510 [2024-10-14 14:44:27.071121] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:46.510 [2024-10-14 14:44:27.071129] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:46.510 [2024-10-14 14:44:27.071138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:46.510 [2024-10-14 14:44:27.071151] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:46.510 14:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:46.510 14:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:30:47.451 14:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3607113 00:30:47.451 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3607113) - No such process 00:30:47.451 14:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:30:47.451 14:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:30:47.451 14:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:30:47.451 14:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:30:47.451 14:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:30:47.451 14:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:30:47.451 14:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:47.451 14:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:47.451 { 00:30:47.451 "params": { 00:30:47.451 "name": "Nvme$subsystem", 00:30:47.451 "trtype": "$TEST_TRANSPORT", 00:30:47.451 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:30:47.451 "adrfam": "ipv4", 00:30:47.451 "trsvcid": "$NVMF_PORT", 00:30:47.451 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:47.451 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:47.451 "hdgst": ${hdgst:-false}, 00:30:47.451 "ddgst": ${ddgst:-false} 00:30:47.451 }, 00:30:47.451 "method": "bdev_nvme_attach_controller" 00:30:47.451 } 00:30:47.451 EOF 00:30:47.451 )") 00:30:47.451 14:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:30:47.451 14:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:30:47.451 14:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:30:47.451 14:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:30:47.451 "params": { 00:30:47.451 "name": "Nvme0", 00:30:47.451 "trtype": "tcp", 00:30:47.451 "traddr": "10.0.0.2", 00:30:47.451 "adrfam": "ipv4", 00:30:47.451 "trsvcid": "4420", 00:30:47.451 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:47.451 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:47.451 "hdgst": false, 00:30:47.451 "ddgst": false 00:30:47.451 }, 00:30:47.451 "method": "bdev_nvme_attach_controller" 00:30:47.451 }' 00:30:47.451 [2024-10-14 14:44:28.141050] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:30:47.451 [2024-10-14 14:44:28.141113] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3607493 ] 00:30:47.710 [2024-10-14 14:44:28.202115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:47.710 [2024-10-14 14:44:28.237248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:47.710 Running I/O for 1 seconds... 
00:30:49.093 1408.00 IOPS, 88.00 MiB/s 00:30:49.093 Latency(us) 00:30:49.093 [2024-10-14T12:44:29.820Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:49.093 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:49.093 Verification LBA range: start 0x0 length 0x400 00:30:49.093 Nvme0n1 : 1.01 1451.14 90.70 0.00 0.00 43380.61 9011.20 37792.43 00:30:49.093 [2024-10-14T12:44:29.820Z] =================================================================================================================== 00:30:49.093 [2024-10-14T12:44:29.820Z] Total : 1451.14 90.70 0.00 0.00 43380.61 9011.20 37792.43 00:30:49.093 14:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:30:49.093 14:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:30:49.093 14:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:30:49.093 14:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:49.093 14:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:30:49.093 14:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@514 -- # nvmfcleanup 00:30:49.093 14:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:30:49.093 14:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:49.093 14:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:30:49.093 14:44:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:49.093 14:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:49.093 rmmod nvme_tcp 00:30:49.093 rmmod nvme_fabrics 00:30:49.093 rmmod nvme_keyring 00:30:49.093 14:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:49.093 14:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:30:49.093 14:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:30:49.093 14:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@515 -- # '[' -n 3606771 ']' 00:30:49.093 14:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # killprocess 3606771 00:30:49.093 14:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 3606771 ']' 00:30:49.093 14:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 3606771 00:30:49.093 14:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:30:49.093 14:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:49.093 14:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3606771 00:30:49.093 14:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:49.093 14:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:49.093 14:44:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3606771' 00:30:49.093 killing process with pid 3606771 00:30:49.093 14:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 3606771 00:30:49.093 14:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 3606771 00:30:49.093 [2024-10-14 14:44:29.796946] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:30:49.093 14:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:30:49.093 14:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:30:49.093 14:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:30:49.093 14:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:30:49.093 14:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-save 00:30:49.354 14:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:30:49.354 14:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-restore 00:30:49.354 14:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:49.354 14:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:49.354 14:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:49.354 14:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:49.354 14:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:51.266 14:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:51.266 14:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:30:51.266 00:30:51.266 real 0m14.587s 00:30:51.266 user 0m18.693s 00:30:51.266 sys 0m7.566s 00:30:51.266 14:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:51.266 14:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:51.266 ************************************ 00:30:51.266 END TEST nvmf_host_management 00:30:51.266 ************************************ 00:30:51.266 14:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:30:51.266 14:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:30:51.266 14:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:51.266 14:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:51.266 ************************************ 00:30:51.266 START TEST nvmf_lvol 00:30:51.266 ************************************ 00:30:51.266 14:44:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:30:51.527 * Looking for test storage... 
00:30:51.527 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:51.527 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:51.527 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:30:51.527 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:51.527 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:51.527 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:51.527 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:51.527 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:51.527 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:30:51.527 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:30:51.527 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:30:51.527 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:30:51.527 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:30:51.527 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:30:51.527 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:30:51.527 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:51.527 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- 
# case "$op" in 00:30:51.527 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:30:51.527 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:51.527 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:51.527 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:30:51.527 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:30:51.527 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:51.527 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:30:51.527 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:30:51.527 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:30:51.527 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:30:51.527 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:51.527 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:30:51.527 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:30:51.527 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:51.527 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:51.527 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:30:51.527 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:51.527 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:51.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:51.527 --rc genhtml_branch_coverage=1 00:30:51.527 --rc genhtml_function_coverage=1 00:30:51.527 --rc genhtml_legend=1 00:30:51.527 --rc geninfo_all_blocks=1 00:30:51.527 --rc geninfo_unexecuted_blocks=1 00:30:51.527 00:30:51.527 ' 00:30:51.527 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:51.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:51.527 --rc genhtml_branch_coverage=1 00:30:51.527 --rc genhtml_function_coverage=1 00:30:51.527 --rc genhtml_legend=1 00:30:51.527 --rc geninfo_all_blocks=1 00:30:51.527 --rc geninfo_unexecuted_blocks=1 00:30:51.527 00:30:51.527 ' 00:30:51.527 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:51.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:51.527 --rc genhtml_branch_coverage=1 00:30:51.527 --rc genhtml_function_coverage=1 00:30:51.527 --rc genhtml_legend=1 00:30:51.527 --rc geninfo_all_blocks=1 00:30:51.527 --rc geninfo_unexecuted_blocks=1 00:30:51.527 00:30:51.527 ' 00:30:51.527 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:51.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:51.527 --rc genhtml_branch_coverage=1 00:30:51.527 --rc genhtml_function_coverage=1 00:30:51.527 --rc genhtml_legend=1 00:30:51.527 --rc geninfo_all_blocks=1 00:30:51.527 --rc geninfo_unexecuted_blocks=1 00:30:51.527 00:30:51.527 ' 00:30:51.527 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:51.527 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:30:51.527 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:51.527 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:51.528 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:51.528 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:51.528 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:51.528 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:51.528 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:51.528 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:51.528 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:51.528 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:51.528 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:51.528 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:51.528 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:51.528 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:30:51.528 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:51.528 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:51.528 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:51.528 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:30:51.528 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:51.528 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:51.528 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:51.528 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:51.528 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:51.528 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:51.528 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:30:51.528 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:51.528 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:30:51.528 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:51.528 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:51.528 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:51.528 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:51.528 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:51.528 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:51.528 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:51.528 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:51.528 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:51.528 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:51.528 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:51.528 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:51.528 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:30:51.528 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:30:51.528 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:51.528 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:30:51.528 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:51.528 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:51.528 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # prepare_net_devs 00:30:51.528 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@436 -- # local -g is_hw=no 00:30:51.528 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:51.528 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:51.528 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:51.528 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:51.528 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:30:51.528 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:30:51.528 
14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:30:51.528 14:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:59.671 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:59.671 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:30:59.671 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:59.671 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:59.671 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:59.671 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:59.671 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:59.671 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:30:59.671 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:59.671 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:30:59.671 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:30:59.671 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:30:59.671 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:30:59.671 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:30:59.671 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:30:59.671 14:44:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:59.671 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:59.671 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:59.671 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:59.671 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:59.671 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:59.671 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:59.671 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:59.671 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:59.671 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:59.671 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:59.671 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:59.671 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:59.671 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:59.671 14:44:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:59.671 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:59.671 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:59.671 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:59.671 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:59.671 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:59.671 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:59.671 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:59.671 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:59.671 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:59.672 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:59.672 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:59.672 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:59.672 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:59.672 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:59.672 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:59.672 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:59.672 14:44:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:59.672 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:59.672 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:59.672 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:59.672 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:59.672 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:59.672 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:59.672 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:59.672 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:59.672 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:59.672 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:59.672 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:59.672 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:59.672 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:59.672 Found net devices under 0000:31:00.0: cvl_0_0 00:30:59.672 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:59.672 14:44:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:59.672 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:59.672 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:59.672 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:59.672 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:59.672 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:59.672 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:59.672 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:59.672 Found net devices under 0000:31:00.1: cvl_0_1 00:30:59.672 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:59.672 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:30:59.672 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # is_hw=yes 00:30:59.672 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:30:59.672 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:30:59.672 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:30:59.672 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:59.672 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:59.672 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:59.672 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:59.672 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:59.672 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:59.672 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:59.672 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:59.672 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:59.672 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:59.672 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:59.672 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:59.672 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:59.672 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:59.672 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:59.672 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:59.672 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- 
# ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:59.672 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:59.672 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:59.672 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:59.672 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:59.672 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:59.672 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:59.672 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:59.672 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.689 ms 00:30:59.672 00:30:59.672 --- 10.0.0.2 ping statistics --- 00:30:59.672 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:59.672 rtt min/avg/max/mdev = 0.689/0.689/0.689/0.000 ms 00:30:59.672 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:59.672 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:59.672 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:30:59.672 00:30:59.672 --- 10.0.0.1 ping statistics --- 00:30:59.672 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:59.672 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:30:59.672 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:59.672 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@448 -- # return 0 00:30:59.672 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:30:59.672 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:59.672 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:30:59.672 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:30:59.672 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:59.672 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:30:59.672 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:30:59.672 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:30:59.672 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:30:59.672 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:59.672 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:59.672 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # nvmfpid=3611901 
00:30:59.672 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # waitforlisten 3611901 00:30:59.672 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:30:59.672 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 3611901 ']' 00:30:59.672 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:59.672 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:59.672 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:59.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:59.672 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:59.672 14:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:59.672 [2024-10-14 14:44:39.653855] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:59.672 [2024-10-14 14:44:39.654992] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 
00:30:59.672 [2024-10-14 14:44:39.655043] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:59.672 [2024-10-14 14:44:39.728194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:59.672 [2024-10-14 14:44:39.770886] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:59.672 [2024-10-14 14:44:39.770921] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:59.672 [2024-10-14 14:44:39.770930] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:59.672 [2024-10-14 14:44:39.770937] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:59.672 [2024-10-14 14:44:39.770944] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:59.672 [2024-10-14 14:44:39.772601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:59.672 [2024-10-14 14:44:39.772717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:59.672 [2024-10-14 14:44:39.772719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:59.672 [2024-10-14 14:44:39.828976] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:59.672 [2024-10-14 14:44:39.829413] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:59.672 [2024-10-14 14:44:39.829786] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:30:59.672 [2024-10-14 14:44:39.830060] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:59.934 14:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:59.934 14:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:30:59.934 14:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:30:59.934 14:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:59.934 14:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:59.934 14:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:59.934 14:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:59.934 [2024-10-14 14:44:40.653611] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:00.194 14:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:00.194 14:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:31:00.194 14:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:00.455 14:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:31:00.455 14:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:31:00.715 14:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:31:00.715 14:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=d18f2e60-de5b-495a-8054-a011ef8c7590 00:31:00.715 14:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d18f2e60-de5b-495a-8054-a011ef8c7590 lvol 20 00:31:00.975 14:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=3cb8f7fe-91c4-4373-a898-48d138ed3a75 00:31:00.975 14:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:01.236 14:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3cb8f7fe-91c4-4373-a898-48d138ed3a75 00:31:01.236 14:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:01.496 [2024-10-14 14:44:42.085391] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:01.496 14:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:01.757 
14:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3612491 00:31:01.757 14:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:31:01.757 14:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:31:02.698 14:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 3cb8f7fe-91c4-4373-a898-48d138ed3a75 MY_SNAPSHOT 00:31:02.959 14:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=1f16bd92-e85f-4445-ab3a-49d89349800f 00:31:02.959 14:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 3cb8f7fe-91c4-4373-a898-48d138ed3a75 30 00:31:02.959 14:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 1f16bd92-e85f-4445-ab3a-49d89349800f MY_CLONE 00:31:03.219 14:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=acdc7fa6-0e31-404f-96b9-0c802280ebcc 00:31:03.219 14:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate acdc7fa6-0e31-404f-96b9-0c802280ebcc 00:31:03.789 14:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3612491 00:31:11.929 Initializing NVMe Controllers 00:31:11.929 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:31:11.929 
Controller IO queue size 128, less than required. 00:31:11.929 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:11.929 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:31:11.929 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:31:11.929 Initialization complete. Launching workers. 00:31:11.929 ======================================================== 00:31:11.929 Latency(us) 00:31:11.929 Device Information : IOPS MiB/s Average min max 00:31:11.929 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12269.06 47.93 10440.51 1529.47 67531.47 00:31:11.929 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 15364.92 60.02 8332.09 549.64 63280.18 00:31:11.929 ======================================================== 00:31:11.929 Total : 27633.98 107.95 9268.19 549.64 67531.47 00:31:11.929 00:31:11.929 14:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:12.191 14:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3cb8f7fe-91c4-4373-a898-48d138ed3a75 00:31:12.191 14:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d18f2e60-de5b-495a-8054-a011ef8c7590 00:31:12.451 14:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:31:12.451 14:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:31:12.451 14:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # 
nvmftestfini 00:31:12.451 14:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@514 -- # nvmfcleanup 00:31:12.451 14:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:31:12.451 14:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:12.451 14:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:31:12.451 14:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:12.451 14:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:12.451 rmmod nvme_tcp 00:31:12.451 rmmod nvme_fabrics 00:31:12.451 rmmod nvme_keyring 00:31:12.451 14:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:12.451 14:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:31:12.451 14:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:31:12.451 14:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@515 -- # '[' -n 3611901 ']' 00:31:12.451 14:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # killprocess 3611901 00:31:12.451 14:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 3611901 ']' 00:31:12.451 14:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 3611901 00:31:12.451 14:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:31:12.451 14:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:12.451 14:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # ps 
--no-headers -o comm= 3611901 00:31:12.451 14:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:12.451 14:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:12.451 14:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3611901' 00:31:12.451 killing process with pid 3611901 00:31:12.451 14:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 3611901 00:31:12.451 14:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 3611901 00:31:12.711 14:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:12.711 14:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:12.711 14:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:12.711 14:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:31:12.712 14:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-save 00:31:12.712 14:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:12.712 14:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-restore 00:31:12.712 14:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:12.712 14:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:12.712 14:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:12.712 14:44:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:12.712 14:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:15.257 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:15.257 00:31:15.257 real 0m23.401s 00:31:15.257 user 0m54.852s 00:31:15.257 sys 0m10.670s 00:31:15.257 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:15.257 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:15.257 ************************************ 00:31:15.257 END TEST nvmf_lvol 00:31:15.257 ************************************ 00:31:15.257 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:31:15.257 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:31:15.257 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:15.257 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:15.257 ************************************ 00:31:15.257 START TEST nvmf_lvs_grow 00:31:15.257 ************************************ 00:31:15.257 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:31:15.257 * Looking for test storage... 
00:31:15.257 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:15.257 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:15.257 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:31:15.257 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:15.257 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:15.257 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:15.257 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:15.257 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:15.257 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:31:15.257 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:31:15.257 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:31:15.257 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:31:15.257 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:31:15.257 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:31:15.257 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:31:15.257 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:15.257 14:44:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:31:15.257 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:31:15.257 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:15.257 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:15.257 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:31:15.257 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:31:15.257 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:15.257 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:31:15.257 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:31:15.257 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:31:15.257 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:31:15.257 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:15.257 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:31:15.257 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:31:15.257 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:15.257 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:15.257 14:44:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:31:15.257 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:15.257 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:15.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:15.257 --rc genhtml_branch_coverage=1 00:31:15.257 --rc genhtml_function_coverage=1 00:31:15.257 --rc genhtml_legend=1 00:31:15.257 --rc geninfo_all_blocks=1 00:31:15.257 --rc geninfo_unexecuted_blocks=1 00:31:15.257 00:31:15.257 ' 00:31:15.257 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:15.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:15.257 --rc genhtml_branch_coverage=1 00:31:15.257 --rc genhtml_function_coverage=1 00:31:15.257 --rc genhtml_legend=1 00:31:15.257 --rc geninfo_all_blocks=1 00:31:15.257 --rc geninfo_unexecuted_blocks=1 00:31:15.257 00:31:15.257 ' 00:31:15.257 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:15.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:15.257 --rc genhtml_branch_coverage=1 00:31:15.257 --rc genhtml_function_coverage=1 00:31:15.257 --rc genhtml_legend=1 00:31:15.257 --rc geninfo_all_blocks=1 00:31:15.257 --rc geninfo_unexecuted_blocks=1 00:31:15.257 00:31:15.257 ' 00:31:15.257 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:15.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:15.257 --rc genhtml_branch_coverage=1 00:31:15.257 --rc genhtml_function_coverage=1 00:31:15.257 --rc genhtml_legend=1 00:31:15.257 --rc geninfo_all_blocks=1 00:31:15.257 --rc 
geninfo_unexecuted_blocks=1 00:31:15.257 00:31:15.257 ' 00:31:15.257 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:15.257 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:31:15.257 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:15.257 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:15.257 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:15.257 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:15.257 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:15.257 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:15.257 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:15.257 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:15.257 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:15.257 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:15.257 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:15.257 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:15.257 14:44:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:15.257 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:15.257 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:15.257 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:15.257 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:15.257 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:31:15.257 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:15.257 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:15.257 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:15.257 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.257 14:44:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.257 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.258 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:31:15.258 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.258 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:31:15.258 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:15.258 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:15.258 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:15.258 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:15.258 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:15.258 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:15.258 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:15.258 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:15.258 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:15.258 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:15.258 14:44:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:15.258 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:15.258 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:31:15.258 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:31:15.258 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:15.258 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:15.258 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:15.258 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:15.258 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:15.258 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:15.258 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:15.258 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:31:15.258 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:31:15.258 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:31:15.258 14:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:23.548 
14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:23.548 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:31:23.548 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:23.548 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:23.548 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:23.548 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:23.548 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:23.548 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:31:23.548 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:23.548 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:31:23.548 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:31:23.548 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:31:23.548 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:31:23.548 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:31:23.548 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:31:23.548 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:23.548 14:45:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:23.548 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:23.548 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:23.548 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:23.548 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:23.548 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:23.548 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:23.548 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:23.548 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:23.548 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:23.548 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:23.548 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:23.548 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:23.548 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:23.548 14:45:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:23.548 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:23.548 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:23.548 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:23.548 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:23.548 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:23.548 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:23.548 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:23.548 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:23.548 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:23.548 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:23.548 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:23.548 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:23.548 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:23.548 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:23.548 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:23.548 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:31:23.548 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:23.548 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:23.548 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:23.548 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:23.548 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:23.548 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:23.548 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:23.548 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:23.548 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:23.548 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:23.548 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:23.548 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:23.548 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:23.548 Found net devices under 0000:31:00.0: cvl_0_0 00:31:23.548 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:23.548 14:45:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:23.548 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:23.548 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:23.548 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:23.548 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:23.548 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:23.548 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:23.548 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:23.548 Found net devices under 0000:31:00.1: cvl_0_1 00:31:23.548 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:23.548 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:23.548 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # is_hw=yes 00:31:23.548 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:23.548 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:23.548 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:31:23.548 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:23.548 
14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:23.548 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:23.548 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:23.548 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:23.549 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:23.549 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:23.549 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:23.549 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:23.549 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:23.549 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:23.549 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:23.549 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:23.549 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:23.549 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:23.549 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:31:23.549 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:23.549 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:23.549 14:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:23.549 14:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:23.549 14:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:23.549 14:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:23.549 14:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:23.549 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:23.549 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.632 ms 00:31:23.549 00:31:23.549 --- 10.0.0.2 ping statistics --- 00:31:23.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:23.549 rtt min/avg/max/mdev = 0.632/0.632/0.632/0.000 ms 00:31:23.549 14:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:23.549 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:23.549 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:31:23.549 00:31:23.549 --- 10.0.0.1 ping statistics --- 00:31:23.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:23.549 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:31:23.549 14:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:23.549 14:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@448 -- # return 0 00:31:23.549 14:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:23.549 14:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:23.549 14:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:23.549 14:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:23.549 14:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:23.549 14:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:23.549 14:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:23.549 14:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:31:23.549 14:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:23.549 14:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:23.549 14:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:23.549 14:45:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # nvmfpid=3618812 00:31:23.549 14:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # waitforlisten 3618812 00:31:23.549 14:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 3618812 ']' 00:31:23.549 14:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:23.549 14:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:23.549 14:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:23.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:23.549 14:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:23.549 14:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:23.549 14:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:31:23.549 [2024-10-14 14:45:03.233409] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:23.549 [2024-10-14 14:45:03.234569] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 
00:31:23.549 [2024-10-14 14:45:03.234623] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:23.549 [2024-10-14 14:45:03.308016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:23.549 [2024-10-14 14:45:03.351394] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:23.549 [2024-10-14 14:45:03.351432] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:23.549 [2024-10-14 14:45:03.351439] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:23.549 [2024-10-14 14:45:03.351446] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:23.549 [2024-10-14 14:45:03.351452] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:23.549 [2024-10-14 14:45:03.352057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:23.549 [2024-10-14 14:45:03.408112] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:23.549 [2024-10-14 14:45:03.408370] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:31:23.549 14:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:23.549 14:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:31:23.549 14:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:23.549 14:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:23.549 14:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:23.549 14:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:23.549 14:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:23.549 [2024-10-14 14:45:04.216858] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:23.549 14:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:31:23.549 14:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:23.549 14:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:23.549 14:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:23.549 ************************************ 00:31:23.549 START TEST lvs_grow_clean 00:31:23.549 ************************************ 00:31:23.549 14:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:31:23.549 14:45:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:31:23.549 14:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:31:23.549 14:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:31:23.549 14:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:31:23.549 14:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:31:23.549 14:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:31:23.549 14:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:23.549 14:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:23.550 14:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:23.810 14:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:31:23.810 14:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:31:24.071 14:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=3f0d2ddc-3549-40bb-9f43-d7923cebbb4d 00:31:24.071 14:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3f0d2ddc-3549-40bb-9f43-d7923cebbb4d 00:31:24.071 14:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:31:24.332 14:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:31:24.332 14:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:31:24.332 14:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3f0d2ddc-3549-40bb-9f43-d7923cebbb4d lvol 150 00:31:24.332 14:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=57288abd-0044-4588-bfe9-6ca6a5f05936 00:31:24.332 14:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:24.332 14:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:31:24.593 [2024-10-14 14:45:05.144443] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:31:24.593 [2024-10-14 14:45:05.144541] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:31:24.593 true 00:31:24.593 14:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3f0d2ddc-3549-40bb-9f43-d7923cebbb4d 00:31:24.593 14:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:31:24.853 14:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:31:24.853 14:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:24.853 14:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 57288abd-0044-4588-bfe9-6ca6a5f05936 00:31:25.114 14:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:25.114 [2024-10-14 14:45:05.796677] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:25.114 14:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:25.375 14:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:31:25.375 14:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3619501 00:31:25.375 14:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:25.375 14:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3619501 /var/tmp/bdevperf.sock 00:31:25.375 14:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 3619501 ']' 00:31:25.375 14:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:25.375 14:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:25.375 14:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:25.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:31:25.375 14:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:25.375 14:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:31:25.375 [2024-10-14 14:45:06.017439] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:31:25.375 [2024-10-14 14:45:06.017501] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3619501 ] 00:31:25.375 [2024-10-14 14:45:06.100964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:25.636 [2024-10-14 14:45:06.136972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:26.207 14:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:26.207 14:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:31:26.207 14:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:31:26.467 Nvme0n1 00:31:26.467 14:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:31:26.467 [ 00:31:26.467 { 00:31:26.467 "name": "Nvme0n1", 00:31:26.467 "aliases": [ 00:31:26.467 "57288abd-0044-4588-bfe9-6ca6a5f05936" 00:31:26.467 ], 00:31:26.467 "product_name": "NVMe disk", 00:31:26.467 
"block_size": 4096, 00:31:26.467 "num_blocks": 38912, 00:31:26.467 "uuid": "57288abd-0044-4588-bfe9-6ca6a5f05936", 00:31:26.467 "numa_id": 0, 00:31:26.467 "assigned_rate_limits": { 00:31:26.467 "rw_ios_per_sec": 0, 00:31:26.467 "rw_mbytes_per_sec": 0, 00:31:26.467 "r_mbytes_per_sec": 0, 00:31:26.467 "w_mbytes_per_sec": 0 00:31:26.467 }, 00:31:26.467 "claimed": false, 00:31:26.467 "zoned": false, 00:31:26.467 "supported_io_types": { 00:31:26.467 "read": true, 00:31:26.467 "write": true, 00:31:26.467 "unmap": true, 00:31:26.467 "flush": true, 00:31:26.467 "reset": true, 00:31:26.467 "nvme_admin": true, 00:31:26.467 "nvme_io": true, 00:31:26.467 "nvme_io_md": false, 00:31:26.467 "write_zeroes": true, 00:31:26.467 "zcopy": false, 00:31:26.467 "get_zone_info": false, 00:31:26.467 "zone_management": false, 00:31:26.467 "zone_append": false, 00:31:26.467 "compare": true, 00:31:26.467 "compare_and_write": true, 00:31:26.467 "abort": true, 00:31:26.467 "seek_hole": false, 00:31:26.467 "seek_data": false, 00:31:26.467 "copy": true, 00:31:26.467 "nvme_iov_md": false 00:31:26.467 }, 00:31:26.467 "memory_domains": [ 00:31:26.467 { 00:31:26.467 "dma_device_id": "system", 00:31:26.467 "dma_device_type": 1 00:31:26.467 } 00:31:26.467 ], 00:31:26.467 "driver_specific": { 00:31:26.467 "nvme": [ 00:31:26.467 { 00:31:26.467 "trid": { 00:31:26.467 "trtype": "TCP", 00:31:26.467 "adrfam": "IPv4", 00:31:26.467 "traddr": "10.0.0.2", 00:31:26.467 "trsvcid": "4420", 00:31:26.467 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:26.467 }, 00:31:26.467 "ctrlr_data": { 00:31:26.467 "cntlid": 1, 00:31:26.467 "vendor_id": "0x8086", 00:31:26.467 "model_number": "SPDK bdev Controller", 00:31:26.467 "serial_number": "SPDK0", 00:31:26.467 "firmware_revision": "25.01", 00:31:26.467 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:26.467 "oacs": { 00:31:26.467 "security": 0, 00:31:26.467 "format": 0, 00:31:26.467 "firmware": 0, 00:31:26.467 "ns_manage": 0 00:31:26.467 }, 00:31:26.467 "multi_ctrlr": true, 
00:31:26.467 "ana_reporting": false 00:31:26.467 }, 00:31:26.467 "vs": { 00:31:26.467 "nvme_version": "1.3" 00:31:26.467 }, 00:31:26.467 "ns_data": { 00:31:26.467 "id": 1, 00:31:26.467 "can_share": true 00:31:26.467 } 00:31:26.467 } 00:31:26.467 ], 00:31:26.467 "mp_policy": "active_passive" 00:31:26.467 } 00:31:26.467 } 00:31:26.467 ] 00:31:26.467 14:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3619690 00:31:26.467 14:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:31:26.467 14:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:26.728 Running I/O for 10 seconds... 00:31:27.668 Latency(us) 00:31:27.668 [2024-10-14T12:45:08.395Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:27.668 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:27.668 Nvme0n1 : 1.00 17833.00 69.66 0.00 0.00 0.00 0.00 0.00 00:31:27.668 [2024-10-14T12:45:08.395Z] =================================================================================================================== 00:31:27.668 [2024-10-14T12:45:08.395Z] Total : 17833.00 69.66 0.00 0.00 0.00 0.00 0.00 00:31:27.668 00:31:28.608 14:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 3f0d2ddc-3549-40bb-9f43-d7923cebbb4d 00:31:28.608 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:28.608 Nvme0n1 : 2.00 17954.50 70.13 0.00 0.00 0.00 0.00 0.00 00:31:28.608 [2024-10-14T12:45:09.335Z] 
=================================================================================================================== 00:31:28.608 [2024-10-14T12:45:09.335Z] Total : 17954.50 70.13 0.00 0.00 0.00 0.00 0.00 00:31:28.608 00:31:28.608 true 00:31:28.868 14:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3f0d2ddc-3549-40bb-9f43-d7923cebbb4d 00:31:28.868 14:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:31:28.868 14:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:31:28.868 14:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:31:28.868 14:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3619690 00:31:29.811 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:29.811 Nvme0n1 : 3.00 17971.67 70.20 0.00 0.00 0.00 0.00 0.00 00:31:29.811 [2024-10-14T12:45:10.538Z] =================================================================================================================== 00:31:29.811 [2024-10-14T12:45:10.538Z] Total : 17971.67 70.20 0.00 0.00 0.00 0.00 0.00 00:31:29.811 00:31:30.750 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:30.750 Nvme0n1 : 4.00 18003.50 70.33 0.00 0.00 0.00 0.00 0.00 00:31:30.750 [2024-10-14T12:45:11.477Z] =================================================================================================================== 00:31:30.750 [2024-10-14T12:45:11.477Z] Total : 18003.50 70.33 0.00 0.00 0.00 0.00 0.00 00:31:30.750 00:31:31.692 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:31:31.692 Nvme0n1 : 5.00 18021.80 70.40 0.00 0.00 0.00 0.00 0.00 00:31:31.692 [2024-10-14T12:45:12.419Z] =================================================================================================================== 00:31:31.692 [2024-10-14T12:45:12.419Z] Total : 18021.80 70.40 0.00 0.00 0.00 0.00 0.00 00:31:31.692 00:31:32.632 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:32.632 Nvme0n1 : 6.00 18036.67 70.46 0.00 0.00 0.00 0.00 0.00 00:31:32.632 [2024-10-14T12:45:13.359Z] =================================================================================================================== 00:31:32.632 [2024-10-14T12:45:13.360Z] Total : 18036.67 70.46 0.00 0.00 0.00 0.00 0.00 00:31:32.633 00:31:33.573 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:33.573 Nvme0n1 : 7.00 18047.57 70.50 0.00 0.00 0.00 0.00 0.00 00:31:33.573 [2024-10-14T12:45:14.301Z] =================================================================================================================== 00:31:33.574 [2024-10-14T12:45:14.301Z] Total : 18047.57 70.50 0.00 0.00 0.00 0.00 0.00 00:31:33.574 00:31:34.956 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:34.956 Nvme0n1 : 8.00 18055.50 70.53 0.00 0.00 0.00 0.00 0.00 00:31:34.956 [2024-10-14T12:45:15.683Z] =================================================================================================================== 00:31:34.956 [2024-10-14T12:45:15.683Z] Total : 18055.50 70.53 0.00 0.00 0.00 0.00 0.00 00:31:34.956 00:31:35.528 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:35.528 Nvme0n1 : 9.00 18061.89 70.55 0.00 0.00 0.00 0.00 0.00 00:31:35.528 [2024-10-14T12:45:16.255Z] =================================================================================================================== 00:31:35.528 [2024-10-14T12:45:16.255Z] Total : 18061.89 70.55 0.00 0.00 0.00 0.00 0.00 00:31:35.528 
00:31:36.911 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:36.911 Nvme0n1 : 10.00 18066.80 70.57 0.00 0.00 0.00 0.00 0.00 00:31:36.911 [2024-10-14T12:45:17.638Z] =================================================================================================================== 00:31:36.911 [2024-10-14T12:45:17.638Z] Total : 18066.80 70.57 0.00 0.00 0.00 0.00 0.00 00:31:36.911 00:31:36.911 00:31:36.911 Latency(us) 00:31:36.911 [2024-10-14T12:45:17.638Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:36.911 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:36.911 Nvme0n1 : 10.01 18067.03 70.57 0.00 0.00 7081.90 2184.53 12724.91 00:31:36.911 [2024-10-14T12:45:17.638Z] =================================================================================================================== 00:31:36.911 [2024-10-14T12:45:17.638Z] Total : 18067.03 70.57 0.00 0.00 7081.90 2184.53 12724.91 00:31:36.911 { 00:31:36.911 "results": [ 00:31:36.911 { 00:31:36.911 "job": "Nvme0n1", 00:31:36.911 "core_mask": "0x2", 00:31:36.911 "workload": "randwrite", 00:31:36.911 "status": "finished", 00:31:36.911 "queue_depth": 128, 00:31:36.911 "io_size": 4096, 00:31:36.911 "runtime": 10.006955, 00:31:36.911 "iops": 18067.034377590386, 00:31:36.911 "mibps": 70.57435303746244, 00:31:36.911 "io_failed": 0, 00:31:36.911 "io_timeout": 0, 00:31:36.911 "avg_latency_us": 7081.897743460401, 00:31:36.911 "min_latency_us": 2184.5333333333333, 00:31:36.911 "max_latency_us": 12724.906666666666 00:31:36.911 } 00:31:36.911 ], 00:31:36.911 "core_count": 1 00:31:36.911 } 00:31:36.911 14:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3619501 00:31:36.911 14:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 3619501 ']' 00:31:36.911 14:45:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 3619501 00:31:36.911 14:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:31:36.911 14:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:36.911 14:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3619501 00:31:36.911 14:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:31:36.911 14:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:31:36.911 14:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3619501' 00:31:36.911 killing process with pid 3619501 00:31:36.911 14:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 3619501 00:31:36.911 Received shutdown signal, test time was about 10.000000 seconds 00:31:36.911 00:31:36.911 Latency(us) 00:31:36.911 [2024-10-14T12:45:17.638Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:36.911 [2024-10-14T12:45:17.638Z] =================================================================================================================== 00:31:36.911 [2024-10-14T12:45:17.638Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:36.911 14:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 3619501 00:31:36.911 14:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:36.911 14:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:37.172 14:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3f0d2ddc-3549-40bb-9f43-d7923cebbb4d 00:31:37.172 14:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:31:37.432 14:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:31:37.432 14:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:31:37.432 14:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:37.432 [2024-10-14 14:45:18.144437] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:31:37.693 14:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3f0d2ddc-3549-40bb-9f43-d7923cebbb4d 00:31:37.693 14:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:31:37.693 14:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3f0d2ddc-3549-40bb-9f43-d7923cebbb4d 00:31:37.693 14:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:37.693 14:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:37.693 14:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:37.693 14:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:37.693 14:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:37.693 14:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:37.693 14:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:37.693 14:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:31:37.693 14:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3f0d2ddc-3549-40bb-9f43-d7923cebbb4d 00:31:37.693 request: 00:31:37.693 { 00:31:37.693 "uuid": "3f0d2ddc-3549-40bb-9f43-d7923cebbb4d", 00:31:37.693 "method": 
"bdev_lvol_get_lvstores", 00:31:37.693 "req_id": 1 00:31:37.693 } 00:31:37.693 Got JSON-RPC error response 00:31:37.693 response: 00:31:37.693 { 00:31:37.693 "code": -19, 00:31:37.693 "message": "No such device" 00:31:37.693 } 00:31:37.693 14:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:31:37.693 14:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:37.693 14:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:37.693 14:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:37.693 14:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:37.953 aio_bdev 00:31:37.953 14:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 57288abd-0044-4588-bfe9-6ca6a5f05936 00:31:37.953 14:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=57288abd-0044-4588-bfe9-6ca6a5f05936 00:31:37.953 14:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:31:37.953 14:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:31:37.953 14:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:37.953 14:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:31:37.953 14:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:37.953 14:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 57288abd-0044-4588-bfe9-6ca6a5f05936 -t 2000 00:31:38.213 [ 00:31:38.213 { 00:31:38.213 "name": "57288abd-0044-4588-bfe9-6ca6a5f05936", 00:31:38.213 "aliases": [ 00:31:38.213 "lvs/lvol" 00:31:38.213 ], 00:31:38.213 "product_name": "Logical Volume", 00:31:38.213 "block_size": 4096, 00:31:38.213 "num_blocks": 38912, 00:31:38.213 "uuid": "57288abd-0044-4588-bfe9-6ca6a5f05936", 00:31:38.213 "assigned_rate_limits": { 00:31:38.213 "rw_ios_per_sec": 0, 00:31:38.213 "rw_mbytes_per_sec": 0, 00:31:38.213 "r_mbytes_per_sec": 0, 00:31:38.213 "w_mbytes_per_sec": 0 00:31:38.213 }, 00:31:38.213 "claimed": false, 00:31:38.213 "zoned": false, 00:31:38.213 "supported_io_types": { 00:31:38.213 "read": true, 00:31:38.213 "write": true, 00:31:38.213 "unmap": true, 00:31:38.213 "flush": false, 00:31:38.213 "reset": true, 00:31:38.213 "nvme_admin": false, 00:31:38.213 "nvme_io": false, 00:31:38.213 "nvme_io_md": false, 00:31:38.214 "write_zeroes": true, 00:31:38.214 "zcopy": false, 00:31:38.214 "get_zone_info": false, 00:31:38.214 "zone_management": false, 00:31:38.214 "zone_append": false, 00:31:38.214 "compare": false, 00:31:38.214 "compare_and_write": false, 00:31:38.214 "abort": false, 00:31:38.214 "seek_hole": true, 00:31:38.214 "seek_data": true, 00:31:38.214 "copy": false, 00:31:38.214 "nvme_iov_md": false 00:31:38.214 }, 00:31:38.214 "driver_specific": { 00:31:38.214 "lvol": { 00:31:38.214 "lvol_store_uuid": "3f0d2ddc-3549-40bb-9f43-d7923cebbb4d", 00:31:38.214 "base_bdev": "aio_bdev", 00:31:38.214 
"thin_provision": false, 00:31:38.214 "num_allocated_clusters": 38, 00:31:38.214 "snapshot": false, 00:31:38.214 "clone": false, 00:31:38.214 "esnap_clone": false 00:31:38.214 } 00:31:38.214 } 00:31:38.214 } 00:31:38.214 ] 00:31:38.214 14:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:31:38.214 14:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3f0d2ddc-3549-40bb-9f43-d7923cebbb4d 00:31:38.214 14:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:31:38.474 14:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:31:38.474 14:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3f0d2ddc-3549-40bb-9f43-d7923cebbb4d 00:31:38.474 14:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:31:38.474 14:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:31:38.474 14:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 57288abd-0044-4588-bfe9-6ca6a5f05936 00:31:38.734 14:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3f0d2ddc-3549-40bb-9f43-d7923cebbb4d 
00:31:38.994 14:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:38.994 14:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:38.994 00:31:38.994 real 0m15.456s 00:31:38.994 user 0m15.196s 00:31:38.994 sys 0m1.256s 00:31:38.994 14:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:38.994 14:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:31:38.994 ************************************ 00:31:38.994 END TEST lvs_grow_clean 00:31:38.994 ************************************ 00:31:39.255 14:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:31:39.255 14:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:39.255 14:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:39.255 14:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:39.255 ************************************ 00:31:39.255 START TEST lvs_grow_dirty 00:31:39.255 ************************************ 00:31:39.255 14:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:31:39.255 14:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:31:39.255 14:45:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:31:39.255 14:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:31:39.255 14:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:31:39.255 14:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:31:39.255 14:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:31:39.255 14:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:39.255 14:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:39.255 14:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:39.515 14:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:31:39.515 14:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:31:39.515 14:45:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=6b10d5b2-85eb-48dd-885a-31445116a602 00:31:39.515 14:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6b10d5b2-85eb-48dd-885a-31445116a602 00:31:39.515 14:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:31:39.775 14:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:31:39.775 14:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:31:39.775 14:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6b10d5b2-85eb-48dd-885a-31445116a602 lvol 150 00:31:40.036 14:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=66965bef-4bc5-4bb1-affb-d7d7a8216e58 00:31:40.036 14:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:40.036 14:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:31:40.036 [2024-10-14 14:45:20.660403] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:31:40.036 [2024-10-14 
14:45:20.660468] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:31:40.036 true 00:31:40.036 14:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6b10d5b2-85eb-48dd-885a-31445116a602 00:31:40.036 14:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:31:40.297 14:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:31:40.297 14:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:40.297 14:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 66965bef-4bc5-4bb1-affb-d7d7a8216e58 00:31:40.558 14:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:40.819 [2024-10-14 14:45:21.316660] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:40.819 14:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:40.819 14:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3622894 00:31:40.819 14:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:31:40.819 14:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:40.819 14:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3622894 /var/tmp/bdevperf.sock 00:31:40.819 14:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 3622894 ']' 00:31:40.819 14:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:40.819 14:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:40.819 14:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:40.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:40.819 14:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:40.819 14:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:40.819 [2024-10-14 14:45:21.524956] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 
00:31:40.819 [2024-10-14 14:45:21.525001] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3622894 ] 00:31:41.079 [2024-10-14 14:45:21.592679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:41.079 [2024-10-14 14:45:21.622516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:41.079 14:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:41.079 14:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:31:41.079 14:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:31:41.339 Nvme0n1 00:31:41.339 14:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:31:41.599 [ 00:31:41.599 { 00:31:41.599 "name": "Nvme0n1", 00:31:41.599 "aliases": [ 00:31:41.599 "66965bef-4bc5-4bb1-affb-d7d7a8216e58" 00:31:41.599 ], 00:31:41.599 "product_name": "NVMe disk", 00:31:41.599 "block_size": 4096, 00:31:41.599 "num_blocks": 38912, 00:31:41.599 "uuid": "66965bef-4bc5-4bb1-affb-d7d7a8216e58", 00:31:41.599 "numa_id": 0, 00:31:41.599 "assigned_rate_limits": { 00:31:41.599 "rw_ios_per_sec": 0, 00:31:41.599 "rw_mbytes_per_sec": 0, 00:31:41.599 "r_mbytes_per_sec": 0, 00:31:41.599 "w_mbytes_per_sec": 0 00:31:41.599 }, 00:31:41.599 "claimed": false, 00:31:41.599 "zoned": false, 
00:31:41.599 "supported_io_types": { 00:31:41.599 "read": true, 00:31:41.599 "write": true, 00:31:41.599 "unmap": true, 00:31:41.599 "flush": true, 00:31:41.599 "reset": true, 00:31:41.599 "nvme_admin": true, 00:31:41.599 "nvme_io": true, 00:31:41.599 "nvme_io_md": false, 00:31:41.599 "write_zeroes": true, 00:31:41.599 "zcopy": false, 00:31:41.599 "get_zone_info": false, 00:31:41.599 "zone_management": false, 00:31:41.599 "zone_append": false, 00:31:41.599 "compare": true, 00:31:41.599 "compare_and_write": true, 00:31:41.599 "abort": true, 00:31:41.599 "seek_hole": false, 00:31:41.599 "seek_data": false, 00:31:41.599 "copy": true, 00:31:41.599 "nvme_iov_md": false 00:31:41.599 }, 00:31:41.599 "memory_domains": [ 00:31:41.599 { 00:31:41.599 "dma_device_id": "system", 00:31:41.599 "dma_device_type": 1 00:31:41.599 } 00:31:41.599 ], 00:31:41.599 "driver_specific": { 00:31:41.599 "nvme": [ 00:31:41.599 { 00:31:41.599 "trid": { 00:31:41.599 "trtype": "TCP", 00:31:41.599 "adrfam": "IPv4", 00:31:41.599 "traddr": "10.0.0.2", 00:31:41.599 "trsvcid": "4420", 00:31:41.599 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:41.599 }, 00:31:41.600 "ctrlr_data": { 00:31:41.600 "cntlid": 1, 00:31:41.600 "vendor_id": "0x8086", 00:31:41.600 "model_number": "SPDK bdev Controller", 00:31:41.600 "serial_number": "SPDK0", 00:31:41.600 "firmware_revision": "25.01", 00:31:41.600 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:41.600 "oacs": { 00:31:41.600 "security": 0, 00:31:41.600 "format": 0, 00:31:41.600 "firmware": 0, 00:31:41.600 "ns_manage": 0 00:31:41.600 }, 00:31:41.600 "multi_ctrlr": true, 00:31:41.600 "ana_reporting": false 00:31:41.600 }, 00:31:41.600 "vs": { 00:31:41.600 "nvme_version": "1.3" 00:31:41.600 }, 00:31:41.600 "ns_data": { 00:31:41.600 "id": 1, 00:31:41.600 "can_share": true 00:31:41.600 } 00:31:41.600 } 00:31:41.600 ], 00:31:41.600 "mp_policy": "active_passive" 00:31:41.600 } 00:31:41.600 } 00:31:41.600 ] 00:31:41.600 14:45:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3623034 00:31:41.600 14:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:41.600 14:45:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:31:41.600 Running I/O for 10 seconds... 00:31:42.560 Latency(us) 00:31:42.560 [2024-10-14T12:45:23.287Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:42.560 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:42.560 Nvme0n1 : 1.00 17789.00 69.49 0.00 0.00 0.00 0.00 0.00 00:31:42.560 [2024-10-14T12:45:23.287Z] =================================================================================================================== 00:31:42.560 [2024-10-14T12:45:23.287Z] Total : 17789.00 69.49 0.00 0.00 0.00 0.00 0.00 00:31:42.560 00:31:43.500 14:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 6b10d5b2-85eb-48dd-885a-31445116a602 00:31:43.760 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:43.760 Nvme0n1 : 2.00 17887.00 69.87 0.00 0.00 0.00 0.00 0.00 00:31:43.760 [2024-10-14T12:45:24.487Z] =================================================================================================================== 00:31:43.760 [2024-10-14T12:45:24.487Z] Total : 17887.00 69.87 0.00 0.00 0.00 0.00 0.00 00:31:43.760 00:31:43.760 true 00:31:43.760 14:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 6b10d5b2-85eb-48dd-885a-31445116a602 00:31:43.760 14:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:31:44.020 14:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:31:44.020 14:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:31:44.020 14:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3623034 00:31:44.590 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:44.590 Nvme0n1 : 3.00 17919.00 70.00 0.00 0.00 0.00 0.00 0.00 00:31:44.590 [2024-10-14T12:45:25.317Z] =================================================================================================================== 00:31:44.590 [2024-10-14T12:45:25.317Z] Total : 17919.00 70.00 0.00 0.00 0.00 0.00 0.00 00:31:44.590 00:31:45.972 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:45.972 Nvme0n1 : 4.00 17951.25 70.12 0.00 0.00 0.00 0.00 0.00 00:31:45.972 [2024-10-14T12:45:26.699Z] =================================================================================================================== 00:31:45.972 [2024-10-14T12:45:26.699Z] Total : 17951.25 70.12 0.00 0.00 0.00 0.00 0.00 00:31:45.972 00:31:46.912 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:46.912 Nvme0n1 : 5.00 17983.60 70.25 0.00 0.00 0.00 0.00 0.00 00:31:46.912 [2024-10-14T12:45:27.639Z] =================================================================================================================== 00:31:46.912 [2024-10-14T12:45:27.639Z] Total : 17983.60 70.25 0.00 0.00 0.00 0.00 0.00 00:31:46.912 00:31:47.850 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:31:47.850 Nvme0n1 : 6.00 18004.83 70.33 0.00 0.00 0.00 0.00 0.00 00:31:47.850 [2024-10-14T12:45:28.577Z] =================================================================================================================== 00:31:47.850 [2024-10-14T12:45:28.577Z] Total : 18004.83 70.33 0.00 0.00 0.00 0.00 0.00 00:31:47.850 00:31:48.790 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:48.790 Nvme0n1 : 7.00 18013.43 70.36 0.00 0.00 0.00 0.00 0.00 00:31:48.790 [2024-10-14T12:45:29.517Z] =================================================================================================================== 00:31:48.790 [2024-10-14T12:45:29.517Z] Total : 18013.43 70.36 0.00 0.00 0.00 0.00 0.00 00:31:48.790 00:31:49.729 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:49.729 Nvme0n1 : 8.00 18023.75 70.41 0.00 0.00 0.00 0.00 0.00 00:31:49.729 [2024-10-14T12:45:30.456Z] =================================================================================================================== 00:31:49.729 [2024-10-14T12:45:30.456Z] Total : 18023.75 70.41 0.00 0.00 0.00 0.00 0.00 00:31:49.729 00:31:50.670 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:50.670 Nvme0n1 : 9.00 18033.44 70.44 0.00 0.00 0.00 0.00 0.00 00:31:50.670 [2024-10-14T12:45:31.397Z] =================================================================================================================== 00:31:50.670 [2024-10-14T12:45:31.397Z] Total : 18033.44 70.44 0.00 0.00 0.00 0.00 0.00 00:31:50.670 00:31:51.609 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:51.609 Nvme0n1 : 10.00 18041.40 70.47 0.00 0.00 0.00 0.00 0.00 00:31:51.609 [2024-10-14T12:45:32.336Z] =================================================================================================================== 00:31:51.609 [2024-10-14T12:45:32.336Z] Total : 18041.40 70.47 0.00 0.00 0.00 0.00 0.00 00:31:51.609 00:31:51.609 
00:31:51.609 Latency(us) 00:31:51.609 [2024-10-14T12:45:32.336Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:51.609 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:51.609 Nvme0n1 : 10.01 18043.65 70.48 0.00 0.00 7090.69 1740.80 12724.91 00:31:51.609 [2024-10-14T12:45:32.336Z] =================================================================================================================== 00:31:51.609 [2024-10-14T12:45:32.337Z] Total : 18043.65 70.48 0.00 0.00 7090.69 1740.80 12724.91 00:31:51.610 { 00:31:51.610 "results": [ 00:31:51.610 { 00:31:51.610 "job": "Nvme0n1", 00:31:51.610 "core_mask": "0x2", 00:31:51.610 "workload": "randwrite", 00:31:51.610 "status": "finished", 00:31:51.610 "queue_depth": 128, 00:31:51.610 "io_size": 4096, 00:31:51.610 "runtime": 10.005848, 00:31:51.610 "iops": 18043.64807460597, 00:31:51.610 "mibps": 70.48300029142958, 00:31:51.610 "io_failed": 0, 00:31:51.610 "io_timeout": 0, 00:31:51.610 "avg_latency_us": 7090.689007691654, 00:31:51.610 "min_latency_us": 1740.8, 00:31:51.610 "max_latency_us": 12724.906666666666 00:31:51.610 } 00:31:51.610 ], 00:31:51.610 "core_count": 1 00:31:51.610 } 00:31:51.610 14:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3622894 00:31:51.610 14:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 3622894 ']' 00:31:51.610 14:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 3622894 00:31:51.610 14:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:31:51.610 14:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:51.610 14:45:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3622894 00:31:51.871 14:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:31:51.871 14:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:31:51.871 14:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3622894' 00:31:51.871 killing process with pid 3622894 00:31:51.871 14:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 3622894 00:31:51.871 Received shutdown signal, test time was about 10.000000 seconds 00:31:51.871 00:31:51.871 Latency(us) 00:31:51.871 [2024-10-14T12:45:32.598Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:51.871 [2024-10-14T12:45:32.598Z] =================================================================================================================== 00:31:51.871 [2024-10-14T12:45:32.598Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:51.871 14:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 3622894 00:31:51.871 14:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:52.131 14:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:52.391 14:45:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6b10d5b2-85eb-48dd-885a-31445116a602 00:31:52.391 14:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:31:52.391 14:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:31:52.391 14:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:31:52.391 14:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3618812 00:31:52.391 14:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3618812 00:31:52.391 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3618812 Killed "${NVMF_APP[@]}" "$@" 00:31:52.391 14:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:31:52.391 14:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:31:52.391 14:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:52.391 14:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:52.391 14:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:52.391 14:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # nvmfpid=3625060 00:31:52.391 14:45:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # waitforlisten 3625060 00:31:52.391 14:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:31:52.391 14:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 3625060 ']' 00:31:52.391 14:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:52.391 14:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:52.391 14:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:52.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:52.391 14:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:52.391 14:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:52.651 [2024-10-14 14:45:33.152676] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:52.651 [2024-10-14 14:45:33.153690] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 
00:31:52.651 [2024-10-14 14:45:33.153731] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:52.651 [2024-10-14 14:45:33.221697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:52.651 [2024-10-14 14:45:33.256744] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:52.651 [2024-10-14 14:45:33.256779] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:52.651 [2024-10-14 14:45:33.256787] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:52.651 [2024-10-14 14:45:33.256794] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:52.651 [2024-10-14 14:45:33.256800] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:52.651 [2024-10-14 14:45:33.257354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:52.651 [2024-10-14 14:45:33.311593] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:52.651 [2024-10-14 14:45:33.311852] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:31:52.651 14:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:52.651 14:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:31:52.651 14:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:52.651 14:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:52.651 14:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:52.912 14:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:52.912 14:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:52.912 [2024-10-14 14:45:33.540688] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:31:52.912 [2024-10-14 14:45:33.540827] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:31:52.912 [2024-10-14 14:45:33.540859] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:31:52.912 14:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:31:52.912 14:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 66965bef-4bc5-4bb1-affb-d7d7a8216e58 00:31:52.912 14:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local 
bdev_name=66965bef-4bc5-4bb1-affb-d7d7a8216e58
00:31:52.912 14:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:31:52.912 14:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i
00:31:52.912 14:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:31:52.912 14:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:31:52.912 14:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine
00:31:53.173 14:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 66965bef-4bc5-4bb1-affb-d7d7a8216e58 -t 2000
00:31:53.173 [
00:31:53.173 {
00:31:53.173 "name": "66965bef-4bc5-4bb1-affb-d7d7a8216e58",
00:31:53.173 "aliases": [
00:31:53.173 "lvs/lvol"
00:31:53.173 ],
00:31:53.173 "product_name": "Logical Volume",
00:31:53.173 "block_size": 4096,
00:31:53.173 "num_blocks": 38912,
00:31:53.173 "uuid": "66965bef-4bc5-4bb1-affb-d7d7a8216e58",
00:31:53.173 "assigned_rate_limits": {
00:31:53.173 "rw_ios_per_sec": 0,
00:31:53.173 "rw_mbytes_per_sec": 0,
00:31:53.173 "r_mbytes_per_sec": 0,
00:31:53.173 "w_mbytes_per_sec": 0
00:31:53.173 },
00:31:53.173 "claimed": false,
00:31:53.173 "zoned": false,
00:31:53.173 "supported_io_types": {
00:31:53.173 "read": true,
00:31:53.173 "write": true,
00:31:53.173 "unmap": true,
00:31:53.173 "flush": false,
00:31:53.173 "reset": true,
00:31:53.173 "nvme_admin": false,
00:31:53.173 "nvme_io": false,
00:31:53.173 "nvme_io_md": false,
00:31:53.173 "write_zeroes": true,
00:31:53.173 "zcopy": false,
00:31:53.173 "get_zone_info": false,
00:31:53.173 "zone_management": false,
00:31:53.173 "zone_append": false,
00:31:53.173 "compare": false,
00:31:53.173 "compare_and_write": false,
00:31:53.173 "abort": false,
00:31:53.173 "seek_hole": true,
00:31:53.173 "seek_data": true,
00:31:53.173 "copy": false,
00:31:53.173 "nvme_iov_md": false
00:31:53.173 },
00:31:53.173 "driver_specific": {
00:31:53.173 "lvol": {
00:31:53.173 "lvol_store_uuid": "6b10d5b2-85eb-48dd-885a-31445116a602",
00:31:53.173 "base_bdev": "aio_bdev",
00:31:53.173 "thin_provision": false,
00:31:53.173 "num_allocated_clusters": 38,
00:31:53.173 "snapshot": false,
00:31:53.173 "clone": false,
00:31:53.173 "esnap_clone": false
00:31:53.173 }
00:31:53.173 }
00:31:53.173 }
00:31:53.173 ]
00:31:53.173 14:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0
00:31:53.173 14:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6b10d5b2-85eb-48dd-885a-31445116a602
00:31:53.173 14:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters'
00:31:53.434 14:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 ))
00:31:53.434 14:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6b10d5b2-85eb-48dd-885a-31445116a602
00:31:53.434 14:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters'
00:31:53.695 14:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty --
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:31:53.695 14:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:53.695 [2024-10-14 14:45:34.381886] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:31:53.695 14:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6b10d5b2-85eb-48dd-885a-31445116a602 00:31:53.695 14:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:31:53.695 14:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6b10d5b2-85eb-48dd-885a-31445116a602 00:31:53.695 14:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:53.695 14:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:53.695 14:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:53.695 14:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:53.695 14:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:53.695 14:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:53.695 14:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:53.695 14:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:31:53.695 14:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6b10d5b2-85eb-48dd-885a-31445116a602 00:31:53.956 request: 00:31:53.956 { 00:31:53.956 "uuid": "6b10d5b2-85eb-48dd-885a-31445116a602", 00:31:53.956 "method": "bdev_lvol_get_lvstores", 00:31:53.956 "req_id": 1 00:31:53.956 } 00:31:53.956 Got JSON-RPC error response 00:31:53.956 response: 00:31:53.956 { 00:31:53.956 "code": -19, 00:31:53.956 "message": "No such device" 00:31:53.956 } 00:31:53.956 14:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:31:53.956 14:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:53.956 14:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:53.956 14:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:53.956 14:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:31:54.216 aio_bdev
00:31:54.216 14:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 66965bef-4bc5-4bb1-affb-d7d7a8216e58
00:31:54.216 14:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=66965bef-4bc5-4bb1-affb-d7d7a8216e58
00:31:54.216 14:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:31:54.216 14:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i
00:31:54.216 14:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:31:54.216 14:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:31:54.216 14:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine
00:31:54.216 14:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 66965bef-4bc5-4bb1-affb-d7d7a8216e58 -t 2000
00:31:54.476 [
00:31:54.476 {
00:31:54.476 "name": "66965bef-4bc5-4bb1-affb-d7d7a8216e58",
00:31:54.476 "aliases": [
00:31:54.476 "lvs/lvol"
00:31:54.476 ],
00:31:54.476 "product_name": "Logical Volume",
00:31:54.476 "block_size": 4096,
00:31:54.476 "num_blocks": 38912,
00:31:54.476 "uuid": "66965bef-4bc5-4bb1-affb-d7d7a8216e58",
00:31:54.476 "assigned_rate_limits": {
00:31:54.476 "rw_ios_per_sec": 0,
00:31:54.476 "rw_mbytes_per_sec": 0,
00:31:54.476 "r_mbytes_per_sec": 0,
00:31:54.476 "w_mbytes_per_sec": 0
00:31:54.476 },
00:31:54.476 "claimed": false,
00:31:54.476 "zoned": false,
00:31:54.476 "supported_io_types": {
00:31:54.476 "read": true,
00:31:54.476 "write": true,
00:31:54.476 "unmap": true,
00:31:54.476 "flush": false,
00:31:54.476 "reset": true,
00:31:54.476 "nvme_admin": false,
00:31:54.476 "nvme_io": false,
00:31:54.476 "nvme_io_md": false,
00:31:54.476 "write_zeroes": true,
00:31:54.476 "zcopy": false,
00:31:54.476 "get_zone_info": false,
00:31:54.476 "zone_management": false,
00:31:54.476 "zone_append": false,
00:31:54.476 "compare": false,
00:31:54.476 "compare_and_write": false,
00:31:54.476 "abort": false,
00:31:54.476 "seek_hole": true,
00:31:54.476 "seek_data": true,
00:31:54.476 "copy": false,
00:31:54.476 "nvme_iov_md": false
00:31:54.476 },
00:31:54.476 "driver_specific": {
00:31:54.476 "lvol": {
00:31:54.476 "lvol_store_uuid": "6b10d5b2-85eb-48dd-885a-31445116a602",
00:31:54.476 "base_bdev": "aio_bdev",
00:31:54.476 "thin_provision": false,
00:31:54.476 "num_allocated_clusters": 38,
00:31:54.476 "snapshot": false,
00:31:54.476 "clone": false,
00:31:54.476 "esnap_clone": false
00:31:54.476 }
00:31:54.476 }
00:31:54.476 }
00:31:54.476 ]
00:31:54.476 14:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0
00:31:54.476 14:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6b10d5b2-85eb-48dd-885a-31445116a602
00:31:54.476 14:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters'
00:31:54.736 14:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 ))
00:31:54.736 14:45:35
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6b10d5b2-85eb-48dd-885a-31445116a602 00:31:54.736 14:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:31:54.736 14:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:31:54.737 14:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 66965bef-4bc5-4bb1-affb-d7d7a8216e58 00:31:54.996 14:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6b10d5b2-85eb-48dd-885a-31445116a602 00:31:55.256 14:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:55.256 14:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:55.516 00:31:55.516 real 0m16.205s 00:31:55.516 user 0m34.451s 00:31:55.516 sys 0m2.807s 00:31:55.516 14:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:55.516 14:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:55.516 ************************************ 00:31:55.516 END TEST lvs_grow_dirty 00:31:55.516 ************************************ 
00:31:55.516 14:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:31:55.516 14:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:31:55.516 14:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:31:55.516 14:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:31:55.516 14:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:31:55.516 14:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:31:55.516 14:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:31:55.516 14:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:31:55.516 14:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:31:55.516 nvmf_trace.0 00:31:55.516 14:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:31:55.516 14:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:31:55.516 14:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@514 -- # nvmfcleanup 00:31:55.516 14:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:31:55.516 14:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:55.516 14:45:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:31:55.516 14:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:55.516 14:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:55.516 rmmod nvme_tcp 00:31:55.516 rmmod nvme_fabrics 00:31:55.516 rmmod nvme_keyring 00:31:55.516 14:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:55.516 14:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:31:55.516 14:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:31:55.516 14:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@515 -- # '[' -n 3625060 ']' 00:31:55.516 14:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # killprocess 3625060 00:31:55.516 14:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 3625060 ']' 00:31:55.516 14:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 3625060 00:31:55.516 14:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:31:55.516 14:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:55.516 14:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3625060 00:31:55.517 14:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:55.517 14:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:55.517 
14:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3625060' 00:31:55.517 killing process with pid 3625060 00:31:55.517 14:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 3625060 00:31:55.517 14:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 3625060 00:31:55.777 14:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:55.777 14:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:55.777 14:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:55.777 14:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:31:55.777 14:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-save 00:31:55.777 14:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:55.777 14:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-restore 00:31:55.777 14:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:55.777 14:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:55.777 14:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:55.777 14:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:55.777 14:45:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:58.320 
14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:58.320 00:31:58.320 real 0m42.975s 00:31:58.320 user 0m52.497s 00:31:58.320 sys 0m10.176s 00:31:58.320 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:58.320 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:58.320 ************************************ 00:31:58.320 END TEST nvmf_lvs_grow 00:31:58.320 ************************************ 00:31:58.320 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:31:58.320 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:31:58.320 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:58.320 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:58.320 ************************************ 00:31:58.320 START TEST nvmf_bdev_io_wait 00:31:58.320 ************************************ 00:31:58.320 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:31:58.320 * Looking for test storage... 
00:31:58.320 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:58.320 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:58.320 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:31:58.320 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:58.320 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:58.320 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:58.320 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:58.320 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:58.320 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:31:58.320 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:31:58.320 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:31:58.320 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:31:58.320 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:31:58.320 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:31:58.320 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:31:58.320 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:31:58.320 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:31:58.320 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:31:58.320 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:58.320 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:58.320 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:31:58.320 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:31:58.320 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:58.320 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:31:58.320 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:31:58.320 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:31:58.320 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:31:58.320 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:58.320 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:31:58.320 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:31:58.320 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:58.320 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:58.320 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:31:58.320 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:58.320 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:58.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:58.320 --rc genhtml_branch_coverage=1 00:31:58.320 --rc genhtml_function_coverage=1 00:31:58.320 --rc genhtml_legend=1 00:31:58.320 --rc geninfo_all_blocks=1 00:31:58.320 --rc geninfo_unexecuted_blocks=1 00:31:58.320 00:31:58.320 ' 00:31:58.320 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:58.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:58.320 --rc genhtml_branch_coverage=1 00:31:58.320 --rc genhtml_function_coverage=1 00:31:58.320 --rc genhtml_legend=1 00:31:58.320 --rc geninfo_all_blocks=1 00:31:58.320 --rc geninfo_unexecuted_blocks=1 00:31:58.320 00:31:58.320 ' 00:31:58.320 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:58.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:58.320 --rc genhtml_branch_coverage=1 00:31:58.320 --rc genhtml_function_coverage=1 00:31:58.320 --rc genhtml_legend=1 00:31:58.320 --rc geninfo_all_blocks=1 00:31:58.320 --rc geninfo_unexecuted_blocks=1 00:31:58.321 00:31:58.321 ' 00:31:58.321 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:58.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:58.321 --rc genhtml_branch_coverage=1 00:31:58.321 --rc genhtml_function_coverage=1 
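The `cmp_versions` / `decimal` calls traced above implement a dotted-version comparison (here checking whether the installed `lcov` predates 2.x). A minimal standalone sketch of that logic follows; the function name `ver_lt` is illustrative, not the actual SPDK helper:

```shell
# Compare two dotted versions; return 0 (success) if $1 < $2.
# Mirrors the scripts/common.sh flow seen in the trace: split on .-:,
# then compare component by component, padding missing fields with 0.
ver_lt() {
    local IFS=.-:
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < max; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal is not "less than"
}

ver_lt 1.15 2 && echo "1.15 < 2"
```

This is why the trace takes the `lt 1.15 2` branch and enables the branch/function coverage flags in `LCOV_OPTS`.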
00:31:58.321 --rc genhtml_legend=1 00:31:58.321 --rc geninfo_all_blocks=1 00:31:58.321 --rc geninfo_unexecuted_blocks=1 00:31:58.321 00:31:58.321 ' 00:31:58.321 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:58.321 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:31:58.321 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:58.321 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:58.321 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:58.321 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:58.321 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:58.321 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:58.321 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:58.321 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:58.321 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:58.321 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:58.321 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:58.321 14:45:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:58.321 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:58.321 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:58.321 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:58.321 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:58.321 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:58.321 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:31:58.321 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:58.321 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:58.321 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:58.321 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:58.321 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:58.321 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:58.321 14:45:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:31:58.321 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:58.321 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:31:58.321 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:58.321 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:58.321 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:58.321 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:58.321 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:58.321 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:58.321 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:58.321 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:58.321 14:45:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:58.321 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:58.321 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:58.321 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:58.321 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:31:58.321 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:31:58.321 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:58.321 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:58.321 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:58.321 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:58.321 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:58.321 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:58.321 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:58.321 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:31:58.321 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:31:58.321 14:45:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:31:58.321 14:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:06.461 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:06.461 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:32:06.461 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:06.461 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:06.461 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:06.461 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:06.461 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:06.461 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:32:06.461 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:06.461 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:32:06.461 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:32:06.461 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:32:06.461 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:32:06.461 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:32:06.461 14:45:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:32:06.461 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:06.461 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:06.461 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:06.461 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:06.461 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:06.461 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:06.461 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:06.461 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:06.461 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:06.461 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:06.461 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:06.461 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:06.461 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:06.461 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:06.461 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:06.461 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:06.462 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:06.462 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:06.462 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:06.462 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:06.462 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:06.462 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:06.462 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:06.462 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:06.462 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:06.462 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:06.462 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:06.462 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:06.462 Found 
0000:31:00.1 (0x8086 - 0x159b) 00:32:06.462 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:06.462 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:06.462 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:06.462 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:06.462 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:06.462 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:06.462 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:06.462 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:06.462 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:06.462 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:06.462 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:06.462 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:06.462 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:06.462 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:06.462 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:06.462 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:06.462 Found net devices under 0000:31:00.0: cvl_0_0 00:32:06.462 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:06.462 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:06.462 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:06.462 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:06.462 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:06.462 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:06.462 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:06.462 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:06.462 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:06.462 Found net devices under 0000:31:00.1: cvl_0_1 00:32:06.462 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:06.462 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:32:06.462 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # is_hw=yes 00:32:06.462 14:45:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:32:06.462 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:32:06.462 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:32:06.462 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:06.462 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:06.462 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:06.462 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:06.462 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:06.462 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:06.462 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:06.462 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:06.462 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:06.462 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:06.462 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:06.462 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:32:06.462 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:06.462 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:06.462 14:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:06.462 14:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:06.462 14:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:06.462 14:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:06.462 14:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:06.462 14:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:06.462 14:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:06.462 14:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:06.462 14:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:06.462 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:06.462 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.662 ms 00:32:06.462 00:32:06.462 --- 10.0.0.2 ping statistics --- 00:32:06.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:06.462 rtt min/avg/max/mdev = 0.662/0.662/0.662/0.000 ms 00:32:06.462 14:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:06.462 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:06.462 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:32:06.462 00:32:06.462 --- 10.0.0.1 ping statistics --- 00:32:06.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:06.462 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:32:06.462 14:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:06.462 14:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # return 0 00:32:06.462 14:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:32:06.462 14:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:06.462 14:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:32:06.462 14:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:32:06.462 14:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:06.462 14:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:32:06.462 14:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:32:06.462 14:45:46 
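The `nvmf_tcp_init` steps traced above (common.sh lines 250-291) build a two-namespace test topology: the target NIC is moved into a network namespace while the initiator NIC stays in the default namespace, then connectivity is verified in both directions. A hedged configuration sketch of that sequence, assuming root privileges and that the `cvl_0_0`/`cvl_0_1` interfaces exist (command order mirrors the trace, but this is not the exact SPDK helper):

```shell
# Target side lives in a netns; initiator side stays in the default netns.
NS=cvl_0_0_ns_spdk
TARGET_IF=cvl_0_0          # will hold 10.0.0.2, inside the namespace
INITIATOR_IF=cvl_0_1       # will hold 10.0.0.1, default namespace

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up
# Admit NVMe/TCP traffic (port 4420) arriving on the initiator interface.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
# Connectivity check in both directions, as the ping output above shows.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```

Isolating the target in its own namespace lets one physical host act as both NVMe-oF target and initiator over a real NIC pair, which is why every subsequent target command in the log is prefixed with `ip netns exec cvl_0_0_ns_spdk`.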
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:32:06.462 14:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:32:06.462 14:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:06.462 14:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:06.462 14:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # nvmfpid=3629905 00:32:06.462 14:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # waitforlisten 3629905 00:32:06.462 14:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:32:06.462 14:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 3629905 ']' 00:32:06.462 14:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:06.462 14:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:06.462 14:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:06.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:06.462 14:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:06.462 14:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:06.462 [2024-10-14 14:45:46.352103] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:06.463 [2024-10-14 14:45:46.353469] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:32:06.463 [2024-10-14 14:45:46.353534] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:06.463 [2024-10-14 14:45:46.430119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:06.463 [2024-10-14 14:45:46.475436] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:06.463 [2024-10-14 14:45:46.475477] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:06.463 [2024-10-14 14:45:46.475485] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:06.463 [2024-10-14 14:45:46.475492] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:06.463 [2024-10-14 14:45:46.475498] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:06.463 [2024-10-14 14:45:46.477530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:06.463 [2024-10-14 14:45:46.477647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:06.463 [2024-10-14 14:45:46.477804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:06.463 [2024-10-14 14:45:46.477805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:06.463 [2024-10-14 14:45:46.478096] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:06.463 14:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:06.463 14:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:32:06.463 14:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:32:06.463 14:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:06.463 14:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:06.724 14:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:06.724 14:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:32:06.724 14:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.724 14:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:06.724 14:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.724 14:45:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:32:06.724 14:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.724 14:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:06.724 [2024-10-14 14:45:47.247869] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:06.724 [2024-10-14 14:45:47.248412] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:06.724 [2024-10-14 14:45:47.248965] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:32:06.724 [2024-10-14 14:45:47.249215] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:32:06.724 14:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.724 14:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:06.724 14:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.724 14:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:06.724 [2024-10-14 14:45:47.254269] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:06.724 14:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.724 14:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:06.724 14:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.724 14:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:06.724 Malloc0 00:32:06.724 14:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.724 14:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:06.724 14:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.724 14:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:06.724 14:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.724 14:45:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:06.724 14:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.724 14:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:06.724 14:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.724 14:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:06.724 14:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.724 14:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:06.724 [2024-10-14 14:45:47.306441] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:06.724 14:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.724 14:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3630213 00:32:06.724 14:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3630215 00:32:06.724 14:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:32:06.724 14:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:32:06.724 14:45:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:32:06.724 14:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:32:06.724 14:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:06.724 14:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:06.724 { 00:32:06.724 "params": { 00:32:06.724 "name": "Nvme$subsystem", 00:32:06.724 "trtype": "$TEST_TRANSPORT", 00:32:06.724 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:06.724 "adrfam": "ipv4", 00:32:06.724 "trsvcid": "$NVMF_PORT", 00:32:06.724 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:06.724 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:06.724 "hdgst": ${hdgst:-false}, 00:32:06.724 "ddgst": ${ddgst:-false} 00:32:06.724 }, 00:32:06.724 "method": "bdev_nvme_attach_controller" 00:32:06.724 } 00:32:06.724 EOF 00:32:06.724 )") 00:32:06.724 14:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3630217 00:32:06.724 14:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:32:06.724 14:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:32:06.724 14:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:32:06.724 14:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:32:06.724 14:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:06.724 14:45:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3630220 00:32:06.724 14:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:06.724 { 00:32:06.724 "params": { 00:32:06.724 "name": "Nvme$subsystem", 00:32:06.724 "trtype": "$TEST_TRANSPORT", 00:32:06.724 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:06.724 "adrfam": "ipv4", 00:32:06.724 "trsvcid": "$NVMF_PORT", 00:32:06.724 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:06.724 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:06.724 "hdgst": ${hdgst:-false}, 00:32:06.724 "ddgst": ${ddgst:-false} 00:32:06.724 }, 00:32:06.724 "method": "bdev_nvme_attach_controller" 00:32:06.724 } 00:32:06.724 EOF 00:32:06.724 )") 00:32:06.724 14:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:32:06.724 14:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:32:06.724 14:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:32:06.724 14:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:32:06.724 14:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:32:06.724 14:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:32:06.724 14:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:06.724 14:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:06.724 { 00:32:06.724 "params": { 00:32:06.724 "name": 
"Nvme$subsystem", 00:32:06.724 "trtype": "$TEST_TRANSPORT", 00:32:06.724 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:06.724 "adrfam": "ipv4", 00:32:06.724 "trsvcid": "$NVMF_PORT", 00:32:06.724 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:06.724 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:06.724 "hdgst": ${hdgst:-false}, 00:32:06.724 "ddgst": ${ddgst:-false} 00:32:06.724 }, 00:32:06.724 "method": "bdev_nvme_attach_controller" 00:32:06.724 } 00:32:06.724 EOF 00:32:06.724 )") 00:32:06.724 14:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:32:06.724 14:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:32:06.724 14:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:32:06.724 14:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:32:06.724 14:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:32:06.724 14:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:06.724 14:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:06.724 { 00:32:06.724 "params": { 00:32:06.724 "name": "Nvme$subsystem", 00:32:06.725 "trtype": "$TEST_TRANSPORT", 00:32:06.725 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:06.725 "adrfam": "ipv4", 00:32:06.725 "trsvcid": "$NVMF_PORT", 00:32:06.725 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:06.725 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:06.725 "hdgst": ${hdgst:-false}, 00:32:06.725 "ddgst": ${ddgst:-false} 00:32:06.725 }, 00:32:06.725 "method": 
"bdev_nvme_attach_controller" 00:32:06.725 } 00:32:06.725 EOF 00:32:06.725 )") 00:32:06.725 14:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:32:06.725 14:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3630213 00:32:06.725 14:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:32:06.725 14:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:32:06.725 14:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:32:06.725 14:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:32:06.725 14:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:32:06.725 14:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:32:06.725 "params": { 00:32:06.725 "name": "Nvme1", 00:32:06.725 "trtype": "tcp", 00:32:06.725 "traddr": "10.0.0.2", 00:32:06.725 "adrfam": "ipv4", 00:32:06.725 "trsvcid": "4420", 00:32:06.725 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:06.725 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:06.725 "hdgst": false, 00:32:06.725 "ddgst": false 00:32:06.725 }, 00:32:06.725 "method": "bdev_nvme_attach_controller" 00:32:06.725 }' 00:32:06.725 14:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 
00:32:06.725 14:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:32:06.725 14:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:32:06.725 "params": { 00:32:06.725 "name": "Nvme1", 00:32:06.725 "trtype": "tcp", 00:32:06.725 "traddr": "10.0.0.2", 00:32:06.725 "adrfam": "ipv4", 00:32:06.725 "trsvcid": "4420", 00:32:06.725 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:06.725 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:06.725 "hdgst": false, 00:32:06.725 "ddgst": false 00:32:06.725 }, 00:32:06.725 "method": "bdev_nvme_attach_controller" 00:32:06.725 }' 00:32:06.725 14:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:32:06.725 14:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:32:06.725 "params": { 00:32:06.725 "name": "Nvme1", 00:32:06.725 "trtype": "tcp", 00:32:06.725 "traddr": "10.0.0.2", 00:32:06.725 "adrfam": "ipv4", 00:32:06.725 "trsvcid": "4420", 00:32:06.725 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:06.725 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:06.725 "hdgst": false, 00:32:06.725 "ddgst": false 00:32:06.725 }, 00:32:06.725 "method": "bdev_nvme_attach_controller" 00:32:06.725 }' 00:32:06.725 14:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:32:06.725 14:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:32:06.725 "params": { 00:32:06.725 "name": "Nvme1", 00:32:06.725 "trtype": "tcp", 00:32:06.725 "traddr": "10.0.0.2", 00:32:06.725 "adrfam": "ipv4", 00:32:06.725 "trsvcid": "4420", 00:32:06.725 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:06.725 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:06.725 "hdgst": false, 00:32:06.725 "ddgst": false 00:32:06.725 }, 00:32:06.725 "method": "bdev_nvme_attach_controller" 
00:32:06.725 }' 00:32:06.725 [2024-10-14 14:45:47.360966] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:32:06.725 [2024-10-14 14:45:47.360967] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:32:06.725 [2024-10-14 14:45:47.361019] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:32:06.725 [2024-10-14 14:45:47.361020] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:32:06.725 [2024-10-14 14:45:47.361765] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:32:06.725 [2024-10-14 14:45:47.361810] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:32:06.725 [2024-10-14 14:45:47.365841] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 
00:32:06.725 [2024-10-14 14:45:47.365886] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:32:06.985 [2024-10-14 14:45:47.507125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:06.985 [2024-10-14 14:45:47.535960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:32:06.985 [2024-10-14 14:45:47.565634] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:06.985 [2024-10-14 14:45:47.594204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:32:06.985 [2024-10-14 14:45:47.623106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:06.985 [2024-10-14 14:45:47.652144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:32:06.985 [2024-10-14 14:45:47.672528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:06.985 [2024-10-14 14:45:47.700505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:32:07.245 Running I/O for 1 seconds... 00:32:07.245 Running I/O for 1 seconds... 00:32:07.245 Running I/O for 1 seconds... 00:32:07.245 Running I/O for 1 seconds... 
00:32:08.185 13322.00 IOPS, 52.04 MiB/s 00:32:08.185 Latency(us) 00:32:08.185 [2024-10-14T12:45:48.912Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:08.185 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:32:08.185 Nvme1n1 : 1.01 13337.59 52.10 0.00 0.00 9557.66 2607.79 17039.36 00:32:08.185 [2024-10-14T12:45:48.912Z] =================================================================================================================== 00:32:08.185 [2024-10-14T12:45:48.912Z] Total : 13337.59 52.10 0.00 0.00 9557.66 2607.79 17039.36 00:32:08.185 182848.00 IOPS, 714.25 MiB/s 00:32:08.185 Latency(us) 00:32:08.185 [2024-10-14T12:45:48.912Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:08.185 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:32:08.185 Nvme1n1 : 1.00 182475.90 712.80 0.00 0.00 697.57 314.03 2034.35 00:32:08.185 [2024-10-14T12:45:48.912Z] =================================================================================================================== 00:32:08.185 [2024-10-14T12:45:48.913Z] Total : 182475.90 712.80 0.00 0.00 697.57 314.03 2034.35 00:32:08.186 12298.00 IOPS, 48.04 MiB/s 00:32:08.186 Latency(us) 00:32:08.186 [2024-10-14T12:45:48.913Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:08.186 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:32:08.186 Nvme1n1 : 1.01 12408.39 48.47 0.00 0.00 10289.44 3263.15 18350.08 00:32:08.186 [2024-10-14T12:45:48.913Z] =================================================================================================================== 00:32:08.186 [2024-10-14T12:45:48.913Z] Total : 12408.39 48.47 0.00 0.00 10289.44 3263.15 18350.08 00:32:08.446 12630.00 IOPS, 49.34 MiB/s 00:32:08.446 Latency(us) 00:32:08.446 [2024-10-14T12:45:49.173Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:08.446 Job: Nvme1n1 (Core Mask 
0x20, workload: read, depth: 128, IO size: 4096) 00:32:08.446 Nvme1n1 : 1.01 12681.50 49.54 0.00 0.00 10061.11 4341.76 14417.92 00:32:08.446 [2024-10-14T12:45:49.173Z] =================================================================================================================== 00:32:08.446 [2024-10-14T12:45:49.173Z] Total : 12681.50 49.54 0.00 0.00 10061.11 4341.76 14417.92 00:32:08.446 14:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3630215 00:32:08.446 14:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3630217 00:32:08.446 14:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3630220 00:32:08.446 14:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:08.446 14:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.446 14:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:08.446 14:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.446 14:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:32:08.446 14:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:32:08.446 14:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # nvmfcleanup 00:32:08.446 14:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:32:08.446 14:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:08.446 14:45:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:32:08.446 14:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:08.446 14:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:08.446 rmmod nvme_tcp 00:32:08.446 rmmod nvme_fabrics 00:32:08.446 rmmod nvme_keyring 00:32:08.446 14:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:08.446 14:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:32:08.446 14:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:32:08.446 14:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@515 -- # '[' -n 3629905 ']' 00:32:08.446 14:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # killprocess 3629905 00:32:08.446 14:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 3629905 ']' 00:32:08.446 14:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 3629905 00:32:08.446 14:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:32:08.446 14:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:08.446 14:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3629905 00:32:08.706 14:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:08.706 14:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:08.706 14:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3629905' 00:32:08.706 killing process with pid 3629905 00:32:08.706 14:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 3629905 00:32:08.706 14:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 3629905 00:32:08.706 14:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:32:08.706 14:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:08.706 14:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:08.706 14:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:32:08.706 14:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-save 00:32:08.706 14:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:32:08.706 14:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-restore 00:32:08.706 14:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:08.706 14:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:08.706 14:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:08.706 14:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:08.706 
14:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:11.254 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:11.254 00:32:11.254 real 0m12.856s 00:32:11.254 user 0m14.894s 00:32:11.254 sys 0m7.400s 00:32:11.254 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:11.254 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:11.254 ************************************ 00:32:11.254 END TEST nvmf_bdev_io_wait 00:32:11.254 ************************************ 00:32:11.254 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:32:11.254 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:32:11.254 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:11.254 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:11.254 ************************************ 00:32:11.254 START TEST nvmf_queue_depth 00:32:11.254 ************************************ 00:32:11.254 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:32:11.254 * Looking for test storage... 
00:32:11.254 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:11.254 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:11.254 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:32:11.254 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:11.254 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:11.254 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:11.254 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:11.254 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:11.254 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:32:11.254 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:32:11.254 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:32:11.254 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:32:11.254 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:32:11.254 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:32:11.254 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:32:11.254 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:32:11.254 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:32:11.254 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:32:11.254 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:11.254 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:11.254 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:32:11.254 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:32:11.254 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:11.254 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:32:11.254 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:32:11.254 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:32:11.254 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:32:11.254 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:11.254 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:32:11.254 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:32:11.254 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:11.254 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 00:32:11.254 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:32:11.254 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:11.254 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:11.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:11.254 --rc genhtml_branch_coverage=1 00:32:11.254 --rc genhtml_function_coverage=1 00:32:11.254 --rc genhtml_legend=1 00:32:11.254 --rc geninfo_all_blocks=1 00:32:11.254 --rc geninfo_unexecuted_blocks=1 00:32:11.254 00:32:11.254 ' 00:32:11.254 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:11.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:11.254 --rc genhtml_branch_coverage=1 00:32:11.254 --rc genhtml_function_coverage=1 00:32:11.254 --rc genhtml_legend=1 00:32:11.254 --rc geninfo_all_blocks=1 00:32:11.254 --rc geninfo_unexecuted_blocks=1 00:32:11.254 00:32:11.254 ' 00:32:11.254 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:11.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:11.254 --rc genhtml_branch_coverage=1 00:32:11.254 --rc genhtml_function_coverage=1 00:32:11.254 --rc genhtml_legend=1 00:32:11.254 --rc geninfo_all_blocks=1 00:32:11.254 --rc geninfo_unexecuted_blocks=1 00:32:11.254 00:32:11.254 ' 00:32:11.254 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:11.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:11.254 --rc genhtml_branch_coverage=1 00:32:11.254 --rc genhtml_function_coverage=1 00:32:11.254 --rc genhtml_legend=1 00:32:11.254 --rc 
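The version check traced above (scripts/common.sh `cmp_versions`, invoked as `lt 1.15 2`) splits each version string on `.`, `-`, and `:` into arrays and compares the components numerically, left to right, treating missing components as 0. A standalone sketch of that logic (this `lt` is a re-derivation from the trace, not SPDK's exact source):

```shell
#!/usr/bin/env bash
# Component-wise version comparison, as traced from scripts/common.sh:
# split on . - : into arrays, then compare each numeric component.
lt() {
    local IFS='.-:'
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local len=${#ver1[@]} v
    if (( ${#ver2[@]} > len )); then len=${#ver2[@]}; fi
    for (( v = 0; v < len; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing components act as 0
        if (( a < b )); then return 0; fi
        if (( a > b )); then return 1; fi
    done
    return 1   # equal is not strictly less-than
}

if lt 1.15 2; then echo "lcov 1.15 predates 2"; fi
```

Because `lt 1.15 2` succeeds here, the trace goes on to set the branch/function-coverage `LCOV_OPTS` for the older lcov.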
geninfo_all_blocks=1 00:32:11.254 --rc geninfo_unexecuted_blocks=1 00:32:11.254 00:32:11.254 ' 00:32:11.254 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:11.254 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:32:11.254 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:11.254 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:11.254 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:11.254 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:11.254 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:11.254 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:11.254 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:11.254 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:11.254 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:11.254 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:11.254 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:11.254 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:11.254 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:11.254 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:11.254 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:11.254 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:11.254 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:11.254 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:32:11.254 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:11.254 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:11.254 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:11.254 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:11.254 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:11.255 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:11.255 14:45:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:32:11.255 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:11.255 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:32:11.255 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:11.255 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:11.255 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:11.255 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:11.255 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:11.255 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:11.255 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:11.255 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:11.255 14:45:51 
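Each `source` of paths/export.sh above prepends the same toolchain directories again, so by paths/export.sh@6 the exported PATH carries six copies of `/opt/go/1.21.1/bin` and friends; lookup still works because only the first hit counts. A sketch of collapsing such duplicates while preserving first-seen order (`dedup_path` is an illustrative helper, not part of the SPDK scripts):

```shell
# Collapse duplicate entries in a PATH-like string, keeping the first
# occurrence of each directory (search order is what matters).
dedup_path() {
    local out='' dir
    local IFS=':'
    for dir in $1; do
        case ":$out:" in
            *":$dir:"*) ;;                    # already present, skip
            *) out="${out:+$out:}$dir" ;;
        esac
    done
    printf '%s\n' "$out"
}

dedup_path "/opt/go/1.21.1/bin:/usr/bin:/opt/go/1.21.1/bin:/bin"
# → /opt/go/1.21.1/bin:/usr/bin:/bin
```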
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:11.255 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:11.255 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:32:11.255 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:32:11.255 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:11.255 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:32:11.255 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:11.255 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:11.255 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:11.255 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:11.255 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # remove_spdk_ns 00:32:11.255 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:11.255 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:11.255 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:11.255 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:32:11.255 14:45:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:32:11.255 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:32:11.255 14:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:19.671 14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:19.671 14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:32:19.671 14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:19.671 14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:19.671 14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:19.671 14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:19.671 14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:19.671 14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:32:19.671 14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:19.671 14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:32:19.671 14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:32:19.671 14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:32:19.671 14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:32:19.671 
14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:32:19.671 14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:32:19.671 14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:19.671 14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:19.671 14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:19.671 14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:19.672 14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:19.672 14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:19.672 14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:19.672 14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:19.672 14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:19.672 14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:19.672 14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:19.672 14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:19.672 14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:19.672 14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:19.672 14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:19.672 14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:19.672 14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:19.672 14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:19.672 14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:19.672 14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:19.672 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:19.672 14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:19.672 14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:19.672 14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:19.672 14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:19.672 14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:19.672 14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:19.672 14:45:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:19.672 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:19.672 14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:19.672 14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:19.672 14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:19.672 14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:19.672 14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:19.672 14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:19.672 14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:19.672 14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:19.672 14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:19.672 14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:19.672 14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:19.672 14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:19.672 14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:19.672 14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 
)) 00:32:19.672 14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:19.672 14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:19.672 Found net devices under 0000:31:00.0: cvl_0_0 00:32:19.672 14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:19.672 14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:19.672 14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:19.672 14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:19.672 14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:19.672 14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:19.672 14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:19.672 14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:19.672 14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:19.672 Found net devices under 0000:31:00.1: cvl_0_1 00:32:19.672 14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:19.672 14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:32:19.672 14:45:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # is_hw=yes 00:32:19.672 14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:32:19.672 14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:32:19.672 14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:32:19.672 14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:19.672 14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:19.672 14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:19.672 14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:19.672 14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:19.672 14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:19.672 14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:19.672 14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:19.672 14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:19.672 14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:19.672 14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:32:19.672 14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:19.672 14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:19.672 14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:19.672 14:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:19.672 14:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:19.672 14:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:19.672 14:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:19.672 14:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:19.672 14:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:19.672 14:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:19.672 14:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:19.672 14:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:19.672 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:19.672 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.684 ms 00:32:19.672 00:32:19.672 --- 10.0.0.2 ping statistics --- 00:32:19.672 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:19.672 rtt min/avg/max/mdev = 0.684/0.684/0.684/0.000 ms 00:32:19.672 14:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:19.672 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:19.672 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:32:19.672 00:32:19.672 --- 10.0.0.1 ping statistics --- 00:32:19.672 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:19.672 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:32:19.672 14:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:19.672 14:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@448 -- # return 0 00:32:19.672 14:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:32:19.672 14:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:19.672 14:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:32:19.672 14:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:32:19.672 14:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:19.672 14:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:32:19.672 14:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:32:19.672 14:45:59 
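nvmf_tcp_init above validates the namespace topology with one ping in each direction and proceeds only because both report 0% packet loss. A sketch of gating on that figure by parsing ping's statistics summary (`check_ping_loss` is illustrative, not an SPDK helper; a live caller would feed it `"$(ping -c 1 "$target")"`):

```shell
# Extract the packet-loss percentage from ping's "--- statistics ---"
# summary line and succeed only when nothing was dropped.
check_ping_loss() {
    local loss
    loss=$(printf '%s\n' "$1" | awk '/packet loss/ {
        for (i = 1; i <= NF; i++)
            if ($i ~ /%$/) { sub(/%/, "", $i); print $i; exit }
    }')
    # No parsable statistics counts as failure (loss defaults to 100).
    [ "${loss:-100}" -eq 0 ]
}

stats='1 packets transmitted, 1 received, 0% packet loss, time 0ms'
if check_ping_loss "$stats"; then echo "link up, no loss"; fi
```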
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:32:19.672 14:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:32:19.672 14:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:19.672 14:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:19.672 14:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # nvmfpid=3634771 00:32:19.672 14:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # waitforlisten 3634771 00:32:19.672 14:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:32:19.673 14:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 3634771 ']' 00:32:19.673 14:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:19.673 14:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:19.673 14:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:19.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:19.673 14:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:19.673 14:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:19.673 [2024-10-14 14:45:59.328993] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:19.673 [2024-10-14 14:45:59.330161] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:32:19.673 [2024-10-14 14:45:59.330213] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:19.673 [2024-10-14 14:45:59.423648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:19.673 [2024-10-14 14:45:59.475174] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:19.673 [2024-10-14 14:45:59.475227] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:19.673 [2024-10-14 14:45:59.475235] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:19.673 [2024-10-14 14:45:59.475242] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:19.673 [2024-10-14 14:45:59.475248] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:19.673 [2024-10-14 14:45:59.476074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:19.673 [2024-10-14 14:45:59.551711] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:19.673 [2024-10-14 14:45:59.551986] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:32:19.673 14:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:19.673 14:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:32:19.673 14:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:32:19.673 14:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:19.673 14:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:19.673 14:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:19.673 14:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:19.673 14:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.673 14:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:19.673 [2024-10-14 14:46:00.192921] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:19.673 14:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.673 14:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:19.673 14:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.673 14:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:19.673 Malloc0 00:32:19.673 14:46:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.673 14:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:19.673 14:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.673 14:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:19.673 14:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.673 14:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:19.673 14:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.673 14:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:19.673 14:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.673 14:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:19.673 14:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:19.673 14:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:19.673 [2024-10-14 14:46:00.269052] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:19.673 14:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:19.673 
14:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3634998 00:32:19.673 14:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:19.673 14:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:32:19.673 14:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3634998 /var/tmp/bdevperf.sock 00:32:19.673 14:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 3634998 ']' 00:32:19.673 14:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:19.673 14:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:19.673 14:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:19.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:19.673 14:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:19.673 14:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:19.673 [2024-10-14 14:46:00.327611] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 
00:32:19.673 [2024-10-14 14:46:00.327671] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3634998 ] 00:32:19.934 [2024-10-14 14:46:00.394449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:19.934 [2024-10-14 14:46:00.438620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:20.505 14:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:20.505 14:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:32:20.505 14:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:20.505 14:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:20.505 14:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:20.765 NVMe0n1 00:32:20.765 14:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:20.765 14:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:20.765 Running I/O for 10 seconds... 
00:32:23.089 9216.00 IOPS, 36.00 MiB/s [2024-10-14T12:46:04.758Z] 9243.50 IOPS, 36.11 MiB/s [2024-10-14T12:46:05.700Z] 9726.33 IOPS, 37.99 MiB/s [2024-10-14T12:46:06.643Z] 10283.25 IOPS, 40.17 MiB/s [2024-10-14T12:46:07.585Z] 10673.00 IOPS, 41.69 MiB/s [2024-10-14T12:46:08.534Z] 10945.67 IOPS, 42.76 MiB/s [2024-10-14T12:46:09.474Z] 11132.43 IOPS, 43.49 MiB/s [2024-10-14T12:46:10.857Z] 11301.62 IOPS, 44.15 MiB/s [2024-10-14T12:46:11.428Z] 11433.33 IOPS, 44.66 MiB/s [2024-10-14T12:46:11.688Z] 11570.10 IOPS, 45.20 MiB/s 00:32:30.961 Latency(us) 00:32:30.961 [2024-10-14T12:46:11.688Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:30.961 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:32:30.961 Verification LBA range: start 0x0 length 0x4000 00:32:30.961 NVMe0n1 : 10.06 11589.32 45.27 0.00 0.00 88038.06 24357.55 65536.00 00:32:30.961 [2024-10-14T12:46:11.688Z] =================================================================================================================== 00:32:30.961 [2024-10-14T12:46:11.688Z] Total : 11589.32 45.27 0.00 0.00 88038.06 24357.55 65536.00 00:32:30.961 { 00:32:30.961 "results": [ 00:32:30.961 { 00:32:30.961 "job": "NVMe0n1", 00:32:30.961 "core_mask": "0x1", 00:32:30.961 "workload": "verify", 00:32:30.961 "status": "finished", 00:32:30.961 "verify_range": { 00:32:30.961 "start": 0, 00:32:30.961 "length": 16384 00:32:30.961 }, 00:32:30.961 "queue_depth": 1024, 00:32:30.961 "io_size": 4096, 00:32:30.961 "runtime": 10.060728, 00:32:30.961 "iops": 11589.320375225332, 00:32:30.961 "mibps": 45.27078271572395, 00:32:30.961 "io_failed": 0, 00:32:30.961 "io_timeout": 0, 00:32:30.961 "avg_latency_us": 88038.06104376614, 00:32:30.961 "min_latency_us": 24357.546666666665, 00:32:30.961 "max_latency_us": 65536.0 00:32:30.961 } 00:32:30.961 ], 00:32:30.961 "core_count": 1 00:32:30.961 } 00:32:30.961 14:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 
-- # killprocess 3634998 00:32:30.961 14:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 3634998 ']' 00:32:30.961 14:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 3634998 00:32:30.961 14:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:32:30.961 14:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:30.961 14:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3634998 00:32:30.961 14:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:30.961 14:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:30.961 14:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3634998' 00:32:30.961 killing process with pid 3634998 00:32:30.961 14:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 3634998 00:32:30.961 Received shutdown signal, test time was about 10.000000 seconds 00:32:30.961 00:32:30.961 Latency(us) 00:32:30.961 [2024-10-14T12:46:11.688Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:30.961 [2024-10-14T12:46:11.688Z] =================================================================================================================== 00:32:30.961 [2024-10-14T12:46:11.688Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:30.961 14:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 3634998 00:32:30.961 14:46:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:32:30.961 14:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:32:30.961 14:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@514 -- # nvmfcleanup 00:32:30.961 14:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:32:30.961 14:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:30.961 14:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:32:30.961 14:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:30.961 14:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:31.222 rmmod nvme_tcp 00:32:31.222 rmmod nvme_fabrics 00:32:31.222 rmmod nvme_keyring 00:32:31.222 14:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:31.222 14:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:32:31.222 14:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:32:31.222 14:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@515 -- # '[' -n 3634771 ']' 00:32:31.222 14:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # killprocess 3634771 00:32:31.222 14:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 3634771 ']' 00:32:31.222 14:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 3634771 00:32:31.222 14:46:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:32:31.222 14:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:31.222 14:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3634771 00:32:31.222 14:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:31.222 14:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:31.222 14:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3634771' 00:32:31.222 killing process with pid 3634771 00:32:31.222 14:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 3634771 00:32:31.222 14:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 3634771 00:32:31.222 14:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:32:31.222 14:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:31.222 14:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:31.222 14:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:32:31.222 14:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-save 00:32:31.222 14:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:32:31.222 14:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-restore 
00:32:31.222 14:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:31.222 14:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:31.222 14:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:31.222 14:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:31.222 14:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:33.769 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:33.769 00:32:33.769 real 0m22.561s 00:32:33.769 user 0m24.786s 00:32:33.769 sys 0m7.390s 00:32:33.769 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:33.769 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:33.769 ************************************ 00:32:33.769 END TEST nvmf_queue_depth 00:32:33.769 ************************************ 00:32:33.769 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:32:33.769 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:32:33.769 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:33.769 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:33.769 ************************************ 00:32:33.769 START 
TEST nvmf_target_multipath 00:32:33.769 ************************************ 00:32:33.769 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:32:33.769 * Looking for test storage... 00:32:33.769 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:33.769 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:33.769 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:32:33.769 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:33.769 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:33.769 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:33.769 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:33.769 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:33.769 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:32:33.769 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:32:33.769 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:32:33.769 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:32:33.769 14:46:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:32:33.769 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:32:33.769 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:32:33.769 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:33.769 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:32:33.769 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:32:33.769 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:33.769 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:33.769 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:32:33.769 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:32:33.769 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:33.769 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:32:33.769 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:32:33.769 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:32:33.769 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:32:33.769 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:33.769 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:32:33.769 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:32:33.769 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:33.769 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:33.769 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:32:33.769 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:33.769 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:33.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:33.769 --rc genhtml_branch_coverage=1 00:32:33.769 --rc genhtml_function_coverage=1 00:32:33.769 --rc genhtml_legend=1 00:32:33.769 --rc geninfo_all_blocks=1 00:32:33.769 --rc geninfo_unexecuted_blocks=1 00:32:33.769 00:32:33.769 ' 00:32:33.769 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:33.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:33.769 --rc genhtml_branch_coverage=1 00:32:33.769 --rc genhtml_function_coverage=1 00:32:33.769 --rc genhtml_legend=1 00:32:33.769 --rc geninfo_all_blocks=1 00:32:33.769 --rc geninfo_unexecuted_blocks=1 00:32:33.769 00:32:33.770 ' 00:32:33.770 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:33.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:33.770 --rc genhtml_branch_coverage=1 00:32:33.770 --rc genhtml_function_coverage=1 00:32:33.770 --rc genhtml_legend=1 00:32:33.770 --rc geninfo_all_blocks=1 00:32:33.770 --rc geninfo_unexecuted_blocks=1 00:32:33.770 00:32:33.770 ' 00:32:33.770 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:33.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:33.770 --rc genhtml_branch_coverage=1 00:32:33.770 --rc genhtml_function_coverage=1 00:32:33.770 --rc genhtml_legend=1 00:32:33.770 --rc geninfo_all_blocks=1 00:32:33.770 --rc geninfo_unexecuted_blocks=1 00:32:33.770 00:32:33.770 ' 00:32:33.770 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:33.770 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:32:33.770 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:33.770 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:33.770 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:33.770 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:33.770 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:33.770 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:33.770 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:33.770 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:33.770 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:33.770 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:33.770 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:33.770 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:33.770 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:33.770 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:33.770 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:33.770 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:33.770 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:33.770 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:32:33.770 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:33.770 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:33.770 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:33.770 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:33.770 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:33.770 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:33.770 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:32:33.770 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:33.770 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:32:33.770 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:33.770 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:33.770 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:33.770 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:33.770 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:33.770 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:33.770 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:33.770 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:33.770 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:33.770 14:46:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:33.770 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:33.770 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:33.770 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:32:33.770 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:33.770 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:32:33.770 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:33.770 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:33.770 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:33.770 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:33.770 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:32:33.770 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:33.770 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:33.770 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:33.770 14:46:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:32:33.770 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:32:33.770 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:32:33.770 14:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:32:41.921 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:41.921 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:32:41.921 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:41.921 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:41.921 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:41.921 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:41.921 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:41.921 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:32:41.921 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:41.921 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:32:41.921 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:32:41.921 14:46:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:32:41.921 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:32:41.921 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:32:41.921 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:32:41.921 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:41.921 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:41.921 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:41.921 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:41.921 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:41.921 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:41.921 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:41.921 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:41.921 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:41.921 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:41.921 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:41.921 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:41.921 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:41.921 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:41.921 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:41.921 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:41.921 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:41.921 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:41.921 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:41.921 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:41.921 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:41.921 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:41.921 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:41.921 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:41.921 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:41.921 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:41.921 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:41.921 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:41.921 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:41.921 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:41.921 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:41.921 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:41.921 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:41.921 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:41.921 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:41.921 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:41.921 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:41.921 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:41.921 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:41.921 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 
-- # [[ tcp == tcp ]] 00:32:41.921 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:41.921 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:41.921 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:41.921 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:41.921 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:41.921 Found net devices under 0000:31:00.0: cvl_0_0 00:32:41.921 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:41.921 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:41.921 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:41.921 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:41.921 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:41.921 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:41.921 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:41.921 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:41.921 14:46:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:41.922 Found net devices under 0000:31:00.1: cvl_0_1 00:32:41.922 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:41.922 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:32:41.922 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # is_hw=yes 00:32:41.922 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:32:41.922 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:32:41.922 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:32:41.922 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:41.922 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:41.922 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:41.922 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:41.922 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:41.922 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:41.922 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:41.922 14:46:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:41.922 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:41.922 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:41.922 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:41.922 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:41.922 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:41.922 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:41.922 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:41.922 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:41.922 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:41.922 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:41.922 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:41.922 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:41.922 14:46:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:41.922 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:41.922 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:41.922 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:41.922 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.607 ms 00:32:41.922 00:32:41.922 --- 10.0.0.2 ping statistics --- 00:32:41.922 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:41.922 rtt min/avg/max/mdev = 0.607/0.607/0.607/0.000 ms 00:32:41.922 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:41.922 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:41.922 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.255 ms 00:32:41.922 00:32:41.922 --- 10.0.0.1 ping statistics --- 00:32:41.922 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:41.922 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:32:41.922 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:41.922 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@448 -- # return 0 00:32:41.922 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:32:41.922 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:41.922 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:32:41.922 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:32:41.922 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:41.922 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:32:41.922 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:32:41.922 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:32:41.922 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:32:41.922 only one NIC for nvmf test 00:32:41.922 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:32:41.922 14:46:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:32:41.922 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:32:41.922 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:41.922 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:32:41.922 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:41.922 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:41.922 rmmod nvme_tcp 00:32:41.922 rmmod nvme_fabrics 00:32:41.922 rmmod nvme_keyring 00:32:41.922 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:41.922 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:32:41.922 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:32:41.922 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:32:41.922 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:32:41.922 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:41.922 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:41.922 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:32:41.922 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:32:41.922 14:46:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:32:41.922 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:32:41.922 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:41.922 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:41.922 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:41.922 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:41.922 14:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:43.308 14:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:43.308 14:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:32:43.308 14:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:32:43.308 14:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:32:43.308 14:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:32:43.308 14:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:43.308 14:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:32:43.308 14:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:32:43.308 14:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:43.308 14:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:43.308 14:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:32:43.308 14:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:32:43.308 14:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:32:43.308 14:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:32:43.308 14:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:43.308 14:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:43.308 14:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:32:43.308 14:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:32:43.308 14:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:32:43.308 14:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:32:43.308 14:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:43.308 14:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:43.308 14:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:43.308 
14:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:43.308 14:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:43.308 14:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:43.308 00:32:43.308 real 0m9.810s 00:32:43.308 user 0m2.193s 00:32:43.308 sys 0m5.551s 00:32:43.308 14:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:43.308 14:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:32:43.308 ************************************ 00:32:43.308 END TEST nvmf_target_multipath 00:32:43.308 ************************************ 00:32:43.308 14:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:32:43.308 14:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:32:43.308 14:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:43.308 14:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:43.308 ************************************ 00:32:43.308 START TEST nvmf_zcopy 00:32:43.308 ************************************ 00:32:43.308 14:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:32:43.570 * Looking for test storage... 
00:32:43.570 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:43.570 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:43.570 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:32:43.570 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:43.570 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:43.570 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:43.570 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:43.570 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:43.570 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:32:43.570 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:32:43.570 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:32:43.570 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:32:43.570 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:32:43.570 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:32:43.570 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:32:43.570 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:43.570 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:32:43.570 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:32:43.570 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:43.570 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:43.570 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:32:43.570 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:32:43.570 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:43.570 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:32:43.570 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:32:43.570 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:32:43.570 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:32:43.570 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:43.570 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:32:43.570 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:32:43.570 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:43.570 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:43.570 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:32:43.570 14:46:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:43.570 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:43.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:43.570 --rc genhtml_branch_coverage=1 00:32:43.570 --rc genhtml_function_coverage=1 00:32:43.570 --rc genhtml_legend=1 00:32:43.570 --rc geninfo_all_blocks=1 00:32:43.570 --rc geninfo_unexecuted_blocks=1 00:32:43.570 00:32:43.570 ' 00:32:43.570 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:43.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:43.570 --rc genhtml_branch_coverage=1 00:32:43.570 --rc genhtml_function_coverage=1 00:32:43.570 --rc genhtml_legend=1 00:32:43.570 --rc geninfo_all_blocks=1 00:32:43.570 --rc geninfo_unexecuted_blocks=1 00:32:43.570 00:32:43.570 ' 00:32:43.570 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:43.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:43.570 --rc genhtml_branch_coverage=1 00:32:43.570 --rc genhtml_function_coverage=1 00:32:43.570 --rc genhtml_legend=1 00:32:43.570 --rc geninfo_all_blocks=1 00:32:43.570 --rc geninfo_unexecuted_blocks=1 00:32:43.570 00:32:43.570 ' 00:32:43.570 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:43.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:43.570 --rc genhtml_branch_coverage=1 00:32:43.570 --rc genhtml_function_coverage=1 00:32:43.570 --rc genhtml_legend=1 00:32:43.570 --rc geninfo_all_blocks=1 00:32:43.570 --rc geninfo_unexecuted_blocks=1 00:32:43.570 00:32:43.570 ' 00:32:43.571 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:43.571 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:32:43.571 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:43.571 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:43.571 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:43.571 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:43.571 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:43.571 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:43.571 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:43.571 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:43.571 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:43.571 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:43.571 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:43.571 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:43.571 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:43.571 14:46:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:43.571 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:43.571 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:43.571 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:43.571 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:32:43.571 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:43.571 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:43.571 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:43.571 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:43.571 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:43.571 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:43.571 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:32:43.571 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:43.571 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:32:43.571 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:43.571 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:43.571 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:43.571 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:43.571 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:43.571 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:43.571 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:43.571 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:43.571 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:43.571 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:43.571 14:46:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:32:43.571 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:43.571 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:43.571 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:43.571 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:43.571 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # remove_spdk_ns 00:32:43.571 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:43.571 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:43.571 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:43.571 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:32:43.571 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:32:43.571 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:32:43.571 14:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:51.715 
14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:51.715 14:46:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:51.715 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:51.715 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:51.715 Found net devices under 0000:31:00.0: cvl_0_0 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@416 -- # [[ up == up ]] 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:51.715 Found net devices under 0000:31:00.1: cvl_0_1 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # is_hw=yes 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:51.715 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:51.715 14:46:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:51.716 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:51.716 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:51.716 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:51.716 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.615 ms 00:32:51.716 00:32:51.716 --- 10.0.0.2 ping statistics --- 00:32:51.716 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:51.716 rtt min/avg/max/mdev = 0.615/0.615/0.615/0.000 ms 00:32:51.716 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:51.716 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:51.716 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:32:51.716 00:32:51.716 --- 10.0.0.1 ping statistics --- 00:32:51.716 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:51.716 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:32:51.716 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:51.716 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@448 -- # return 0 00:32:51.716 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:32:51.716 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:51.716 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:32:51.716 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:32:51.716 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:51.716 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:32:51.716 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:32:51.716 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:32:51.716 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:32:51.716 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:51.716 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:51.716 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # 
nvmfpid=3645502 00:32:51.716 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # waitforlisten 3645502 00:32:51.716 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:32:51.716 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 3645502 ']' 00:32:51.716 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:51.716 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:51.716 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:51.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:51.716 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:51.716 14:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:51.716 [2024-10-14 14:46:31.711909] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:51.716 [2024-10-14 14:46:31.712918] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 
00:32:51.716 [2024-10-14 14:46:31.712958] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:51.716 [2024-10-14 14:46:31.798586] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:51.716 [2024-10-14 14:46:31.843957] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:51.716 [2024-10-14 14:46:31.844007] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:51.716 [2024-10-14 14:46:31.844015] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:51.716 [2024-10-14 14:46:31.844022] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:51.716 [2024-10-14 14:46:31.844028] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:51.716 [2024-10-14 14:46:31.844759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:51.716 [2024-10-14 14:46:31.915744] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:51.716 [2024-10-14 14:46:31.916027] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:32:51.978 14:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:51.978 14:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:32:51.978 14:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:32:51.978 14:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:51.978 14:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:51.978 14:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:51.978 14:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:32:51.978 14:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:32:51.978 14:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:51.978 14:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:51.978 [2024-10-14 14:46:32.557623] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:51.978 14:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:51.978 14:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:32:51.978 14:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:51.978 14:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:51.978 
14:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:51.978 14:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:51.978 14:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:51.978 14:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:51.978 [2024-10-14 14:46:32.585908] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:51.978 14:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:51.978 14:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:51.978 14:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:51.978 14:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:51.978 14:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:51.978 14:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:32:51.978 14:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:51.978 14:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:51.978 malloc0 00:32:51.978 14:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:51.978 14:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:32:51.978 14:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:51.978 14:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:51.978 14:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:51.978 14:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:32:51.978 14:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:32:51.978 14:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:32:51.978 14:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:32:51.978 14:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:51.978 14:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:51.978 { 00:32:51.978 "params": { 00:32:51.978 "name": "Nvme$subsystem", 00:32:51.978 "trtype": "$TEST_TRANSPORT", 00:32:51.978 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:51.978 "adrfam": "ipv4", 00:32:51.978 "trsvcid": "$NVMF_PORT", 00:32:51.978 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:51.978 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:51.978 "hdgst": ${hdgst:-false}, 00:32:51.978 "ddgst": ${ddgst:-false} 00:32:51.978 }, 00:32:51.978 "method": "bdev_nvme_attach_controller" 00:32:51.978 } 00:32:51.978 EOF 00:32:51.978 )") 00:32:51.978 14:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:32:51.978 14:46:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 00:32:51.978 14:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:32:51.978 14:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:32:51.978 "params": { 00:32:51.978 "name": "Nvme1", 00:32:51.978 "trtype": "tcp", 00:32:51.978 "traddr": "10.0.0.2", 00:32:51.978 "adrfam": "ipv4", 00:32:51.978 "trsvcid": "4420", 00:32:51.978 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:51.978 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:51.978 "hdgst": false, 00:32:51.978 "ddgst": false 00:32:51.978 }, 00:32:51.978 "method": "bdev_nvme_attach_controller" 00:32:51.978 }' 00:32:51.978 [2024-10-14 14:46:32.698287] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:32:51.978 [2024-10-14 14:46:32.698349] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3645802 ] 00:32:52.239 [2024-10-14 14:46:32.764114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:52.239 [2024-10-14 14:46:32.806726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:52.499 Running I/O for 10 seconds... 
00:32:54.381 6546.00 IOPS, 51.14 MiB/s [2024-10-14T12:46:36.050Z] 6593.50 IOPS, 51.51 MiB/s [2024-10-14T12:46:37.433Z] 6605.33 IOPS, 51.60 MiB/s [2024-10-14T12:46:38.003Z] 6611.25 IOPS, 51.65 MiB/s [2024-10-14T12:46:39.385Z] 6614.00 IOPS, 51.67 MiB/s [2024-10-14T12:46:40.325Z] 6617.00 IOPS, 51.70 MiB/s [2024-10-14T12:46:41.266Z] 6908.00 IOPS, 53.97 MiB/s [2024-10-14T12:46:42.207Z] 7239.12 IOPS, 56.56 MiB/s [2024-10-14T12:46:43.148Z] 7496.56 IOPS, 58.57 MiB/s [2024-10-14T12:46:43.148Z] 7703.10 IOPS, 60.18 MiB/s 00:33:02.421 Latency(us) 00:33:02.421 [2024-10-14T12:46:43.148Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:02.421 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:33:02.421 Verification LBA range: start 0x0 length 0x1000 00:33:02.421 Nvme1n1 : 10.01 7706.27 60.21 0.00 0.00 16557.18 2252.80 25995.95 00:33:02.421 [2024-10-14T12:46:43.148Z] =================================================================================================================== 00:33:02.421 [2024-10-14T12:46:43.148Z] Total : 7706.27 60.21 0.00 0.00 16557.18 2252.80 25995.95 00:33:02.421 14:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3647810 00:33:02.421 14:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:33:02.421 14:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:02.421 14:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:33:02.421 14:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:33:02.421 14:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:33:02.421 14:46:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:33:02.421 14:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:33:02.421 14:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:33:02.421 { 00:33:02.421 "params": { 00:33:02.421 "name": "Nvme$subsystem", 00:33:02.421 "trtype": "$TEST_TRANSPORT", 00:33:02.421 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:02.421 "adrfam": "ipv4", 00:33:02.421 "trsvcid": "$NVMF_PORT", 00:33:02.421 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:02.421 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:02.421 "hdgst": ${hdgst:-false}, 00:33:02.421 "ddgst": ${ddgst:-false} 00:33:02.421 }, 00:33:02.421 "method": "bdev_nvme_attach_controller" 00:33:02.421 } 00:33:02.421 EOF 00:33:02.421 )") 00:33:02.421 [2024-10-14 14:46:43.141157] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.421 [2024-10-14 14:46:43.141185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.421 14:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:33:02.421 14:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 
00:33:02.682 14:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:33:02.682 14:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:33:02.682 "params": { 00:33:02.682 "name": "Nvme1", 00:33:02.682 "trtype": "tcp", 00:33:02.682 "traddr": "10.0.0.2", 00:33:02.682 "adrfam": "ipv4", 00:33:02.682 "trsvcid": "4420", 00:33:02.682 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:02.682 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:02.682 "hdgst": false, 00:33:02.682 "ddgst": false 00:33:02.682 }, 00:33:02.682 "method": "bdev_nvme_attach_controller" 00:33:02.682 }' 00:33:02.682 [2024-10-14 14:46:43.153122] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.682 [2024-10-14 14:46:43.153130] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.682 [2024-10-14 14:46:43.165119] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.682 [2024-10-14 14:46:43.165126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.682 [2024-10-14 14:46:43.177119] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.682 [2024-10-14 14:46:43.177127] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.682 [2024-10-14 14:46:43.188075] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 
00:33:02.682 [2024-10-14 14:46:43.188132] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3647810 ] 00:33:02.682 [2024-10-14 14:46:43.189119] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.682 [2024-10-14 14:46:43.189127] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.682 [2024-10-14 14:46:43.201119] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.682 [2024-10-14 14:46:43.201127] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.682 [2024-10-14 14:46:43.213119] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.682 [2024-10-14 14:46:43.213125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.682 [2024-10-14 14:46:43.225119] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.682 [2024-10-14 14:46:43.225125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.682 [2024-10-14 14:46:43.237118] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.682 [2024-10-14 14:46:43.237125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.682 [2024-10-14 14:46:43.249117] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.682 [2024-10-14 14:46:43.249124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.682 [2024-10-14 14:46:43.249167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:02.682 [2024-10-14 14:46:43.261120] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:33:02.682 [2024-10-14 14:46:43.261130] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.682 [2024-10-14 14:46:43.273119] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.682 [2024-10-14 14:46:43.273126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.682 [2024-10-14 14:46:43.284709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:02.682 [2024-10-14 14:46:43.285119] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.682 [2024-10-14 14:46:43.285126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.682 [2024-10-14 14:46:43.297127] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.682 [2024-10-14 14:46:43.297136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.682 [2024-10-14 14:46:43.309126] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.682 [2024-10-14 14:46:43.309137] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.682 [2024-10-14 14:46:43.321122] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.682 [2024-10-14 14:46:43.321133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.682 [2024-10-14 14:46:43.333120] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.682 [2024-10-14 14:46:43.333128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.682 [2024-10-14 14:46:43.345127] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.682 [2024-10-14 14:46:43.345135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.682 [2024-10-14 14:46:43.357128] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.682 [2024-10-14 14:46:43.357146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.682 [2024-10-14 14:46:43.369121] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.682 [2024-10-14 14:46:43.369130] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.682 [2024-10-14 14:46:43.381121] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.682 [2024-10-14 14:46:43.381131] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.682 [2024-10-14 14:46:43.393119] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.682 [2024-10-14 14:46:43.393128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.682 [2024-10-14 14:46:43.405120] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.682 [2024-10-14 14:46:43.405127] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.943 [2024-10-14 14:46:43.417120] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.943 [2024-10-14 14:46:43.417129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.943 [2024-10-14 14:46:43.429118] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.943 [2024-10-14 14:46:43.429126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.943 [2024-10-14 14:46:43.441119] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.943 [2024-10-14 14:46:43.441128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.943 [2024-10-14 14:46:43.453118] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:33:02.943 [2024-10-14 14:46:43.453125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.943 [2024-10-14 14:46:43.465117] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.943 [2024-10-14 14:46:43.465124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.943 [2024-10-14 14:46:43.477118] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.943 [2024-10-14 14:46:43.477125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.943 [2024-10-14 14:46:43.489119] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.943 [2024-10-14 14:46:43.489128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.943 [2024-10-14 14:46:43.501118] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.943 [2024-10-14 14:46:43.501124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.943 [2024-10-14 14:46:43.513118] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.943 [2024-10-14 14:46:43.513124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.943 [2024-10-14 14:46:43.525119] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.943 [2024-10-14 14:46:43.525126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.943 [2024-10-14 14:46:43.537126] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.943 [2024-10-14 14:46:43.537140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.943 Running I/O for 5 seconds... 
00:33:02.943 [2024-10-14 14:46:43.553257] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.943 [2024-10-14 14:46:43.553273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.943 [2024-10-14 14:46:43.565841] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.943 [2024-10-14 14:46:43.565857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.943 [2024-10-14 14:46:43.580651] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.943 [2024-10-14 14:46:43.580668] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.943 [2024-10-14 14:46:43.593454] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.943 [2024-10-14 14:46:43.593469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.943 [2024-10-14 14:46:43.608328] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.943 [2024-10-14 14:46:43.608344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.943 [2024-10-14 14:46:43.621013] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.943 [2024-10-14 14:46:43.621028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.943 [2024-10-14 14:46:43.633095] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.943 [2024-10-14 14:46:43.633110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.943 [2024-10-14 14:46:43.646144] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.943 [2024-10-14 14:46:43.646158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.943 [2024-10-14 14:46:43.659993] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.943 [2024-10-14 14:46:43.660008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.943 [2024-10-14 14:46:43.672915] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.943 [2024-10-14 14:46:43.672929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.204 [2024-10-14 14:46:43.685399] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.204 [2024-10-14 14:46:43.685413] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.204 [2024-10-14 14:46:43.699756] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.204 [2024-10-14 14:46:43.699771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.204 [2024-10-14 14:46:43.712635] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.204 [2024-10-14 14:46:43.712650] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.204 [2024-10-14 14:46:43.725654] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.204 [2024-10-14 14:46:43.725668] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.204 [2024-10-14 14:46:43.740398] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.204 [2024-10-14 14:46:43.740412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.204 [2024-10-14 14:46:43.753191] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.204 [2024-10-14 14:46:43.753205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.204 [2024-10-14 14:46:43.766139] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:33:03.204 [2024-10-14 14:46:43.766154] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.204 [2024-10-14 14:46:43.780471] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.204 [2024-10-14 14:46:43.780485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.204 [2024-10-14 14:46:43.792857] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.204 [2024-10-14 14:46:43.792872] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.204 [2024-10-14 14:46:43.805816] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.204 [2024-10-14 14:46:43.805830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.204 [2024-10-14 14:46:43.820244] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.204 [2024-10-14 14:46:43.820259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.204 [2024-10-14 14:46:43.833554] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.204 [2024-10-14 14:46:43.833568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.204 [2024-10-14 14:46:43.848440] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.204 [2024-10-14 14:46:43.848455] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.204 [2024-10-14 14:46:43.861071] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.204 [2024-10-14 14:46:43.861086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.204 [2024-10-14 14:46:43.873430] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.204 
[2024-10-14 14:46:43.873444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.204 [2024-10-14 14:46:43.888260] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.204 [2024-10-14 14:46:43.888274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.204 [2024-10-14 14:46:43.901163] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.204 [2024-10-14 14:46:43.901178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.204 [2024-10-14 14:46:43.913546] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.204 [2024-10-14 14:46:43.913560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.204 [2024-10-14 14:46:43.928264] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.204 [2024-10-14 14:46:43.928278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.465 [2024-10-14 14:46:43.941293] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.465 [2024-10-14 14:46:43.941307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.465 [2024-10-14 14:46:43.956468] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.465 [2024-10-14 14:46:43.956483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.465 [2024-10-14 14:46:43.969457] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.465 [2024-10-14 14:46:43.969472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.465 [2024-10-14 14:46:43.984364] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.465 [2024-10-14 14:46:43.984379] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.465 [2024-10-14 14:46:43.997332] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.465 [2024-10-14 14:46:43.997347] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.465 [2024-10-14 14:46:44.008859] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.465 [2024-10-14 14:46:44.008874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.465 [2024-10-14 14:46:44.022134] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.465 [2024-10-14 14:46:44.022149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.465 [2024-10-14 14:46:44.036648] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.465 [2024-10-14 14:46:44.036663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.465 [2024-10-14 14:46:44.049648] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.465 [2024-10-14 14:46:44.049663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.465 [2024-10-14 14:46:44.064372] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.465 [2024-10-14 14:46:44.064387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.465 [2024-10-14 14:46:44.076948] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.466 [2024-10-14 14:46:44.076963] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.466 [2024-10-14 14:46:44.089211] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.466 [2024-10-14 14:46:44.089233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:33:03.466 [2024-10-14 14:46:44.101785] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.466 [2024-10-14 14:46:44.101801] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.466 [2024-10-14 14:46:44.116421] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.466 [2024-10-14 14:46:44.116436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.466 [2024-10-14 14:46:44.129004] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.466 [2024-10-14 14:46:44.129019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.466 [2024-10-14 14:46:44.141360] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.466 [2024-10-14 14:46:44.141374] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.466 [2024-10-14 14:46:44.156931] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.466 [2024-10-14 14:46:44.156946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.466 [2024-10-14 14:46:44.169591] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.466 [2024-10-14 14:46:44.169606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.466 [2024-10-14 14:46:44.184580] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.466 [2024-10-14 14:46:44.184595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.726 [2024-10-14 14:46:44.197420] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.726 [2024-10-14 14:46:44.197434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.726 [2024-10-14 14:46:44.212588] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.726 [2024-10-14 14:46:44.212603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.726 [2024-10-14 14:46:44.225926] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.726 [2024-10-14 14:46:44.225941] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.726 [2024-10-14 14:46:44.240294] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.726 [2024-10-14 14:46:44.240309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.726 [2024-10-14 14:46:44.253541] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.726 [2024-10-14 14:46:44.253556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.726 [2024-10-14 14:46:44.268475] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.726 [2024-10-14 14:46:44.268491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.726 [2024-10-14 14:46:44.281186] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.726 [2024-10-14 14:46:44.281202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.726 [2024-10-14 14:46:44.293274] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.726 [2024-10-14 14:46:44.293288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.726 [2024-10-14 14:46:44.308383] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.726 [2024-10-14 14:46:44.308398] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.726 [2024-10-14 14:46:44.321252] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:33:03.726 [2024-10-14 14:46:44.321267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.727 [2024-10-14 14:46:44.333650] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.727 [2024-10-14 14:46:44.333664] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.727 [2024-10-14 14:46:44.348427] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.727 [2024-10-14 14:46:44.348447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.727 [2024-10-14 14:46:44.361380] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.727 [2024-10-14 14:46:44.361395] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.727 [2024-10-14 14:46:44.376636] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.727 [2024-10-14 14:46:44.376651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.727 [2024-10-14 14:46:44.389526] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.727 [2024-10-14 14:46:44.389541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.727 [2024-10-14 14:46:44.404578] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.727 [2024-10-14 14:46:44.404593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.727 [2024-10-14 14:46:44.417270] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.727 [2024-10-14 14:46:44.417285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.727 [2024-10-14 14:46:44.429132] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.727 
[2024-10-14 14:46:44.429147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.727 [2024-10-14 14:46:44.441622] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.727 [2024-10-14 14:46:44.441636] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.988 [2024-10-14 14:46:44.456879] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.988 [2024-10-14 14:46:44.456895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.988 [2024-10-14 14:46:44.469515] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.988 [2024-10-14 14:46:44.469529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.988 [2024-10-14 14:46:44.484249] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.988 [2024-10-14 14:46:44.484265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.988 [2024-10-14 14:46:44.497172] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.988 [2024-10-14 14:46:44.497187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.988 [2024-10-14 14:46:44.510313] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.988 [2024-10-14 14:46:44.510328] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.988 [2024-10-14 14:46:44.524684] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.988 [2024-10-14 14:46:44.524699] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.988 [2024-10-14 14:46:44.537518] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.988 [2024-10-14 14:46:44.537532] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.988 [2024-10-14 14:46:44.552057] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.988 [2024-10-14 14:46:44.552077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.988 18787.00 IOPS, 146.77 MiB/s [2024-10-14T12:46:44.715Z] [2024-10-14 14:46:44.565296] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.988 [2024-10-14 14:46:44.565311] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.988 [2024-10-14 14:46:44.577865] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.988 [2024-10-14 14:46:44.577881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.988 [2024-10-14 14:46:44.591959] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.988 [2024-10-14 14:46:44.591974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.988 [2024-10-14 14:46:44.604577] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.988 [2024-10-14 14:46:44.604597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.988 [2024-10-14 14:46:44.617342] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.988 [2024-10-14 14:46:44.617356] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.988 [2024-10-14 14:46:44.632230] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.988 [2024-10-14 14:46:44.632246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.988 [2024-10-14 14:46:44.645246] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.988 [2024-10-14 14:46:44.645261] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.988 [2024-10-14 14:46:44.657558] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.988 [2024-10-14 14:46:44.657573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same "Requested NSID 1 already in use" / "Unable to add namespace" error pair repeats every ~13 ms from 14:46:44.672375 through 14:46:46.753640 ...]
00:33:05.029 18827.50 IOPS, 147.09 MiB/s [2024-10-14T12:46:45.756Z]
00:33:06.073 18815.33 IOPS, 146.99 MiB/s [2024-10-14T12:46:46.800Z]
00:33:06.073 [2024-10-14 14:46:46.768418] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.073 [2024-10-14 14:46:46.768433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.073 [2024-10-14 14:46:46.781719] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.073 [2024-10-14 14:46:46.781733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.073 [2024-10-14 14:46:46.796596] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.073 [2024-10-14 14:46:46.796611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.334 [2024-10-14 14:46:46.809219] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.334 [2024-10-14 14:46:46.809234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.334 [2024-10-14 14:46:46.821803] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.334 [2024-10-14 14:46:46.821817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.334 [2024-10-14 14:46:46.836349] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.334 [2024-10-14 14:46:46.836363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.334 [2024-10-14 14:46:46.848774] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.334 [2024-10-14 14:46:46.848788] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.334 [2024-10-14 14:46:46.861959] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.334 [2024-10-14 14:46:46.861973] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.334 [2024-10-14 14:46:46.876971] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:33:06.334 [2024-10-14 14:46:46.876986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.334 [2024-10-14 14:46:46.889849] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.334 [2024-10-14 14:46:46.889863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.334 [2024-10-14 14:46:46.904817] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.334 [2024-10-14 14:46:46.904832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.334 [2024-10-14 14:46:46.917663] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.334 [2024-10-14 14:46:46.917678] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.334 [2024-10-14 14:46:46.932336] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.334 [2024-10-14 14:46:46.932351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.334 [2024-10-14 14:46:46.945342] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.334 [2024-10-14 14:46:46.945357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.334 [2024-10-14 14:46:46.960377] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.334 [2024-10-14 14:46:46.960392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.334 [2024-10-14 14:46:46.972885] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.334 [2024-10-14 14:46:46.972899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.334 [2024-10-14 14:46:46.985330] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.334 
[2024-10-14 14:46:46.985344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.334 [2024-10-14 14:46:47.000494] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.334 [2024-10-14 14:46:47.000509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.334 [2024-10-14 14:46:47.013891] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.334 [2024-10-14 14:46:47.013906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.334 [2024-10-14 14:46:47.028791] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.334 [2024-10-14 14:46:47.028806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.334 [2024-10-14 14:46:47.041460] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.334 [2024-10-14 14:46:47.041474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.334 [2024-10-14 14:46:47.056409] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.334 [2024-10-14 14:46:47.056424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.596 [2024-10-14 14:46:47.069436] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.596 [2024-10-14 14:46:47.069451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.596 [2024-10-14 14:46:47.084839] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.596 [2024-10-14 14:46:47.084854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.596 [2024-10-14 14:46:47.097553] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.596 [2024-10-14 14:46:47.097568] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.596 [2024-10-14 14:46:47.112550] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.596 [2024-10-14 14:46:47.112565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.596 [2024-10-14 14:46:47.124973] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.596 [2024-10-14 14:46:47.124988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.596 [2024-10-14 14:46:47.137471] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.596 [2024-10-14 14:46:47.137485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.596 [2024-10-14 14:46:47.152837] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.596 [2024-10-14 14:46:47.152852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.596 [2024-10-14 14:46:47.165481] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.596 [2024-10-14 14:46:47.165495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.596 [2024-10-14 14:46:47.180220] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.596 [2024-10-14 14:46:47.180234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.596 [2024-10-14 14:46:47.192779] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.596 [2024-10-14 14:46:47.192794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.596 [2024-10-14 14:46:47.204841] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.596 [2024-10-14 14:46:47.204856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:33:06.596 [2024-10-14 14:46:47.218255] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.596 [2024-10-14 14:46:47.218269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.596 [2024-10-14 14:46:47.233043] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.596 [2024-10-14 14:46:47.233058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.596 [2024-10-14 14:46:47.245936] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.596 [2024-10-14 14:46:47.245950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.596 [2024-10-14 14:46:47.260070] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.596 [2024-10-14 14:46:47.260085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.596 [2024-10-14 14:46:47.272677] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.596 [2024-10-14 14:46:47.272692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.596 [2024-10-14 14:46:47.285730] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.596 [2024-10-14 14:46:47.285745] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.596 [2024-10-14 14:46:47.300336] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.596 [2024-10-14 14:46:47.300350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.596 [2024-10-14 14:46:47.313555] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.596 [2024-10-14 14:46:47.313570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.857 [2024-10-14 14:46:47.328884] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.857 [2024-10-14 14:46:47.328900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.857 [2024-10-14 14:46:47.341747] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.857 [2024-10-14 14:46:47.341762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.857 [2024-10-14 14:46:47.356438] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.857 [2024-10-14 14:46:47.356453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.857 [2024-10-14 14:46:47.369439] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.857 [2024-10-14 14:46:47.369453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.858 [2024-10-14 14:46:47.384841] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.858 [2024-10-14 14:46:47.384856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.858 [2024-10-14 14:46:47.397300] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.858 [2024-10-14 14:46:47.397314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.858 [2024-10-14 14:46:47.412203] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.858 [2024-10-14 14:46:47.412219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.858 [2024-10-14 14:46:47.425343] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.858 [2024-10-14 14:46:47.425360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.858 [2024-10-14 14:46:47.440644] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:33:06.858 [2024-10-14 14:46:47.440659] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.858 [2024-10-14 14:46:47.453331] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.858 [2024-10-14 14:46:47.453345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.858 [2024-10-14 14:46:47.468183] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.858 [2024-10-14 14:46:47.468198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.858 [2024-10-14 14:46:47.480728] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.858 [2024-10-14 14:46:47.480743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.858 [2024-10-14 14:46:47.493303] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.858 [2024-10-14 14:46:47.493318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.858 [2024-10-14 14:46:47.505916] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.858 [2024-10-14 14:46:47.505930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.858 [2024-10-14 14:46:47.521050] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.858 [2024-10-14 14:46:47.521069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.858 [2024-10-14 14:46:47.533701] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.858 [2024-10-14 14:46:47.533716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.858 [2024-10-14 14:46:47.547756] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.858 
[2024-10-14 14:46:47.547771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.858 18834.00 IOPS, 147.14 MiB/s [2024-10-14T12:46:47.585Z] [2024-10-14 14:46:47.560801] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.858 [2024-10-14 14:46:47.560817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.858 [2024-10-14 14:46:47.573001] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.858 [2024-10-14 14:46:47.573016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.858 [2024-10-14 14:46:47.585928] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.858 [2024-10-14 14:46:47.585943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.119 [2024-10-14 14:46:47.600651] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.119 [2024-10-14 14:46:47.600666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.119 [2024-10-14 14:46:47.613460] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.119 [2024-10-14 14:46:47.613474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.119 [2024-10-14 14:46:47.628590] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.119 [2024-10-14 14:46:47.628605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.119 [2024-10-14 14:46:47.641349] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.119 [2024-10-14 14:46:47.641363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.119 [2024-10-14 14:46:47.656483] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.119 
[2024-10-14 14:46:47.656498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.119 [2024-10-14 14:46:47.669306] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.119 [2024-10-14 14:46:47.669320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.119 [2024-10-14 14:46:47.684369] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.119 [2024-10-14 14:46:47.684388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.119 [2024-10-14 14:46:47.697322] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.119 [2024-10-14 14:46:47.697337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.119 [2024-10-14 14:46:47.709658] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.119 [2024-10-14 14:46:47.709672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.119 [2024-10-14 14:46:47.724548] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.119 [2024-10-14 14:46:47.724563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.119 [2024-10-14 14:46:47.737753] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.120 [2024-10-14 14:46:47.737768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.120 [2024-10-14 14:46:47.752259] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.120 [2024-10-14 14:46:47.752275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.120 [2024-10-14 14:46:47.765182] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.120 [2024-10-14 14:46:47.765197] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.120 [2024-10-14 14:46:47.778287] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.120 [2024-10-14 14:46:47.778303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.120 [2024-10-14 14:46:47.793131] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.120 [2024-10-14 14:46:47.793146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.120 [2024-10-14 14:46:47.805825] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.120 [2024-10-14 14:46:47.805840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.120 [2024-10-14 14:46:47.819876] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.120 [2024-10-14 14:46:47.819891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.120 [2024-10-14 14:46:47.832813] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.120 [2024-10-14 14:46:47.832828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.120 [2024-10-14 14:46:47.845243] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.120 [2024-10-14 14:46:47.845257] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.383 [2024-10-14 14:46:47.860095] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.383 [2024-10-14 14:46:47.860111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.383 [2024-10-14 14:46:47.873456] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.383 [2024-10-14 14:46:47.873471] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:33:07.383 [2024-10-14 14:46:47.888596] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.383 [2024-10-14 14:46:47.888612] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.383 [2024-10-14 14:46:47.901399] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.383 [2024-10-14 14:46:47.901414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.383 [2024-10-14 14:46:47.916079] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.383 [2024-10-14 14:46:47.916094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.383 [2024-10-14 14:46:47.929456] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.383 [2024-10-14 14:46:47.929471] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.383 [2024-10-14 14:46:47.944388] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.383 [2024-10-14 14:46:47.944410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.383 [2024-10-14 14:46:47.957274] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.383 [2024-10-14 14:46:47.957289] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.383 [2024-10-14 14:46:47.969264] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.383 [2024-10-14 14:46:47.969278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.383 [2024-10-14 14:46:47.984683] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.383 [2024-10-14 14:46:47.984699] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.383 [2024-10-14 14:46:47.997277] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.383 [2024-10-14 14:46:47.997292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.383 [2024-10-14 14:46:48.012105] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.383 [2024-10-14 14:46:48.012120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.383 [2024-10-14 14:46:48.024986] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.383 [2024-10-14 14:46:48.025001] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.383 [2024-10-14 14:46:48.036861] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.383 [2024-10-14 14:46:48.036876] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.383 [2024-10-14 14:46:48.050042] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.383 [2024-10-14 14:46:48.050058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.383 [2024-10-14 14:46:48.063578] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.383 [2024-10-14 14:46:48.063593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.383 [2024-10-14 14:46:48.077007] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.383 [2024-10-14 14:46:48.077021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.383 [2024-10-14 14:46:48.089171] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.383 [2024-10-14 14:46:48.089187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.383 [2024-10-14 14:46:48.101992] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:33:07.383 [2024-10-14 14:46:48.102007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.644 [2024-10-14 14:46:48.115809] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.644 [2024-10-14 14:46:48.115824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.644 [2024-10-14 14:46:48.129124] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.644 [2024-10-14 14:46:48.129139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.644 [2024-10-14 14:46:48.140754] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.644 [2024-10-14 14:46:48.140769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.644 [2024-10-14 14:46:48.153915] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.644 [2024-10-14 14:46:48.153929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.644 [2024-10-14 14:46:48.168501] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.644 [2024-10-14 14:46:48.168516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.644 [2024-10-14 14:46:48.181089] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.644 [2024-10-14 14:46:48.181104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.644 [2024-10-14 14:46:48.192702] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.644 [2024-10-14 14:46:48.192717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.644 [2024-10-14 14:46:48.205452] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.644 
[2024-10-14 14:46:48.205466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.644 [2024-10-14 14:46:48.219813] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.644 [2024-10-14 14:46:48.219828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.644 [2024-10-14 14:46:48.233137] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.644 [2024-10-14 14:46:48.233152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.644 [2024-10-14 14:46:48.245734] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.644 [2024-10-14 14:46:48.245748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.644 [2024-10-14 14:46:48.260548] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.644 [2024-10-14 14:46:48.260563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.644 [2024-10-14 14:46:48.273284] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.644 [2024-10-14 14:46:48.273299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.644 [2024-10-14 14:46:48.288391] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.644 [2024-10-14 14:46:48.288407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.644 [2024-10-14 14:46:48.301039] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.644 [2024-10-14 14:46:48.301054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.644 [2024-10-14 14:46:48.312781] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.644 [2024-10-14 14:46:48.312796] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.644 [2024-10-14 14:46:48.325500] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.644 [2024-10-14 14:46:48.325515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.644 [2024-10-14 14:46:48.340448] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.644 [2024-10-14 14:46:48.340463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.644 [2024-10-14 14:46:48.353539] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.644 [2024-10-14 14:46:48.353554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.644 [2024-10-14 14:46:48.368258] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.644 [2024-10-14 14:46:48.368273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.906 [2024-10-14 14:46:48.381215] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.906 [2024-10-14 14:46:48.381230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.906 [2024-10-14 14:46:48.393712] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.906 [2024-10-14 14:46:48.393727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.906 [2024-10-14 14:46:48.408799] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.906 [2024-10-14 14:46:48.408814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.906 [2024-10-14 14:46:48.421716] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.906 [2024-10-14 14:46:48.421731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:33:07.906 [2024-10-14 14:46:48.436580] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.906 [2024-10-14 14:46:48.436596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.906 [2024-10-14 14:46:48.449146] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.906 [2024-10-14 14:46:48.449161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.906 [2024-10-14 14:46:48.461116] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.906 [2024-10-14 14:46:48.461131] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.906 [2024-10-14 14:46:48.473594] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.906 [2024-10-14 14:46:48.473609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.906 [2024-10-14 14:46:48.488425] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.906 [2024-10-14 14:46:48.488439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.906 [2024-10-14 14:46:48.501403] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.906 [2024-10-14 14:46:48.501417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.906 [2024-10-14 14:46:48.516004] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.906 [2024-10-14 14:46:48.516019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.906 [2024-10-14 14:46:48.528876] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.906 [2024-10-14 14:46:48.528890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.906 [2024-10-14 14:46:48.541478] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.906 [2024-10-14 14:46:48.541492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.906 [2024-10-14 14:46:48.556114] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.906 [2024-10-14 14:46:48.556129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.906 18848.60 IOPS, 147.25 MiB/s [2024-10-14T12:46:48.633Z] [2024-10-14 14:46:48.568317] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.906 [2024-10-14 14:46:48.568332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.906 00:33:07.906 Latency(us) 00:33:07.906 [2024-10-14T12:46:48.633Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:07.906 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:33:07.906 Nvme1n1 : 5.01 18851.02 147.27 0.00 0.00 6782.59 2580.48 12014.93 00:33:07.906 [2024-10-14T12:46:48.633Z] =================================================================================================================== 00:33:07.906 [2024-10-14T12:46:48.633Z] Total : 18851.02 147.27 0.00 0.00 6782.59 2580.48 12014.93 00:33:07.906 [2024-10-14 14:46:48.577123] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.906 [2024-10-14 14:46:48.577136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.906 [2024-10-14 14:46:48.589128] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.906 [2024-10-14 14:46:48.589141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.906 [2024-10-14 14:46:48.601126] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.906 [2024-10-14 14:46:48.601139] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.906 [2024-10-14 14:46:48.613127] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.906 [2024-10-14 14:46:48.613139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.906 [2024-10-14 14:46:48.625122] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.906 [2024-10-14 14:46:48.625132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.167 [2024-10-14 14:46:48.637120] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.167 [2024-10-14 14:46:48.637135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.167 [2024-10-14 14:46:48.649132] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.167 [2024-10-14 14:46:48.649139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.167 [2024-10-14 14:46:48.661122] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.167 [2024-10-14 14:46:48.661132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.167 [2024-10-14 14:46:48.673121] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.167 [2024-10-14 14:46:48.673129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.167 [2024-10-14 14:46:48.685119] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:08.167 [2024-10-14 14:46:48.685126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:08.167 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3647810) - No such process 00:33:08.167 14:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 
-- # wait 3647810 00:33:08.167 14:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:08.167 14:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:08.167 14:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:08.167 14:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:08.167 14:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:33:08.167 14:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:08.167 14:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:08.167 delay0 00:33:08.167 14:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:08.167 14:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:33:08.167 14:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:08.167 14:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:08.167 14:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:08.167 14:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:33:08.167 [2024-10-14 14:46:48.785431] 
nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:33:14.753 [2024-10-14 14:46:55.043557] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf1520 is same with the state(6) to be set 00:33:14.753 Initializing NVMe Controllers 00:33:14.753 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:14.753 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:14.753 Initialization complete. Launching workers. 00:33:14.753 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 1412 00:33:14.753 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1699, failed to submit 33 00:33:14.753 success 1501, unsuccessful 198, failed 0 00:33:14.753 14:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:33:14.753 14:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:33:14.753 14:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@514 -- # nvmfcleanup 00:33:14.753 14:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:33:14.753 14:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:14.753 14:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:33:14.753 14:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:14.753 14:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:14.753 rmmod nvme_tcp 00:33:14.753 rmmod nvme_fabrics 00:33:14.753 rmmod nvme_keyring 00:33:14.753 14:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:33:14.753 14:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:33:14.753 14:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:33:14.753 14:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@515 -- # '[' -n 3645502 ']' 00:33:14.753 14:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # killprocess 3645502 00:33:14.753 14:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 3645502 ']' 00:33:14.753 14:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 3645502 00:33:14.753 14:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:33:14.753 14:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:14.753 14:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3645502 00:33:14.753 14:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:14.753 14:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:14.753 14:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3645502' 00:33:14.753 killing process with pid 3645502 00:33:14.753 14:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 3645502 00:33:14.753 14:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 3645502 00:33:14.753 14:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:33:14.753 
14:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:33:14.753 14:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:33:14.753 14:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:33:14.753 14:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-save 00:33:14.753 14:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:33:14.753 14:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-restore 00:33:14.753 14:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:14.753 14:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:14.753 14:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:14.753 14:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:14.753 14:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:16.667 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:16.667 00:33:16.667 real 0m33.397s 00:33:16.667 user 0m42.666s 00:33:16.667 sys 0m11.799s 00:33:16.667 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:16.667 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:16.667 ************************************ 00:33:16.667 END TEST nvmf_zcopy 00:33:16.667 ************************************ 00:33:16.929 14:46:57 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:33:16.929 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:33:16.929 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:16.929 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:16.929 ************************************ 00:33:16.929 START TEST nvmf_nmic 00:33:16.929 ************************************ 00:33:16.929 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:33:16.929 * Looking for test storage... 00:33:16.929 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:16.929 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:16.929 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:33:16.929 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:16.929 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:16.929 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:16.929 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:16.929 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:16.929 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
scripts/common.sh@336 -- # IFS=.-: 00:33:16.929 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:33:16.929 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:33:16.929 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:33:16.929 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:33:16.929 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:33:16.929 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:33:16.929 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:16.929 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:33:16.929 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:33:16.929 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:16.929 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:16.929 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:33:16.929 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:33:16.929 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:16.929 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:33:16.929 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:33:16.929 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:33:16.929 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:33:16.929 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:16.929 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:33:16.929 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:33:16.929 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:16.929 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:16.929 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:33:16.929 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:16.929 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:16.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:16.929 --rc genhtml_branch_coverage=1 00:33:16.929 --rc 
genhtml_function_coverage=1 00:33:16.929 --rc genhtml_legend=1 00:33:16.929 --rc geninfo_all_blocks=1 00:33:16.929 --rc geninfo_unexecuted_blocks=1 00:33:16.929 00:33:16.929 ' 00:33:16.929 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:16.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:16.929 --rc genhtml_branch_coverage=1 00:33:16.929 --rc genhtml_function_coverage=1 00:33:16.929 --rc genhtml_legend=1 00:33:16.929 --rc geninfo_all_blocks=1 00:33:16.929 --rc geninfo_unexecuted_blocks=1 00:33:16.929 00:33:16.929 ' 00:33:16.929 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:16.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:16.929 --rc genhtml_branch_coverage=1 00:33:16.929 --rc genhtml_function_coverage=1 00:33:16.929 --rc genhtml_legend=1 00:33:16.929 --rc geninfo_all_blocks=1 00:33:16.929 --rc geninfo_unexecuted_blocks=1 00:33:16.929 00:33:16.929 ' 00:33:16.929 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:16.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:16.929 --rc genhtml_branch_coverage=1 00:33:16.929 --rc genhtml_function_coverage=1 00:33:16.929 --rc genhtml_legend=1 00:33:16.929 --rc geninfo_all_blocks=1 00:33:16.929 --rc geninfo_unexecuted_blocks=1 00:33:16.929 00:33:16.929 ' 00:33:16.929 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:17.191 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:33:17.191 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:17.191 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 
00:33:17.191 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:17.191 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:17.191 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:17.191 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:17.191 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:17.191 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:17.191 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:17.191 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:17.191 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:17.191 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:17.191 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:17.191 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:17.191 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:17.191 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:17.191 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:17.191 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:33:17.191 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:17.191 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:17.191 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:17.191 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:17.191 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:17.191 14:46:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:17.191 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:33:17.192 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:17.192 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:33:17.192 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:17.192 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:17.192 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:17.192 14:46:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:17.192 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:17.192 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:17.192 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:17.192 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:17.192 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:17.192 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:17.192 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:17.192 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:17.192 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:33:17.192 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:33:17.192 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:17.192 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # prepare_net_devs 00:33:17.192 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@436 -- # local -g is_hw=no 00:33:17.192 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # remove_spdk_ns 00:33:17.192 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:17.192 14:46:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:17.192 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:17.192 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:33:17.192 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:33:17.192 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:33:17.192 14:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@320 -- # local -ga e810 00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:25.339 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:25.339 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:25.339 Found net devices under 0000:31:00.0: cvl_0_0 00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:25.339 Found net devices under 0000:31:00.1: cvl_0_1 00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # is_hw=yes 00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ yes == yes ]] 
00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:25.339 14:47:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:25.339 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:25.340 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:25.340 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:25.340 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:25.340 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:25.340 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:25.340 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:25.340 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:25.340 14:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:25.340 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
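[Annotation] The `ipts` call above inserts the port-4420 ACCEPT rule with an `-m comment --comment 'SPDK_NVMF:…'` tag, and the `iptr` call near the end of this log cleans up by piping `iptables-save` through `grep -v SPDK_NVMF` into `iptables-restore`. A minimal sketch of that tag-and-filter pattern, run against an illustrative ruleset string rather than a real `iptables-save` dump (so it needs no root):

```shell
# Illustrative ruleset text; NOT a real iptables-save dump from this run.
ruleset='-A INPUT -i lo -j ACCEPT
-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment "SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT"
-A INPUT -p tcp --dport 22 -j ACCEPT'

# Cleanup keeps every rule except the SPDK_NVMF-tagged ones; in the real
# iptr helper the filtered text is fed back through iptables-restore.
cleaned=$(printf '%s\n' "$ruleset" | grep -v SPDK_NVMF)
printf '%s\n' "$cleaned"
```

Tagging each inserted rule with a fixed comment lets teardown remove exactly the rules the test added, without touching pre-existing firewall state.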
00:33:25.340 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.626 ms 00:33:25.340 00:33:25.340 --- 10.0.0.2 ping statistics --- 00:33:25.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:25.340 rtt min/avg/max/mdev = 0.626/0.626/0.626/0.000 ms 00:33:25.340 14:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:25.340 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:25.340 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.289 ms 00:33:25.340 00:33:25.340 --- 10.0.0.1 ping statistics --- 00:33:25.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:25.340 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:33:25.340 14:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:25.340 14:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@448 -- # return 0 00:33:25.340 14:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:33:25.340 14:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:25.340 14:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:33:25.340 14:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:33:25.340 14:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:25.340 14:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:33:25.340 14:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:33:25.340 14:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:33:25.340 14:47:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:33:25.340 14:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:25.340 14:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:25.340 14:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # nvmfpid=3654205 00:33:25.340 14:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # waitforlisten 3654205 00:33:25.340 14:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:33:25.340 14:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 3654205 ']' 00:33:25.340 14:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:25.340 14:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:25.340 14:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:25.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:25.340 14:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:25.340 14:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:25.340 [2024-10-14 14:47:05.125600] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:33:25.340 [2024-10-14 14:47:05.126702] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:33:25.340 [2024-10-14 14:47:05.126749] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:25.340 [2024-10-14 14:47:05.199689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:25.340 [2024-10-14 14:47:05.244112] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:25.340 [2024-10-14 14:47:05.244150] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:25.340 [2024-10-14 14:47:05.244158] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:25.340 [2024-10-14 14:47:05.244165] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:25.340 [2024-10-14 14:47:05.244171] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:25.340 [2024-10-14 14:47:05.246101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:25.340 [2024-10-14 14:47:05.246254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:25.340 [2024-10-14 14:47:05.246516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:25.340 [2024-10-14 14:47:05.246518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:25.340 [2024-10-14 14:47:05.302947] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:25.340 [2024-10-14 14:47:05.302963] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:33:25.340 [2024-10-14 14:47:05.303881] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:33:25.340 [2024-10-14 14:47:05.304801] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:25.340 [2024-10-14 14:47:05.304876] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:33:25.340 14:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:25.340 14:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:33:25.340 14:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:33:25.340 14:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:25.340 14:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:25.340 14:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:25.340 14:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:25.340 14:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:25.340 14:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:25.340 [2024-10-14 14:47:05.979175] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:25.340 14:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:25.340 14:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:25.340 14:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:25.340 14:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:25.340 Malloc0 00:33:25.340 14:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:25.340 14:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:33:25.340 14:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:25.340 14:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:25.340 14:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:25.340 14:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:25.340 14:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:25.340 14:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:25.340 14:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:25.340 14:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:25.340 14:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:25.340 14:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:25.340 [2024-10-14 
14:47:06.063287] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:25.601 14:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:25.601 14:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:33:25.601 test case1: single bdev can't be used in multiple subsystems 00:33:25.601 14:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:33:25.601 14:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:25.601 14:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:25.601 14:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:25.601 14:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:33:25.601 14:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:25.601 14:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:25.601 14:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:25.601 14:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:33:25.601 14:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:33:25.601 14:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:25.601 14:47:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:25.601 [2024-10-14 14:47:06.099029] bdev.c:8202:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:33:25.601 [2024-10-14 14:47:06.099055] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:33:25.601 [2024-10-14 14:47:06.099068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:25.601 request: 00:33:25.601 { 00:33:25.601 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:33:25.601 "namespace": { 00:33:25.601 "bdev_name": "Malloc0", 00:33:25.601 "no_auto_visible": false 00:33:25.601 }, 00:33:25.601 "method": "nvmf_subsystem_add_ns", 00:33:25.601 "req_id": 1 00:33:25.601 } 00:33:25.601 Got JSON-RPC error response 00:33:25.601 response: 00:33:25.601 { 00:33:25.601 "code": -32602, 00:33:25.601 "message": "Invalid parameters" 00:33:25.601 } 00:33:25.601 14:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:33:25.601 14:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:33:25.601 14:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:33:25.601 14:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:33:25.601 Adding namespace failed - expected result. 
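[Annotation] Test case1 above passes precisely because `nvmf_subsystem_add_ns` fails: Malloc0 is already claimed `exclusive_write` by cnode1, so adding it to cnode2 returns the JSON-RPC error and `nmic_status` becomes 1, the expected result. A toy shell model of that single-claimant rule (the helper name is hypothetical; SPDK tracks claims in C, in bdev.c):

```shell
# Toy model of the exclusive_write bdev claim; claim_exclusive is a
# hypothetical helper, not SPDK's API.
claimed_by=""

claim_exclusive() {
    if [ -n "$claimed_by" ]; then
        echo "bdev Malloc0 already claimed: type exclusive_write by $claimed_by" >&2
        return 1
    fi
    claimed_by=$1
}

claim_exclusive nqn.2016-06.io.spdk:cnode1           # first add_ns succeeds
if claim_exclusive nqn.2016-06.io.spdk:cnode2; then  # second must fail
    echo 'unexpected success'
else
    echo 'Adding namespace failed - expected result.'
fi
```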
00:33:25.601 14:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:33:25.601 test case2: host connect to nvmf target in multiple paths 00:33:25.601 14:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:25.601 14:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:25.601 14:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:25.601 [2024-10-14 14:47:06.111146] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:25.601 14:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:25.601 14:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:33:25.862 14:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:33:26.432 14:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:33:26.432 14:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:33:26.432 14:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:33:26.432 14:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:33:26.432 14:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:33:28.346 14:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:33:28.346 14:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:33:28.346 14:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:33:28.346 14:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:33:28.346 14:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:33:28.346 14:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:33:28.346 14:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:33:28.346 [global] 00:33:28.346 thread=1 00:33:28.346 invalidate=1 00:33:28.346 rw=write 00:33:28.346 time_based=1 00:33:28.346 runtime=1 00:33:28.346 ioengine=libaio 00:33:28.346 direct=1 00:33:28.346 bs=4096 00:33:28.346 iodepth=1 00:33:28.346 norandommap=0 00:33:28.346 numjobs=1 00:33:28.346 00:33:28.346 verify_dump=1 00:33:28.346 verify_backlog=512 00:33:28.346 verify_state_save=0 00:33:28.346 do_verify=1 00:33:28.346 verify=crc32c-intel 00:33:28.346 [job0] 00:33:28.346 filename=/dev/nvme0n1 00:33:28.346 Could not set queue depth (nvme0n1) 00:33:28.606 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:28.606 fio-3.35 00:33:28.606 Starting 1 thread 00:33:29.988 00:33:29.988 job0: (groupid=0, jobs=1): err= 0: pid=3655084: Mon Oct 14 
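[Annotation] The `waitforserial` sequence above (`(( i++ <= 15 ))` … `lsblk -l -o NAME,SERIAL` … `grep -c SPDKISFASTANDAWESOME`) polls until the expected number of NVMe devices with the subsystem's serial shows up. A self-contained sketch of that loop, substituting a mock device listing for real `lsblk` output so it runs anywhere:

```shell
# Stand-in for `lsblk -l -o NAME,SERIAL`; a real run lists the connected
# NVMe block devices and their serial numbers.
mock_lsblk() {
    printf 'nvme0n1 SPDKISFASTANDAWESOME\n'
}

# Poll up to 15 times for $expected devices carrying $serial, mirroring
# the waitforserial helper in autotest_common.sh.
wait_for_serial() {
    serial=$1 expected=$2 i=0
    while [ $((i += 1)) -le 15 ]; do
        found=$(mock_lsblk | grep -c "$serial" || true)
        [ "$found" -eq "$expected" ] && return 0
        sleep 0.1
    done
    return 1
}

wait_for_serial SPDKISFASTANDAWESOME 1 && echo 'serial present'
```

Polling with a bounded retry count keeps the test from hanging forever if `nvme connect` never produces a block device, while tolerating the short delay before the kernel registers it.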
14:47:10 2024 00:33:29.988 read: IOPS=487, BW=1950KiB/s (1997kB/s)(2032KiB/1042msec) 00:33:29.988 slat (nsec): min=6736, max=58421, avg=22588.92, stdev=7842.08 00:33:29.988 clat (usec): min=461, max=41993, avg=1573.51, stdev=5727.32 00:33:29.988 lat (usec): min=469, max=42018, avg=1596.09, stdev=5727.73 00:33:29.988 clat percentiles (usec): 00:33:29.988 | 1.00th=[ 537], 5.00th=[ 635], 10.00th=[ 676], 20.00th=[ 701], 00:33:29.988 | 30.00th=[ 758], 40.00th=[ 775], 50.00th=[ 783], 60.00th=[ 791], 00:33:29.988 | 70.00th=[ 799], 80.00th=[ 807], 90.00th=[ 824], 95.00th=[ 832], 00:33:29.988 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:33:29.988 | 99.99th=[42206] 00:33:29.988 write: IOPS=491, BW=1965KiB/s (2013kB/s)(2048KiB/1042msec); 0 zone resets 00:33:29.988 slat (nsec): min=9545, max=50922, avg=25776.06, stdev=11123.68 00:33:29.988 clat (usec): min=204, max=569, avg=411.55, stdev=61.83 00:33:29.988 lat (usec): min=237, max=619, avg=437.33, stdev=67.35 00:33:29.988 clat percentiles (usec): 00:33:29.988 | 1.00th=[ 253], 5.00th=[ 314], 10.00th=[ 326], 20.00th=[ 347], 00:33:29.988 | 30.00th=[ 371], 40.00th=[ 404], 50.00th=[ 424], 60.00th=[ 441], 00:33:29.988 | 70.00th=[ 457], 80.00th=[ 469], 90.00th=[ 478], 95.00th=[ 490], 00:33:29.988 | 99.00th=[ 529], 99.50th=[ 553], 99.90th=[ 570], 99.95th=[ 570], 00:33:29.988 | 99.99th=[ 570] 00:33:29.988 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:33:29.988 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:29.988 lat (usec) : 250=0.39%, 500=48.33%, 750=15.78%, 1000=34.41% 00:33:29.988 lat (msec) : 2=0.10%, 50=0.98% 00:33:29.988 cpu : usr=1.44%, sys=2.31%, ctx=1020, majf=0, minf=1 00:33:29.988 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:29.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:29.988 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:33:29.988 issued rwts: total=508,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:29.988 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:29.988 00:33:29.988 Run status group 0 (all jobs): 00:33:29.988 READ: bw=1950KiB/s (1997kB/s), 1950KiB/s-1950KiB/s (1997kB/s-1997kB/s), io=2032KiB (2081kB), run=1042-1042msec 00:33:29.988 WRITE: bw=1965KiB/s (2013kB/s), 1965KiB/s-1965KiB/s (2013kB/s-2013kB/s), io=2048KiB (2097kB), run=1042-1042msec 00:33:29.988 00:33:29.988 Disk stats (read/write): 00:33:29.988 nvme0n1: ios=554/512, merge=0/0, ticks=669/204, in_queue=873, util=92.38% 00:33:29.988 14:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:33:29.988 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:33:29.988 14:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:33:29.988 14:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:33:29.988 14:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:33:29.988 14:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:29.988 14:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:33:29.989 14:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:29.989 14:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:33:29.989 14:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:33:29.989 14:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 
00:33:29.989 14:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@514 -- # nvmfcleanup 00:33:29.989 14:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:33:29.989 14:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:29.989 14:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:33:29.989 14:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:29.989 14:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:29.989 rmmod nvme_tcp 00:33:29.989 rmmod nvme_fabrics 00:33:29.989 rmmod nvme_keyring 00:33:29.989 14:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:30.250 14:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:33:30.250 14:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:33:30.250 14:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@515 -- # '[' -n 3654205 ']' 00:33:30.250 14:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # killprocess 3654205 00:33:30.250 14:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 3654205 ']' 00:33:30.250 14:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 3654205 00:33:30.250 14:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:33:30.250 14:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:30.250 14:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o 
comm= 3654205 00:33:30.250 14:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:30.250 14:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:30.250 14:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3654205' 00:33:30.250 killing process with pid 3654205 00:33:30.250 14:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 3654205 00:33:30.250 14:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 3654205 00:33:30.250 14:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:33:30.250 14:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:33:30.250 14:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:33:30.250 14:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:33:30.250 14:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-save 00:33:30.250 14:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:33:30.250 14:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-restore 00:33:30.250 14:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:30.250 14:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:30.250 14:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:30.250 14:47:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:30.250 14:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:32.793 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:32.793 00:33:32.793 real 0m15.541s 00:33:32.793 user 0m36.483s 00:33:32.793 sys 0m7.468s 00:33:32.793 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:32.793 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:32.793 ************************************ 00:33:32.793 END TEST nvmf_nmic 00:33:32.793 ************************************ 00:33:32.793 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:33:32.793 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:33:32.793 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:32.793 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:32.793 ************************************ 00:33:32.793 START TEST nvmf_fio_target 00:33:32.793 ************************************ 00:33:32.793 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:33:32.793 * Looking for test storage... 
00:33:32.793 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:32.793 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:32.793 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:33:32.793 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:32.793 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:32.793 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:32.793 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:32.793 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:32.793 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:33:32.793 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:33:32.793 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:33:32.793 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:33:32.793 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:33:32.793 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:33:32.793 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:33:32.793 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:33:32.793 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:33:32.793 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:33:32.793 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:32.793 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:32.793 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:33:32.793 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:33:32.793 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:32.793 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:33:32.793 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:33:32.793 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:33:32.793 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:33:32.793 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:32.793 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:33:32.793 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:33:32.793 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:32.793 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:32.793 
14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:33:32.793 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:32.793 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:32.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:32.793 --rc genhtml_branch_coverage=1 00:33:32.793 --rc genhtml_function_coverage=1 00:33:32.793 --rc genhtml_legend=1 00:33:32.793 --rc geninfo_all_blocks=1 00:33:32.793 --rc geninfo_unexecuted_blocks=1 00:33:32.793 00:33:32.793 ' 00:33:32.793 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:32.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:32.793 --rc genhtml_branch_coverage=1 00:33:32.793 --rc genhtml_function_coverage=1 00:33:32.793 --rc genhtml_legend=1 00:33:32.793 --rc geninfo_all_blocks=1 00:33:32.793 --rc geninfo_unexecuted_blocks=1 00:33:32.793 00:33:32.793 ' 00:33:32.793 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:32.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:32.793 --rc genhtml_branch_coverage=1 00:33:32.793 --rc genhtml_function_coverage=1 00:33:32.793 --rc genhtml_legend=1 00:33:32.793 --rc geninfo_all_blocks=1 00:33:32.793 --rc geninfo_unexecuted_blocks=1 00:33:32.793 00:33:32.793 ' 00:33:32.793 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:32.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:32.793 --rc genhtml_branch_coverage=1 00:33:32.793 --rc genhtml_function_coverage=1 00:33:32.793 --rc genhtml_legend=1 00:33:32.793 --rc geninfo_all_blocks=1 
00:33:32.793 --rc geninfo_unexecuted_blocks=1 00:33:32.793 00:33:32.793 ' 00:33:32.793 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:32.793 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:33:32.793 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:32.794 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:32.794 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:32.794 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:32.794 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:32.794 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:32.794 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:32.794 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:32.794 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:32.794 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:32.794 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:32.794 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:32.794 
14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:32.794 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:32.794 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:32.794 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:32.794 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:32.794 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:33:32.794 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:32.794 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:32.794 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:32.794 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:32.794 14:47:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:32.794 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:32.794 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:33:32.794 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:32.794 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:33:32.794 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:32.794 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:32.794 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:32.794 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:32.794 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:32.794 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:32.794 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:32.794 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:32.794 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:32.794 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:32.794 
14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:32.794 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:32.794 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:32.794 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:33:32.794 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:33:32.794 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:32.794 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:33:32.794 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:33:32.794 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:33:32.794 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:32.794 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:32.794 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:32.794 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:33:32.794 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:33:32.794 14:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:33:32.794 14:47:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:33:40.937 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:40.937 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:33:40.937 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:40.937 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:40.937 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:40.937 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:40.937 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:40.937 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:33:40.937 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:40.937 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:33:40.937 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:33:40.937 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:33:40.937 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:33:40.937 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:33:40.937 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:33:40.937 14:47:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:40.937 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:40.937 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:40.937 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:40.937 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:40.937 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:40.937 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:40.937 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:40.937 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:40.937 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:40.937 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:40.937 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:40.937 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:40.937 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:40.937 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:40.937 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:40.937 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:40.937 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:40.937 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:40.937 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:40.937 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:40.937 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:40.937 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:40.937 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:40.937 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:40.937 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:40.937 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:40.937 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:40.937 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:40.937 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:40.938 
14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:40.938 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:40.938 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:40.938 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:40.938 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:40.938 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:40.938 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:40.938 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:40.938 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:40.938 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:40.938 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:40.938 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:40.938 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:40.938 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:40.938 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:40.938 Found net 
devices under 0000:31:00.0: cvl_0_0 00:33:40.938 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:40.938 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:40.938 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:40.938 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:40.938 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:40.938 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:40.938 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:40.938 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:40.938 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:40.938 Found net devices under 0000:31:00.1: cvl_0_1 00:33:40.938 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:40.938 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:33:40.938 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # is_hw=yes 00:33:40.938 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:33:40.938 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:33:40.938 14:47:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:33:40.938 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:40.938 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:40.938 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:40.938 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:40.938 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:40.938 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:40.938 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:40.938 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:40.938 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:40.938 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:40.938 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:40.938 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:40.938 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:40.938 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:33:40.938 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:40.938 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:40.938 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:40.938 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:40.938 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:40.938 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:40.938 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:40.938 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:40.938 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:40.938 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:40.938 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.680 ms 00:33:40.938 00:33:40.938 --- 10.0.0.2 ping statistics --- 00:33:40.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:40.938 rtt min/avg/max/mdev = 0.680/0.680/0.680/0.000 ms 00:33:40.938 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:40.938 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:40.938 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.299 ms 00:33:40.938 00:33:40.938 --- 10.0.0.1 ping statistics --- 00:33:40.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:40.938 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:33:40.938 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:40.938 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@448 -- # return 0 00:33:40.938 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:33:40.938 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:40.938 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:33:40.938 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:33:40.938 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:40.938 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:33:40.938 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:33:40.938 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:33:40.938 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:33:40.938 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:40.938 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:33:40.938 14:47:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # nvmfpid=3659720 00:33:40.938 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # waitforlisten 3659720 00:33:40.938 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:33:40.938 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 3659720 ']' 00:33:40.939 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:40.939 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:40.939 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:40.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:40.939 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:40.939 14:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:33:40.939 [2024-10-14 14:47:21.003371] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:40.939 [2024-10-14 14:47:21.004525] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 
00:33:40.939 [2024-10-14 14:47:21.004577] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:40.939 [2024-10-14 14:47:21.078222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:40.939 [2024-10-14 14:47:21.121992] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:40.939 [2024-10-14 14:47:21.122031] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:40.939 [2024-10-14 14:47:21.122039] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:40.939 [2024-10-14 14:47:21.122048] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:40.939 [2024-10-14 14:47:21.122054] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:40.939 [2024-10-14 14:47:21.125107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:40.939 [2024-10-14 14:47:21.125383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:40.939 [2024-10-14 14:47:21.125528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:40.939 [2024-10-14 14:47:21.125528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:40.939 [2024-10-14 14:47:21.182163] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:40.939 [2024-10-14 14:47:21.182269] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:40.939 [2024-10-14 14:47:21.183338] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:33:40.939 [2024-10-14 14:47:21.184261] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:40.939 [2024-10-14 14:47:21.184327] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:33:41.200 14:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:41.200 14:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:33:41.200 14:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:33:41.200 14:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:41.200 14:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:33:41.200 14:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:41.200 14:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:41.461 [2024-10-14 14:47:21.998180] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:41.461 14:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:41.723 14:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:33:41.723 14:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 
512 00:33:41.723 14:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:33:41.723 14:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:41.984 14:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:33:41.984 14:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:42.245 14:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:33:42.246 14:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:33:42.506 14:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:42.506 14:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:33:42.506 14:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:42.767 14:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:33:42.767 14:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:43.028 14:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:33:43.028 14:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:33:43.028 14:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:33:43.289 14:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:33:43.289 14:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:43.289 14:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:33:43.289 14:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:33:43.551 14:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:43.813 [2024-10-14 14:47:24.342241] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:43.813 14:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:33:43.813 14:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:33:44.074 14:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:33:44.646 14:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:33:44.646 14:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:33:44.646 14:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:33:44.646 14:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:33:44.646 14:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:33:44.646 14:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:33:46.563 14:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:33:46.563 14:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:33:46.563 14:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:33:46.563 14:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:33:46.563 14:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:33:46.563 14:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1208 -- # return 0 00:33:46.563 14:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:33:46.563 [global] 00:33:46.563 thread=1 00:33:46.563 invalidate=1 00:33:46.563 rw=write 00:33:46.563 time_based=1 00:33:46.563 runtime=1 00:33:46.563 ioengine=libaio 00:33:46.563 direct=1 00:33:46.563 bs=4096 00:33:46.563 iodepth=1 00:33:46.563 norandommap=0 00:33:46.563 numjobs=1 00:33:46.563 00:33:46.563 verify_dump=1 00:33:46.563 verify_backlog=512 00:33:46.563 verify_state_save=0 00:33:46.563 do_verify=1 00:33:46.563 verify=crc32c-intel 00:33:46.563 [job0] 00:33:46.563 filename=/dev/nvme0n1 00:33:46.563 [job1] 00:33:46.563 filename=/dev/nvme0n2 00:33:46.563 [job2] 00:33:46.563 filename=/dev/nvme0n3 00:33:46.563 [job3] 00:33:46.563 filename=/dev/nvme0n4 00:33:46.563 Could not set queue depth (nvme0n1) 00:33:46.563 Could not set queue depth (nvme0n2) 00:33:46.563 Could not set queue depth (nvme0n3) 00:33:46.563 Could not set queue depth (nvme0n4) 00:33:46.825 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:46.825 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:46.825 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:46.825 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:46.825 fio-3.35 00:33:46.825 Starting 4 threads 00:33:48.220 00:33:48.220 job0: (groupid=0, jobs=1): err= 0: pid=3661085: Mon Oct 14 14:47:28 2024 00:33:48.220 read: IOPS=22, BW=89.5KiB/s (91.6kB/s)(92.0KiB/1028msec) 00:33:48.220 slat (nsec): min=10090, max=30175, avg=26430.74, stdev=3645.04 00:33:48.220 clat (usec): min=1132, max=42094, avg=39688.72, stdev=8419.21 00:33:48.220 lat (usec): min=1162, 
max=42120, avg=39715.15, stdev=8418.37 00:33:48.220 clat percentiles (usec): 00:33:48.220 | 1.00th=[ 1139], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:33:48.220 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:33:48.220 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:33:48.220 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:33:48.220 | 99.99th=[42206] 00:33:48.220 write: IOPS=498, BW=1992KiB/s (2040kB/s)(2048KiB/1028msec); 0 zone resets 00:33:48.220 slat (usec): min=9, max=1707, avg=14.08, stdev=75.05 00:33:48.220 clat (usec): min=109, max=466, avg=201.08, stdev=92.73 00:33:48.220 lat (usec): min=119, max=2117, avg=215.16, stdev=125.22 00:33:48.220 clat percentiles (usec): 00:33:48.220 | 1.00th=[ 112], 5.00th=[ 114], 10.00th=[ 115], 20.00th=[ 117], 00:33:48.220 | 30.00th=[ 119], 40.00th=[ 125], 50.00th=[ 131], 60.00th=[ 241], 00:33:48.220 | 70.00th=[ 281], 80.00th=[ 306], 90.00th=[ 326], 95.00th=[ 338], 00:33:48.220 | 99.00th=[ 396], 99.50th=[ 420], 99.90th=[ 465], 99.95th=[ 465], 00:33:48.220 | 99.99th=[ 465] 00:33:48.220 bw ( KiB/s): min= 4096, max= 4096, per=41.12%, avg=4096.00, stdev= 0.00, samples=1 00:33:48.220 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:48.220 lat (usec) : 250=59.07%, 500=36.64% 00:33:48.220 lat (msec) : 2=0.19%, 50=4.11% 00:33:48.220 cpu : usr=0.10%, sys=0.78%, ctx=539, majf=0, minf=1 00:33:48.220 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:48.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:48.220 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:48.220 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:48.220 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:48.220 job1: (groupid=0, jobs=1): err= 0: pid=3661086: Mon Oct 14 14:47:28 2024 00:33:48.220 read: IOPS=15, BW=63.5KiB/s 
(65.0kB/s)(64.0KiB/1008msec) 00:33:48.220 slat (nsec): min=25323, max=29220, avg=26227.00, stdev=921.38 00:33:48.220 clat (usec): min=40688, max=42010, avg=41331.22, stdev=516.21 00:33:48.220 lat (usec): min=40717, max=42036, avg=41357.45, stdev=515.57 00:33:48.220 clat percentiles (usec): 00:33:48.220 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:33:48.220 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:33:48.220 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:33:48.220 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:33:48.220 | 99.99th=[42206] 00:33:48.220 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:33:48.220 slat (nsec): min=3082, max=67092, avg=26888.03, stdev=11445.04 00:33:48.220 clat (usec): min=238, max=1684, avg=642.96, stdev=136.62 00:33:48.220 lat (usec): min=241, max=1696, avg=669.84, stdev=139.39 00:33:48.220 clat percentiles (usec): 00:33:48.220 | 1.00th=[ 379], 5.00th=[ 412], 10.00th=[ 461], 20.00th=[ 529], 00:33:48.220 | 30.00th=[ 586], 40.00th=[ 619], 50.00th=[ 652], 60.00th=[ 685], 00:33:48.220 | 70.00th=[ 709], 80.00th=[ 742], 90.00th=[ 791], 95.00th=[ 848], 00:33:48.220 | 99.00th=[ 955], 99.50th=[ 988], 99.90th=[ 1680], 99.95th=[ 1680], 00:33:48.220 | 99.99th=[ 1680] 00:33:48.220 bw ( KiB/s): min= 4096, max= 4096, per=41.12%, avg=4096.00, stdev= 0.00, samples=1 00:33:48.220 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:48.220 lat (usec) : 250=0.19%, 500=15.34%, 750=64.20%, 1000=16.86% 00:33:48.220 lat (msec) : 2=0.38%, 50=3.03% 00:33:48.221 cpu : usr=0.20%, sys=1.79%, ctx=528, majf=0, minf=2 00:33:48.221 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:48.221 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:48.221 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:48.221 issued rwts: total=16,512,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:33:48.221 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:48.221 job2: (groupid=0, jobs=1): err= 0: pid=3661087: Mon Oct 14 14:47:28 2024 00:33:48.221 read: IOPS=15, BW=62.6KiB/s (64.1kB/s)(64.0KiB/1023msec) 00:33:48.221 slat (nsec): min=25101, max=26386, avg=25714.81, stdev=329.47 00:33:48.221 clat (usec): min=40944, max=42085, avg=41868.76, stdev=271.99 00:33:48.221 lat (usec): min=40970, max=42111, avg=41894.47, stdev=271.88 00:33:48.221 clat percentiles (usec): 00:33:48.221 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:33:48.221 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:33:48.221 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:33:48.221 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:33:48.221 | 99.99th=[42206] 00:33:48.221 write: IOPS=500, BW=2002KiB/s (2050kB/s)(2048KiB/1023msec); 0 zone resets 00:33:48.221 slat (nsec): min=9617, max=69680, avg=30534.95, stdev=8955.92 00:33:48.221 clat (usec): min=205, max=1009, avg=643.87, stdev=127.78 00:33:48.221 lat (usec): min=216, max=1042, avg=674.40, stdev=130.93 00:33:48.221 clat percentiles (usec): 00:33:48.221 | 1.00th=[ 343], 5.00th=[ 416], 10.00th=[ 482], 20.00th=[ 537], 00:33:48.221 | 30.00th=[ 594], 40.00th=[ 611], 50.00th=[ 644], 60.00th=[ 676], 00:33:48.221 | 70.00th=[ 717], 80.00th=[ 742], 90.00th=[ 791], 95.00th=[ 857], 00:33:48.221 | 99.00th=[ 930], 99.50th=[ 963], 99.90th=[ 1012], 99.95th=[ 1012], 00:33:48.221 | 99.99th=[ 1012] 00:33:48.221 bw ( KiB/s): min= 4096, max= 4096, per=41.12%, avg=4096.00, stdev= 0.00, samples=1 00:33:48.221 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:48.221 lat (usec) : 250=0.19%, 500=12.69%, 750=66.29%, 1000=17.61% 00:33:48.221 lat (msec) : 2=0.19%, 50=3.03% 00:33:48.221 cpu : usr=0.78%, sys=1.47%, ctx=529, majf=0, minf=1 00:33:48.221 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
>=64=0.0% 00:33:48.221 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:48.221 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:48.221 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:48.221 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:48.221 job3: (groupid=0, jobs=1): err= 0: pid=3661091: Mon Oct 14 14:47:28 2024 00:33:48.221 read: IOPS=638, BW=2553KiB/s (2615kB/s)(2556KiB/1001msec) 00:33:48.221 slat (nsec): min=6759, max=59820, avg=24394.09, stdev=7579.15 00:33:48.221 clat (usec): min=367, max=1162, avg=771.26, stdev=109.37 00:33:48.221 lat (usec): min=387, max=1189, avg=795.65, stdev=111.63 00:33:48.221 clat percentiles (usec): 00:33:48.221 | 1.00th=[ 474], 5.00th=[ 586], 10.00th=[ 627], 20.00th=[ 693], 00:33:48.221 | 30.00th=[ 717], 40.00th=[ 750], 50.00th=[ 791], 60.00th=[ 807], 00:33:48.221 | 70.00th=[ 824], 80.00th=[ 848], 90.00th=[ 898], 95.00th=[ 955], 00:33:48.221 | 99.00th=[ 1029], 99.50th=[ 1037], 99.90th=[ 1156], 99.95th=[ 1156], 00:33:48.221 | 99.99th=[ 1156] 00:33:48.221 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:33:48.221 slat (nsec): min=9894, max=58116, avg=32706.54, stdev=8093.34 00:33:48.221 clat (usec): min=126, max=925, avg=432.31, stdev=120.66 00:33:48.221 lat (usec): min=138, max=960, avg=465.02, stdev=122.89 00:33:48.221 clat percentiles (usec): 00:33:48.221 | 1.00th=[ 202], 5.00th=[ 255], 10.00th=[ 293], 20.00th=[ 322], 00:33:48.221 | 30.00th=[ 347], 40.00th=[ 400], 50.00th=[ 420], 60.00th=[ 461], 00:33:48.221 | 70.00th=[ 502], 80.00th=[ 537], 90.00th=[ 594], 95.00th=[ 635], 00:33:48.221 | 99.00th=[ 734], 99.50th=[ 766], 99.90th=[ 881], 99.95th=[ 922], 00:33:48.221 | 99.99th=[ 922] 00:33:48.221 bw ( KiB/s): min= 4096, max= 4096, per=41.12%, avg=4096.00, stdev= 0.00, samples=1 00:33:48.221 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:48.221 lat (usec) : 250=2.89%, 500=40.59%, 
750=33.07%, 1000=22.61% 00:33:48.221 lat (msec) : 2=0.84% 00:33:48.221 cpu : usr=2.40%, sys=5.10%, ctx=1666, majf=0, minf=1 00:33:48.221 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:48.221 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:48.221 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:48.221 issued rwts: total=639,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:48.221 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:48.221 00:33:48.221 Run status group 0 (all jobs): 00:33:48.221 READ: bw=2700KiB/s (2765kB/s), 62.6KiB/s-2553KiB/s (64.1kB/s-2615kB/s), io=2776KiB (2843kB), run=1001-1028msec 00:33:48.221 WRITE: bw=9961KiB/s (10.2MB/s), 1992KiB/s-4092KiB/s (2040kB/s-4190kB/s), io=10.0MiB (10.5MB), run=1001-1028msec 00:33:48.221 00:33:48.221 Disk stats (read/write): 00:33:48.221 nvme0n1: ios=77/512, merge=0/0, ticks=811/102, in_queue=913, util=87.07% 00:33:48.221 nvme0n2: ios=61/512, merge=0/0, ticks=554/328, in_queue=882, util=91.01% 00:33:48.221 nvme0n3: ios=68/512, merge=0/0, ticks=580/316, in_queue=896, util=95.34% 00:33:48.221 nvme0n4: ios=534/868, merge=0/0, ticks=1268/366, in_queue=1634, util=93.89% 00:33:48.221 14:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:33:48.221 [global] 00:33:48.221 thread=1 00:33:48.221 invalidate=1 00:33:48.221 rw=randwrite 00:33:48.221 time_based=1 00:33:48.221 runtime=1 00:33:48.221 ioengine=libaio 00:33:48.221 direct=1 00:33:48.221 bs=4096 00:33:48.221 iodepth=1 00:33:48.221 norandommap=0 00:33:48.221 numjobs=1 00:33:48.221 00:33:48.221 verify_dump=1 00:33:48.221 verify_backlog=512 00:33:48.221 verify_state_save=0 00:33:48.221 do_verify=1 00:33:48.221 verify=crc32c-intel 00:33:48.221 [job0] 00:33:48.221 filename=/dev/nvme0n1 00:33:48.221 [job1] 00:33:48.221 
filename=/dev/nvme0n2 00:33:48.221 [job2] 00:33:48.221 filename=/dev/nvme0n3 00:33:48.221 [job3] 00:33:48.221 filename=/dev/nvme0n4 00:33:48.221 Could not set queue depth (nvme0n1) 00:33:48.221 Could not set queue depth (nvme0n2) 00:33:48.221 Could not set queue depth (nvme0n3) 00:33:48.221 Could not set queue depth (nvme0n4) 00:33:48.481 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:48.482 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:48.482 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:48.482 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:48.482 fio-3.35 00:33:48.482 Starting 4 threads 00:33:49.870 00:33:49.870 job0: (groupid=0, jobs=1): err= 0: pid=3661600: Mon Oct 14 14:47:30 2024 00:33:49.870 read: IOPS=439, BW=1758KiB/s (1800kB/s)(1760KiB/1001msec) 00:33:49.870 slat (nsec): min=7339, max=65804, avg=25774.94, stdev=6237.11 00:33:49.870 clat (usec): min=326, max=41232, avg=1822.83, stdev=6288.35 00:33:49.870 lat (usec): min=353, max=41243, avg=1848.60, stdev=6288.37 00:33:49.870 clat percentiles (usec): 00:33:49.870 | 1.00th=[ 424], 5.00th=[ 545], 10.00th=[ 603], 20.00th=[ 676], 00:33:49.870 | 30.00th=[ 766], 40.00th=[ 824], 50.00th=[ 857], 60.00th=[ 873], 00:33:49.870 | 70.00th=[ 889], 80.00th=[ 906], 90.00th=[ 930], 95.00th=[ 971], 00:33:49.870 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:33:49.870 | 99.99th=[41157] 00:33:49.870 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:33:49.870 slat (nsec): min=9577, max=52822, avg=25408.62, stdev=11730.09 00:33:49.870 clat (usec): min=113, max=881, avg=326.79, stdev=111.90 00:33:49.870 lat (usec): min=124, max=914, avg=352.20, stdev=113.29 00:33:49.870 clat percentiles (usec): 00:33:49.870 | 
1.00th=[ 120], 5.00th=[ 143], 10.00th=[ 198], 20.00th=[ 269], 00:33:49.870 | 30.00th=[ 289], 40.00th=[ 302], 50.00th=[ 314], 60.00th=[ 330], 00:33:49.870 | 70.00th=[ 347], 80.00th=[ 379], 90.00th=[ 441], 95.00th=[ 515], 00:33:49.870 | 99.00th=[ 775], 99.50th=[ 832], 99.90th=[ 881], 99.95th=[ 881], 00:33:49.870 | 99.99th=[ 881] 00:33:49.870 bw ( KiB/s): min= 4087, max= 4087, per=33.84%, avg=4087.00, stdev= 0.00, samples=1 00:33:49.870 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:33:49.870 lat (usec) : 250=9.03%, 500=43.17%, 750=13.97%, 1000=32.04% 00:33:49.870 lat (msec) : 2=0.42%, 4=0.11%, 10=0.11%, 50=1.16% 00:33:49.870 cpu : usr=1.20%, sys=2.70%, ctx=954, majf=0, minf=1 00:33:49.870 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:49.870 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.870 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.870 issued rwts: total=440,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:49.870 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:49.870 job1: (groupid=0, jobs=1): err= 0: pid=3661601: Mon Oct 14 14:47:30 2024 00:33:49.870 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:33:49.870 slat (nsec): min=6637, max=45032, avg=22468.41, stdev=7665.25 00:33:49.870 clat (usec): min=153, max=42269, avg=1217.60, stdev=4579.93 00:33:49.870 lat (usec): min=172, max=42294, avg=1240.07, stdev=4580.18 00:33:49.870 clat percentiles (usec): 00:33:49.870 | 1.00th=[ 192], 5.00th=[ 245], 10.00th=[ 262], 20.00th=[ 351], 00:33:49.870 | 30.00th=[ 383], 40.00th=[ 457], 50.00th=[ 486], 60.00th=[ 660], 00:33:49.870 | 70.00th=[ 988], 80.00th=[ 1172], 90.00th=[ 1254], 95.00th=[ 1450], 00:33:49.870 | 99.00th=[40633], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:33:49.870 | 99.99th=[42206] 00:33:49.870 write: IOPS=745, BW=2981KiB/s (3053kB/s)(2984KiB/1001msec); 0 zone resets 00:33:49.870 slat (nsec): min=9357, 
max=52965, avg=26754.84, stdev=9931.53 00:33:49.870 clat (usec): min=104, max=931, avg=450.57, stdev=162.19 00:33:49.870 lat (usec): min=114, max=977, avg=477.33, stdev=166.63 00:33:49.870 clat percentiles (usec): 00:33:49.870 | 1.00th=[ 112], 5.00th=[ 174], 10.00th=[ 231], 20.00th=[ 306], 00:33:49.870 | 30.00th=[ 334], 40.00th=[ 396], 50.00th=[ 486], 60.00th=[ 529], 00:33:49.870 | 70.00th=[ 562], 80.00th=[ 578], 90.00th=[ 644], 95.00th=[ 701], 00:33:49.870 | 99.00th=[ 791], 99.50th=[ 865], 99.90th=[ 930], 99.95th=[ 930], 00:33:49.870 | 99.99th=[ 930] 00:33:49.870 bw ( KiB/s): min= 4096, max= 4096, per=33.92%, avg=4096.00, stdev= 0.00, samples=1 00:33:49.870 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:49.870 lat (usec) : 250=9.54%, 500=42.13%, 750=31.96%, 1000=4.61% 00:33:49.870 lat (msec) : 2=11.13%, 4=0.08%, 50=0.56% 00:33:49.870 cpu : usr=1.70%, sys=3.30%, ctx=1258, majf=0, minf=2 00:33:49.870 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:49.870 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.870 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.870 issued rwts: total=512,746,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:49.870 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:49.870 job2: (groupid=0, jobs=1): err= 0: pid=3661602: Mon Oct 14 14:47:30 2024 00:33:49.870 read: IOPS=656, BW=2625KiB/s (2688kB/s)(2628KiB/1001msec) 00:33:49.870 slat (nsec): min=7568, max=48022, avg=26683.65, stdev=5239.10 00:33:49.870 clat (usec): min=303, max=1185, avg=773.12, stdev=146.75 00:33:49.870 lat (usec): min=330, max=1212, avg=799.80, stdev=147.07 00:33:49.870 clat percentiles (usec): 00:33:49.870 | 1.00th=[ 437], 5.00th=[ 537], 10.00th=[ 562], 20.00th=[ 635], 00:33:49.870 | 30.00th=[ 685], 40.00th=[ 734], 50.00th=[ 791], 60.00th=[ 848], 00:33:49.870 | 70.00th=[ 873], 80.00th=[ 906], 90.00th=[ 938], 95.00th=[ 979], 00:33:49.870 | 99.00th=[ 
1074], 99.50th=[ 1090], 99.90th=[ 1188], 99.95th=[ 1188], 00:33:49.870 | 99.99th=[ 1188] 00:33:49.870 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:33:49.870 slat (nsec): min=9846, max=53218, avg=29394.48, stdev=10277.68 00:33:49.870 clat (usec): min=132, max=3080, avg=421.28, stdev=132.78 00:33:49.870 lat (usec): min=144, max=3114, avg=450.67, stdev=134.67 00:33:49.870 clat percentiles (usec): 00:33:49.870 | 1.00th=[ 202], 5.00th=[ 273], 10.00th=[ 293], 20.00th=[ 330], 00:33:49.870 | 30.00th=[ 355], 40.00th=[ 388], 50.00th=[ 433], 60.00th=[ 453], 00:33:49.870 | 70.00th=[ 469], 80.00th=[ 494], 90.00th=[ 529], 95.00th=[ 594], 00:33:49.870 | 99.00th=[ 709], 99.50th=[ 734], 99.90th=[ 1270], 99.95th=[ 3097], 00:33:49.870 | 99.99th=[ 3097] 00:33:49.870 bw ( KiB/s): min= 4096, max= 4096, per=33.92%, avg=4096.00, stdev= 0.00, samples=1 00:33:49.870 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:49.870 lat (usec) : 250=1.96%, 500=50.15%, 750=25.10%, 1000=21.42% 00:33:49.870 lat (msec) : 2=1.31%, 4=0.06% 00:33:49.870 cpu : usr=2.40%, sys=4.90%, ctx=1683, majf=0, minf=1 00:33:49.870 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:49.870 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.870 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.870 issued rwts: total=657,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:49.870 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:49.870 job3: (groupid=0, jobs=1): err= 0: pid=3661603: Mon Oct 14 14:47:30 2024 00:33:49.870 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:33:49.870 slat (nsec): min=7016, max=62807, avg=24591.98, stdev=6781.86 00:33:49.870 clat (usec): min=454, max=41253, avg=1024.66, stdev=1905.32 00:33:49.870 lat (usec): min=481, max=41279, avg=1049.26, stdev=1905.36 00:33:49.870 clat percentiles (usec): 00:33:49.870 | 1.00th=[ 537], 5.00th=[ 619], 
10.00th=[ 685], 20.00th=[ 758], 00:33:49.870 | 30.00th=[ 807], 40.00th=[ 832], 50.00th=[ 857], 60.00th=[ 898], 00:33:49.870 | 70.00th=[ 996], 80.00th=[ 1106], 90.00th=[ 1188], 95.00th=[ 1352], 00:33:49.870 | 99.00th=[ 1729], 99.50th=[ 2245], 99.90th=[41157], 99.95th=[41157], 00:33:49.870 | 99.99th=[41157] 00:33:49.870 write: IOPS=739, BW=2957KiB/s (3028kB/s)(2960KiB/1001msec); 0 zone resets 00:33:49.870 slat (nsec): min=9646, max=65348, avg=28161.99, stdev=9394.15 00:33:49.870 clat (usec): min=117, max=1009, avg=584.57, stdev=140.38 00:33:49.870 lat (usec): min=129, max=1041, avg=612.73, stdev=145.39 00:33:49.870 clat percentiles (usec): 00:33:49.870 | 1.00th=[ 269], 5.00th=[ 297], 10.00th=[ 379], 20.00th=[ 478], 00:33:49.870 | 30.00th=[ 529], 40.00th=[ 570], 50.00th=[ 594], 60.00th=[ 627], 00:33:49.870 | 70.00th=[ 660], 80.00th=[ 701], 90.00th=[ 750], 95.00th=[ 783], 00:33:49.870 | 99.00th=[ 914], 99.50th=[ 938], 99.90th=[ 1012], 99.95th=[ 1012], 00:33:49.870 | 99.99th=[ 1012] 00:33:49.870 bw ( KiB/s): min= 4087, max= 4087, per=33.84%, avg=4087.00, stdev= 0.00, samples=1 00:33:49.870 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:33:49.870 lat (usec) : 250=0.32%, 500=14.38%, 750=46.65%, 1000=26.68% 00:33:49.870 lat (msec) : 2=11.74%, 4=0.08%, 20=0.08%, 50=0.08% 00:33:49.870 cpu : usr=1.70%, sys=3.60%, ctx=1253, majf=0, minf=2 00:33:49.870 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:49.870 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.870 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.870 issued rwts: total=512,740,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:49.870 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:49.870 00:33:49.870 Run status group 0 (all jobs): 00:33:49.870 READ: bw=8476KiB/s (8679kB/s), 1758KiB/s-2625KiB/s (1800kB/s-2688kB/s), io=8484KiB (8688kB), run=1001-1001msec 00:33:49.870 WRITE: bw=11.8MiB/s (12.4MB/s), 
2046KiB/s-4092KiB/s (2095kB/s-4190kB/s), io=11.8MiB (12.4MB), run=1001-1001msec 00:33:49.870 00:33:49.870 Disk stats (read/write): 00:33:49.870 nvme0n1: ios=272/512, merge=0/0, ticks=1515/162, in_queue=1677, util=88.28% 00:33:49.870 nvme0n2: ios=351/512, merge=0/0, ticks=640/258, in_queue=898, util=91.73% 00:33:49.870 nvme0n3: ios=554/957, merge=0/0, ticks=1306/381, in_queue=1687, util=97.46% 00:33:49.870 nvme0n4: ios=555/512, merge=0/0, ticks=606/281, in_queue=887, util=96.36% 00:33:49.870 14:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:33:49.870 [global] 00:33:49.870 thread=1 00:33:49.870 invalidate=1 00:33:49.870 rw=write 00:33:49.870 time_based=1 00:33:49.870 runtime=1 00:33:49.870 ioengine=libaio 00:33:49.870 direct=1 00:33:49.870 bs=4096 00:33:49.870 iodepth=128 00:33:49.870 norandommap=0 00:33:49.870 numjobs=1 00:33:49.870 00:33:49.870 verify_dump=1 00:33:49.870 verify_backlog=512 00:33:49.870 verify_state_save=0 00:33:49.871 do_verify=1 00:33:49.871 verify=crc32c-intel 00:33:49.871 [job0] 00:33:49.871 filename=/dev/nvme0n1 00:33:49.871 [job1] 00:33:49.871 filename=/dev/nvme0n2 00:33:49.871 [job2] 00:33:49.871 filename=/dev/nvme0n3 00:33:49.871 [job3] 00:33:49.871 filename=/dev/nvme0n4 00:33:49.871 Could not set queue depth (nvme0n1) 00:33:49.871 Could not set queue depth (nvme0n2) 00:33:49.871 Could not set queue depth (nvme0n3) 00:33:49.871 Could not set queue depth (nvme0n4) 00:33:50.130 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:50.130 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:50.130 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:50.130 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=128 00:33:50.130 fio-3.35 00:33:50.130 Starting 4 threads 00:33:51.514 00:33:51.514 job0: (groupid=0, jobs=1): err= 0: pid=3662129: Mon Oct 14 14:47:32 2024 00:33:51.514 read: IOPS=6019, BW=23.5MiB/s (24.7MB/s)(23.6MiB/1005msec) 00:33:51.514 slat (nsec): min=932, max=17108k, avg=76422.95, stdev=633461.55 00:33:51.514 clat (usec): min=2337, max=35566, avg=10601.26, stdev=3828.92 00:33:51.514 lat (usec): min=3763, max=35581, avg=10677.68, stdev=3882.66 00:33:51.514 clat percentiles (usec): 00:33:51.514 | 1.00th=[ 4948], 5.00th=[ 6915], 10.00th=[ 7177], 20.00th=[ 7373], 00:33:51.514 | 30.00th=[ 7570], 40.00th=[ 8356], 50.00th=[ 9896], 60.00th=[10814], 00:33:51.514 | 70.00th=[11994], 80.00th=[13304], 90.00th=[15926], 95.00th=[19268], 00:33:51.514 | 99.00th=[20579], 99.50th=[21890], 99.90th=[25822], 99.95th=[27132], 00:33:51.514 | 99.99th=[35390] 00:33:51.514 write: IOPS=6113, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1005msec); 0 zone resets 00:33:51.514 slat (nsec): min=1680, max=17080k, avg=66644.90, stdev=586144.98 00:33:51.514 clat (usec): min=1279, max=33009, avg=10312.23, stdev=5261.13 00:33:51.514 lat (usec): min=1289, max=33015, avg=10378.87, stdev=5303.49 00:33:51.514 clat percentiles (usec): 00:33:51.514 | 1.00th=[ 3032], 5.00th=[ 4359], 10.00th=[ 5014], 20.00th=[ 6063], 00:33:51.514 | 30.00th=[ 6980], 40.00th=[ 7308], 50.00th=[ 7963], 60.00th=[10945], 00:33:51.514 | 70.00th=[12911], 80.00th=[13698], 90.00th=[18482], 95.00th=[19006], 00:33:51.514 | 99.00th=[28705], 99.50th=[30278], 99.90th=[31589], 99.95th=[31851], 00:33:51.514 | 99.99th=[32900] 00:33:51.514 bw ( KiB/s): min=20480, max=28672, per=25.78%, avg=24576.00, stdev=5792.62, samples=2 00:33:51.514 iops : min= 5120, max= 7168, avg=6144.00, stdev=1448.15, samples=2 00:33:51.514 lat (msec) : 2=0.08%, 4=1.63%, 10=51.67%, 20=41.98%, 50=4.63% 00:33:51.514 cpu : usr=4.58%, sys=6.27%, ctx=395, majf=0, minf=1 00:33:51.514 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, 
>=64=99.5% 00:33:51.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.514 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:51.514 issued rwts: total=6050,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:51.514 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:51.514 job1: (groupid=0, jobs=1): err= 0: pid=3662130: Mon Oct 14 14:47:32 2024 00:33:51.514 read: IOPS=4566, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1009msec) 00:33:51.514 slat (nsec): min=1007, max=11668k, avg=87497.73, stdev=656345.98 00:33:51.514 clat (usec): min=2207, max=37945, avg=10510.09, stdev=5708.74 00:33:51.514 lat (usec): min=2214, max=37954, avg=10597.59, stdev=5755.84 00:33:51.514 clat percentiles (usec): 00:33:51.514 | 1.00th=[ 4178], 5.00th=[ 5669], 10.00th=[ 6521], 20.00th=[ 6849], 00:33:51.514 | 30.00th=[ 7308], 40.00th=[ 7767], 50.00th=[ 8717], 60.00th=[10159], 00:33:51.514 | 70.00th=[11994], 80.00th=[13042], 90.00th=[13829], 95.00th=[19530], 00:33:51.514 | 99.00th=[36439], 99.50th=[36963], 99.90th=[38011], 99.95th=[38011], 00:33:51.514 | 99.99th=[38011] 00:33:51.514 write: IOPS=5056, BW=19.8MiB/s (20.7MB/s)(19.9MiB/1009msec); 0 zone resets 00:33:51.514 slat (nsec): min=1700, max=10596k, avg=112822.54, stdev=554404.32 00:33:51.514 clat (usec): min=1178, max=72328, avg=15589.55, stdev=16095.18 00:33:51.514 lat (usec): min=1190, max=72336, avg=15702.37, stdev=16206.22 00:33:51.514 clat percentiles (usec): 00:33:51.514 | 1.00th=[ 2835], 5.00th=[ 4555], 10.00th=[ 5997], 20.00th=[ 6652], 00:33:51.514 | 30.00th=[ 6783], 40.00th=[ 6980], 50.00th=[10028], 60.00th=[12649], 00:33:51.514 | 70.00th=[13435], 80.00th=[14222], 90.00th=[43254], 95.00th=[61604], 00:33:51.514 | 99.00th=[66323], 99.50th=[68682], 99.90th=[71828], 99.95th=[71828], 00:33:51.514 | 99.99th=[71828] 00:33:51.514 bw ( KiB/s): min=15216, max=24576, per=20.87%, avg=19896.00, stdev=6618.52, samples=2 00:33:51.514 iops : min= 3804, max= 6144, avg=4974.00, 
stdev=1654.63, samples=2 00:33:51.514 lat (msec) : 2=0.02%, 4=2.43%, 10=51.84%, 20=35.18%, 50=5.89% 00:33:51.514 lat (msec) : 100=4.63% 00:33:51.514 cpu : usr=3.47%, sys=4.56%, ctx=637, majf=0, minf=2 00:33:51.514 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:33:51.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.514 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:51.514 issued rwts: total=4608,5102,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:51.514 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:51.514 job2: (groupid=0, jobs=1): err= 0: pid=3662131: Mon Oct 14 14:47:32 2024 00:33:51.514 read: IOPS=8069, BW=31.5MiB/s (33.1MB/s)(31.7MiB/1006msec) 00:33:51.514 slat (nsec): min=1028, max=6595.9k, avg=60484.82, stdev=440657.56 00:33:51.514 clat (usec): min=3707, max=15761, avg=8112.82, stdev=1964.12 00:33:51.514 lat (usec): min=4341, max=15767, avg=8173.31, stdev=1985.01 00:33:51.514 clat percentiles (usec): 00:33:51.514 | 1.00th=[ 4752], 5.00th=[ 5342], 10.00th=[ 5866], 20.00th=[ 6390], 00:33:51.514 | 30.00th=[ 6915], 40.00th=[ 7373], 50.00th=[ 7767], 60.00th=[ 8225], 00:33:51.514 | 70.00th=[ 9110], 80.00th=[10028], 90.00th=[11076], 95.00th=[11600], 00:33:51.514 | 99.00th=[13042], 99.50th=[13435], 99.90th=[14091], 99.95th=[14484], 00:33:51.515 | 99.99th=[15795] 00:33:51.515 write: IOPS=8143, BW=31.8MiB/s (33.4MB/s)(32.0MiB/1006msec); 0 zone resets 00:33:51.515 slat (nsec): min=1733, max=6193.2k, avg=57472.75, stdev=406559.19 00:33:51.515 clat (usec): min=2361, max=14031, avg=7508.16, stdev=1820.07 00:33:51.515 lat (usec): min=2370, max=14034, avg=7565.64, stdev=1821.37 00:33:51.515 clat percentiles (usec): 00:33:51.515 | 1.00th=[ 3752], 5.00th=[ 4948], 10.00th=[ 5211], 20.00th=[ 5800], 00:33:51.515 | 30.00th=[ 6521], 40.00th=[ 6915], 50.00th=[ 7242], 60.00th=[ 7701], 00:33:51.515 | 70.00th=[ 8094], 80.00th=[ 9896], 90.00th=[10290], 95.00th=[10552], 
00:33:51.515 | 99.00th=[10945], 99.50th=[10945], 99.90th=[11469], 99.95th=[13829], 00:33:51.515 | 99.99th=[14091] 00:33:51.515 bw ( KiB/s): min=32768, max=32768, per=34.37%, avg=32768.00, stdev= 0.00, samples=2 00:33:51.515 iops : min= 8192, max= 8192, avg=8192.00, stdev= 0.00, samples=2 00:33:51.515 lat (msec) : 4=0.57%, 10=80.34%, 20=19.09% 00:33:51.515 cpu : usr=4.98%, sys=8.06%, ctx=502, majf=0, minf=1 00:33:51.515 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:33:51.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.515 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:51.515 issued rwts: total=8118,8192,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:51.515 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:51.515 job3: (groupid=0, jobs=1): err= 0: pid=3662133: Mon Oct 14 14:47:32 2024 00:33:51.515 read: IOPS=4568, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1007msec) 00:33:51.515 slat (nsec): min=1010, max=17782k, avg=134778.46, stdev=993346.58 00:33:51.515 clat (usec): min=2463, max=81559, avg=15595.40, stdev=9114.16 00:33:51.515 lat (usec): min=3160, max=81563, avg=15730.17, stdev=9212.03 00:33:51.515 clat percentiles (usec): 00:33:51.515 | 1.00th=[ 6849], 5.00th=[ 7832], 10.00th=[ 8455], 20.00th=[10028], 00:33:51.515 | 30.00th=[10683], 40.00th=[11207], 50.00th=[11863], 60.00th=[14091], 00:33:51.515 | 70.00th=[16057], 80.00th=[20317], 90.00th=[28443], 95.00th=[31327], 00:33:51.515 | 99.00th=[54264], 99.50th=[65799], 99.90th=[81265], 99.95th=[81265], 00:33:51.515 | 99.99th=[81265] 00:33:51.515 write: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec); 0 zone resets 00:33:51.515 slat (nsec): min=1751, max=8814.0k, avg=78011.41, stdev=393055.54 00:33:51.515 clat (usec): min=2550, max=81541, avg=12155.02, stdev=7408.02 00:33:51.515 lat (usec): min=2558, max=81543, avg=12233.04, stdev=7418.32 00:33:51.515 clat percentiles (usec): 00:33:51.515 | 1.00th=[ 3884], 5.00th=[ 
6325], 10.00th=[ 7439], 20.00th=[ 9110], 00:33:51.515 | 30.00th=[10552], 40.00th=[10945], 50.00th=[11076], 60.00th=[11207], 00:33:51.515 | 70.00th=[11207], 80.00th=[11338], 90.00th=[15139], 95.00th=[27132], 00:33:51.515 | 99.00th=[53740], 99.50th=[63177], 99.90th=[65799], 99.95th=[65799], 00:33:51.515 | 99.99th=[81265] 00:33:51.515 bw ( KiB/s): min=18096, max=18768, per=19.34%, avg=18432.00, stdev=475.18, samples=2 00:33:51.515 iops : min= 4524, max= 4692, avg=4608.00, stdev=118.79, samples=2 00:33:51.515 lat (msec) : 4=0.80%, 10=21.31%, 20=63.26%, 50=13.51%, 100=1.12% 00:33:51.515 cpu : usr=3.38%, sys=4.57%, ctx=548, majf=0, minf=1 00:33:51.515 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:33:51.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.515 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:51.515 issued rwts: total=4600,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:51.515 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:51.515 00:33:51.515 Run status group 0 (all jobs): 00:33:51.515 READ: bw=90.5MiB/s (94.9MB/s), 17.8MiB/s-31.5MiB/s (18.7MB/s-33.1MB/s), io=91.3MiB (95.7MB), run=1005-1009msec 00:33:51.515 WRITE: bw=93.1MiB/s (97.6MB/s), 17.9MiB/s-31.8MiB/s (18.7MB/s-33.4MB/s), io=93.9MiB (98.5MB), run=1005-1009msec 00:33:51.515 00:33:51.515 Disk stats (read/write): 00:33:51.515 nvme0n1: ios=4635/4691, merge=0/0, ticks=51243/52037, in_queue=103280, util=84.37% 00:33:51.515 nvme0n2: ios=4661/4639, merge=0/0, ticks=46399/54858, in_queue=101257, util=88.69% 00:33:51.515 nvme0n3: ios=6707/6840, merge=0/0, ticks=51861/48920, in_queue=100781, util=95.15% 00:33:51.515 nvme0n4: ios=3645/3671, merge=0/0, ticks=58421/45006, in_queue=103427, util=94.76% 00:33:51.515 14:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite 
-r 1 -v 00:33:51.515 [global] 00:33:51.515 thread=1 00:33:51.515 invalidate=1 00:33:51.515 rw=randwrite 00:33:51.515 time_based=1 00:33:51.515 runtime=1 00:33:51.515 ioengine=libaio 00:33:51.515 direct=1 00:33:51.515 bs=4096 00:33:51.515 iodepth=128 00:33:51.515 norandommap=0 00:33:51.515 numjobs=1 00:33:51.515 00:33:51.515 verify_dump=1 00:33:51.515 verify_backlog=512 00:33:51.515 verify_state_save=0 00:33:51.515 do_verify=1 00:33:51.515 verify=crc32c-intel 00:33:51.515 [job0] 00:33:51.515 filename=/dev/nvme0n1 00:33:51.515 [job1] 00:33:51.515 filename=/dev/nvme0n2 00:33:51.515 [job2] 00:33:51.515 filename=/dev/nvme0n3 00:33:51.515 [job3] 00:33:51.515 filename=/dev/nvme0n4 00:33:51.515 Could not set queue depth (nvme0n1) 00:33:51.515 Could not set queue depth (nvme0n2) 00:33:51.515 Could not set queue depth (nvme0n3) 00:33:51.515 Could not set queue depth (nvme0n4) 00:33:51.774 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:51.774 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:51.774 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:51.774 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:51.774 fio-3.35 00:33:51.774 Starting 4 threads 00:33:53.157 00:33:53.157 job0: (groupid=0, jobs=1): err= 0: pid=3662655: Mon Oct 14 14:47:33 2024 00:33:53.157 read: IOPS=5546, BW=21.7MiB/s (22.7MB/s)(21.8MiB/1008msec) 00:33:53.157 slat (nsec): min=921, max=10597k, avg=75714.84, stdev=538744.53 00:33:53.157 clat (usec): min=3267, max=29194, avg=9839.06, stdev=3778.39 00:33:53.157 lat (usec): min=3273, max=29197, avg=9914.77, stdev=3803.13 00:33:53.157 clat percentiles (usec): 00:33:53.157 | 1.00th=[ 4113], 5.00th=[ 5735], 10.00th=[ 5932], 20.00th=[ 6456], 00:33:53.157 | 30.00th=[ 7439], 40.00th=[ 8356], 50.00th=[ 
9241], 60.00th=[10028], 00:33:53.157 | 70.00th=[10814], 80.00th=[12518], 90.00th=[15139], 95.00th=[16909], 00:33:53.157 | 99.00th=[23462], 99.50th=[23725], 99.90th=[26870], 99.95th=[29230], 00:33:53.157 | 99.99th=[29230] 00:33:53.157 write: IOPS=5587, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1008msec); 0 zone resets 00:33:53.157 slat (nsec): min=1497, max=9481.3k, avg=97512.41, stdev=562248.89 00:33:53.157 clat (usec): min=1118, max=61525, avg=12867.96, stdev=10069.11 00:33:53.157 lat (usec): min=1128, max=61533, avg=12965.47, stdev=10136.69 00:33:53.157 clat percentiles (usec): 00:33:53.157 | 1.00th=[ 3425], 5.00th=[ 4621], 10.00th=[ 5407], 20.00th=[ 6063], 00:33:53.157 | 30.00th=[ 7570], 40.00th=[ 8717], 50.00th=[ 9503], 60.00th=[11207], 00:33:53.157 | 70.00th=[13042], 80.00th=[16188], 90.00th=[26346], 95.00th=[32637], 00:33:53.157 | 99.00th=[57934], 99.50th=[59507], 99.90th=[61604], 99.95th=[61604], 00:33:53.157 | 99.99th=[61604] 00:33:53.157 bw ( KiB/s): min=21104, max=23952, per=25.92%, avg=22528.00, stdev=2013.84, samples=2 00:33:53.157 iops : min= 5276, max= 5988, avg=5632.00, stdev=503.46, samples=2 00:33:53.157 lat (msec) : 2=0.04%, 4=1.19%, 10=54.77%, 20=36.73%, 50=6.14% 00:33:53.157 lat (msec) : 100=1.13% 00:33:53.157 cpu : usr=3.87%, sys=5.46%, ctx=490, majf=0, minf=1 00:33:53.157 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:33:53.157 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.157 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:53.157 issued rwts: total=5591,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.157 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:53.157 job1: (groupid=0, jobs=1): err= 0: pid=3662656: Mon Oct 14 14:47:33 2024 00:33:53.157 read: IOPS=4513, BW=17.6MiB/s (18.5MB/s)(17.7MiB/1006msec) 00:33:53.157 slat (nsec): min=938, max=18763k, avg=110470.26, stdev=761703.54 00:33:53.157 clat (usec): min=2898, max=48069, avg=16003.23, 
stdev=7957.28 00:33:53.157 lat (usec): min=5007, max=49196, avg=16113.70, stdev=8011.07 00:33:53.157 clat percentiles (usec): 00:33:53.157 | 1.00th=[ 5932], 5.00th=[ 7504], 10.00th=[ 8586], 20.00th=[ 9896], 00:33:53.157 | 30.00th=[10814], 40.00th=[11600], 50.00th=[14091], 60.00th=[15533], 00:33:53.157 | 70.00th=[17433], 80.00th=[21365], 90.00th=[28967], 95.00th=[31851], 00:33:53.157 | 99.00th=[41681], 99.50th=[44303], 99.90th=[47449], 99.95th=[47973], 00:33:53.157 | 99.99th=[47973] 00:33:53.157 write: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec); 0 zone resets 00:33:53.157 slat (nsec): min=1693, max=36681k, avg=86924.71, stdev=760711.87 00:33:53.157 clat (usec): min=1220, max=51166, avg=11877.26, stdev=8615.22 00:33:53.157 lat (usec): min=1230, max=52102, avg=11964.18, stdev=8668.82 00:33:53.158 clat percentiles (usec): 00:33:53.158 | 1.00th=[ 3032], 5.00th=[ 4621], 10.00th=[ 5473], 20.00th=[ 6849], 00:33:53.158 | 30.00th=[ 8291], 40.00th=[ 8586], 50.00th=[ 8979], 60.00th=[10028], 00:33:53.158 | 70.00th=[11469], 80.00th=[12780], 90.00th=[21627], 95.00th=[37487], 00:33:53.158 | 99.00th=[45351], 99.50th=[49546], 99.90th=[51119], 99.95th=[51119], 00:33:53.158 | 99.99th=[51119] 00:33:53.158 bw ( KiB/s): min=16384, max=20480, per=21.21%, avg=18432.00, stdev=2896.31, samples=2 00:33:53.158 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:33:53.158 lat (msec) : 2=0.24%, 4=0.63%, 10=39.23%, 20=42.16%, 50=17.56% 00:33:53.158 lat (msec) : 100=0.17% 00:33:53.158 cpu : usr=3.58%, sys=5.17%, ctx=291, majf=0, minf=1 00:33:53.158 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:33:53.158 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.158 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:53.158 issued rwts: total=4541,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.158 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:53.158 job2: (groupid=0, 
jobs=1): err= 0: pid=3662657: Mon Oct 14 14:47:33 2024 00:33:53.158 read: IOPS=6864, BW=26.8MiB/s (28.1MB/s)(26.9MiB/1004msec) 00:33:53.158 slat (nsec): min=1043, max=12021k, avg=68166.00, stdev=502320.28 00:33:53.158 clat (usec): min=2293, max=22841, avg=9046.74, stdev=2992.85 00:33:53.158 lat (usec): min=2959, max=22850, avg=9114.91, stdev=3013.82 00:33:53.158 clat percentiles (usec): 00:33:53.158 | 1.00th=[ 4228], 5.00th=[ 5211], 10.00th=[ 5800], 20.00th=[ 6390], 00:33:53.158 | 30.00th=[ 7242], 40.00th=[ 7767], 50.00th=[ 8455], 60.00th=[ 9372], 00:33:53.158 | 70.00th=[10290], 80.00th=[11338], 90.00th=[13173], 95.00th=[14877], 00:33:53.158 | 99.00th=[17695], 99.50th=[20317], 99.90th=[20317], 99.95th=[20317], 00:33:53.158 | 99.99th=[22938] 00:33:53.158 write: IOPS=7139, BW=27.9MiB/s (29.2MB/s)(28.0MiB/1004msec); 0 zone resets 00:33:53.158 slat (nsec): min=1623, max=8756.6k, avg=68986.04, stdev=433887.54 00:33:53.158 clat (usec): min=1198, max=34607, avg=9020.12, stdev=4583.86 00:33:53.158 lat (usec): min=1208, max=34609, avg=9089.11, stdev=4608.60 00:33:53.158 clat percentiles (usec): 00:33:53.158 | 1.00th=[ 3916], 5.00th=[ 4490], 10.00th=[ 5145], 20.00th=[ 5997], 00:33:53.158 | 30.00th=[ 6521], 40.00th=[ 6783], 50.00th=[ 7439], 60.00th=[ 8848], 00:33:53.158 | 70.00th=[ 9896], 80.00th=[11207], 90.00th=[15139], 95.00th=[17695], 00:33:53.158 | 99.00th=[30540], 99.50th=[33817], 99.90th=[34341], 99.95th=[34866], 00:33:53.158 | 99.99th=[34866] 00:33:53.158 bw ( KiB/s): min=26160, max=31184, per=32.99%, avg=28672.00, stdev=3552.50, samples=2 00:33:53.158 iops : min= 6540, max= 7796, avg=7168.00, stdev=888.13, samples=2 00:33:53.158 lat (msec) : 2=0.01%, 4=0.97%, 10=68.51%, 20=29.19%, 50=1.32% 00:33:53.158 cpu : usr=4.29%, sys=7.28%, ctx=548, majf=0, minf=2 00:33:53.158 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:33:53.158 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.158 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:53.158 issued rwts: total=6892,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.158 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:53.158 job3: (groupid=0, jobs=1): err= 0: pid=3662658: Mon Oct 14 14:47:33 2024 00:33:53.158 read: IOPS=4067, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1007msec) 00:33:53.158 slat (nsec): min=1004, max=21294k, avg=112060.07, stdev=843852.87 00:33:53.158 clat (usec): min=2612, max=52322, avg=15002.04, stdev=9951.99 00:33:53.158 lat (usec): min=2634, max=52329, avg=15114.10, stdev=10028.76 00:33:53.158 clat percentiles (usec): 00:33:53.158 | 1.00th=[ 3261], 5.00th=[ 4817], 10.00th=[ 6456], 20.00th=[ 7242], 00:33:53.158 | 30.00th=[ 9241], 40.00th=[10683], 50.00th=[11994], 60.00th=[14484], 00:33:53.158 | 70.00th=[16057], 80.00th=[19006], 90.00th=[30802], 95.00th=[40109], 00:33:53.158 | 99.00th=[47973], 99.50th=[49546], 99.90th=[49546], 99.95th=[50594], 00:33:53.158 | 99.99th=[52167] 00:33:53.158 write: IOPS=4462, BW=17.4MiB/s (18.3MB/s)(17.6MiB/1007msec); 0 zone resets 00:33:53.158 slat (nsec): min=1645, max=12243k, avg=100642.46, stdev=704248.68 00:33:53.158 clat (usec): min=1182, max=49282, avg=14693.29, stdev=9358.88 00:33:53.158 lat (usec): min=1193, max=49290, avg=14793.93, stdev=9419.34 00:33:53.158 clat percentiles (usec): 00:33:53.158 | 1.00th=[ 4293], 5.00th=[ 5014], 10.00th=[ 5604], 20.00th=[ 7177], 00:33:53.158 | 30.00th=[ 8717], 40.00th=[ 9634], 50.00th=[11207], 60.00th=[13173], 00:33:53.158 | 70.00th=[17171], 80.00th=[23462], 90.00th=[29230], 95.00th=[33817], 00:33:53.158 | 99.00th=[43779], 99.50th=[43779], 99.90th=[44303], 99.95th=[44827], 00:33:53.158 | 99.99th=[49021] 00:33:53.158 bw ( KiB/s): min=11248, max=23688, per=20.10%, avg=17468.00, stdev=8796.41, samples=2 00:33:53.158 iops : min= 2812, max= 5922, avg=4367.00, stdev=2199.10, samples=2 00:33:53.158 lat (msec) : 2=0.03%, 4=1.40%, 10=38.66%, 20=38.66%, 50=21.21% 00:33:53.158 lat (msec) : 100=0.03% 00:33:53.158 
cpu : usr=2.49%, sys=5.77%, ctx=314, majf=0, minf=1 00:33:53.158 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:33:53.158 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.158 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:53.158 issued rwts: total=4096,4494,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.158 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:53.158 00:33:53.158 Run status group 0 (all jobs): 00:33:53.158 READ: bw=81.8MiB/s (85.8MB/s), 15.9MiB/s-26.8MiB/s (16.7MB/s-28.1MB/s), io=82.5MiB (86.5MB), run=1004-1008msec 00:33:53.158 WRITE: bw=84.9MiB/s (89.0MB/s), 17.4MiB/s-27.9MiB/s (18.3MB/s-29.2MB/s), io=85.6MiB (89.7MB), run=1004-1008msec 00:33:53.158 00:33:53.158 Disk stats (read/write): 00:33:53.158 nvme0n1: ios=4329/4608, merge=0/0, ticks=41263/61109, in_queue=102372, util=87.37% 00:33:53.158 nvme0n2: ios=3420/3584, merge=0/0, ticks=28360/30297, in_queue=58657, util=89.91% 00:33:53.158 nvme0n3: ios=5735/6144, merge=0/0, ticks=48991/51817, in_queue=100808, util=93.14% 00:33:53.158 nvme0n4: ios=3641/3912, merge=0/0, ticks=40101/47160, in_queue=87261, util=93.81% 00:33:53.158 14:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:33:53.158 14:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3662963 00:33:53.158 14:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:33:53.158 14:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:33:53.158 [global] 00:33:53.158 thread=1 00:33:53.158 invalidate=1 00:33:53.158 rw=read 00:33:53.158 time_based=1 00:33:53.158 runtime=10 00:33:53.158 ioengine=libaio 00:33:53.158 direct=1 00:33:53.158 bs=4096 00:33:53.158 iodepth=1 00:33:53.158 
norandommap=1 00:33:53.158 numjobs=1 00:33:53.158 00:33:53.158 [job0] 00:33:53.158 filename=/dev/nvme0n1 00:33:53.158 [job1] 00:33:53.158 filename=/dev/nvme0n2 00:33:53.158 [job2] 00:33:53.158 filename=/dev/nvme0n3 00:33:53.158 [job3] 00:33:53.158 filename=/dev/nvme0n4 00:33:53.158 Could not set queue depth (nvme0n1) 00:33:53.158 Could not set queue depth (nvme0n2) 00:33:53.158 Could not set queue depth (nvme0n3) 00:33:53.158 Could not set queue depth (nvme0n4) 00:33:53.728 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:53.728 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:53.728 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:53.728 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:53.728 fio-3.35 00:33:53.728 Starting 4 threads 00:33:56.278 14:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:33:56.278 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=12144640, buflen=4096 00:33:56.278 fio: pid=3663177, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:33:56.278 14:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:33:56.540 14:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:56.540 14:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:33:56.540 fio: io_u error on file 
/dev/nvme0n3: Operation not supported: read offset=10960896, buflen=4096 00:33:56.540 fio: pid=3663176, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:33:56.802 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=9064448, buflen=4096 00:33:56.802 fio: pid=3663173, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:33:56.802 14:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:56.802 14:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:33:56.802 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=5111808, buflen=4096 00:33:56.802 fio: pid=3663175, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:33:56.802 14:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:56.802 14:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:33:57.064 00:33:57.064 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3663173: Mon Oct 14 14:47:37 2024 00:33:57.064 read: IOPS=744, BW=2977KiB/s (3049kB/s)(8852KiB/2973msec) 00:33:57.064 slat (usec): min=2, max=25759, avg=57.41, stdev=781.64 00:33:57.064 clat (usec): min=797, max=1880, avg=1266.88, stdev=111.73 00:33:57.064 lat (usec): min=822, max=27075, avg=1324.30, stdev=789.42 00:33:57.064 clat percentiles (usec): 00:33:57.064 | 1.00th=[ 955], 5.00th=[ 1074], 10.00th=[ 1123], 20.00th=[ 1172], 00:33:57.064 | 30.00th=[ 1221], 40.00th=[ 1254], 50.00th=[ 1270], 60.00th=[ 1303], 
00:33:57.064 | 70.00th=[ 1336], 80.00th=[ 1352], 90.00th=[ 1401], 95.00th=[ 1434], 00:33:57.064 | 99.00th=[ 1500], 99.50th=[ 1532], 99.90th=[ 1582], 99.95th=[ 1631], 00:33:57.064 | 99.99th=[ 1876] 00:33:57.064 bw ( KiB/s): min= 3032, max= 3120, per=26.63%, avg=3072.00, stdev=39.19, samples=5 00:33:57.064 iops : min= 758, max= 780, avg=768.00, stdev= 9.80, samples=5 00:33:57.064 lat (usec) : 1000=1.58% 00:33:57.064 lat (msec) : 2=98.37% 00:33:57.064 cpu : usr=0.77%, sys=2.25%, ctx=2218, majf=0, minf=2 00:33:57.064 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:57.064 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:57.064 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:57.064 issued rwts: total=2214,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:57.064 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:57.064 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3663175: Mon Oct 14 14:47:37 2024 00:33:57.064 read: IOPS=395, BW=1582KiB/s (1620kB/s)(4992KiB/3156msec) 00:33:57.064 slat (usec): min=6, max=26575, avg=115.47, stdev=1328.82 00:33:57.064 clat (usec): min=663, max=41064, avg=2389.07, stdev=7204.38 00:33:57.064 lat (usec): min=690, max=41090, avg=2504.62, stdev=7310.00 00:33:57.064 clat percentiles (usec): 00:33:57.064 | 1.00th=[ 766], 5.00th=[ 873], 10.00th=[ 922], 20.00th=[ 971], 00:33:57.064 | 30.00th=[ 996], 40.00th=[ 1020], 50.00th=[ 1037], 60.00th=[ 1057], 00:33:57.064 | 70.00th=[ 1090], 80.00th=[ 1123], 90.00th=[ 1172], 95.00th=[ 1287], 00:33:57.064 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:33:57.064 | 99.99th=[41157] 00:33:57.064 bw ( KiB/s): min= 96, max= 3704, per=13.25%, avg=1529.83, stdev=1619.74, samples=6 00:33:57.064 iops : min= 24, max= 926, avg=382.33, stdev=404.84, samples=6 00:33:57.064 lat (usec) : 750=0.56%, 1000=31.55% 00:33:57.064 lat (msec) : 2=63.81%, 
4=0.64%, 50=3.36% 00:33:57.064 cpu : usr=0.67%, sys=1.62%, ctx=1255, majf=0, minf=1 00:33:57.064 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:57.064 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:57.064 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:57.064 issued rwts: total=1249,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:57.064 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:57.064 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3663176: Mon Oct 14 14:47:37 2024 00:33:57.064 read: IOPS=954, BW=3816KiB/s (3908kB/s)(10.5MiB/2805msec) 00:33:57.064 slat (usec): min=7, max=19061, avg=38.69, stdev=437.12 00:33:57.064 clat (usec): min=380, max=42466, avg=994.82, stdev=1384.83 00:33:57.064 lat (usec): min=406, max=54713, avg=1033.51, stdev=1576.19 00:33:57.064 clat percentiles (usec): 00:33:57.064 | 1.00th=[ 570], 5.00th=[ 685], 10.00th=[ 734], 20.00th=[ 816], 00:33:57.064 | 30.00th=[ 865], 40.00th=[ 914], 50.00th=[ 955], 60.00th=[ 996], 00:33:57.064 | 70.00th=[ 1037], 80.00th=[ 1090], 90.00th=[ 1156], 95.00th=[ 1205], 00:33:57.064 | 99.00th=[ 1303], 99.50th=[ 1336], 99.90th=[41681], 99.95th=[41681], 00:33:57.064 | 99.99th=[42206] 00:33:57.064 bw ( KiB/s): min= 3728, max= 4288, per=34.80%, avg=4014.40, stdev=206.95, samples=5 00:33:57.064 iops : min= 932, max= 1072, avg=1003.60, stdev=51.74, samples=5 00:33:57.064 lat (usec) : 500=0.11%, 750=11.65%, 1000=49.72% 00:33:57.064 lat (msec) : 2=38.36%, 50=0.11% 00:33:57.064 cpu : usr=1.03%, sys=3.00%, ctx=2682, majf=0, minf=2 00:33:57.064 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:57.064 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:57.064 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:57.064 issued rwts: total=2677,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:33:57.064 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:57.064 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3663177: Mon Oct 14 14:47:37 2024 00:33:57.064 read: IOPS=1142, BW=4569KiB/s (4678kB/s)(11.6MiB/2596msec) 00:33:57.064 slat (nsec): min=6863, max=62015, avg=24934.62, stdev=5658.72 00:33:57.064 clat (usec): min=312, max=41951, avg=835.39, stdev=1456.65 00:33:57.064 lat (usec): min=320, max=41961, avg=860.33, stdev=1456.59 00:33:57.064 clat percentiles (usec): 00:33:57.064 | 1.00th=[ 461], 5.00th=[ 562], 10.00th=[ 603], 20.00th=[ 668], 00:33:57.064 | 30.00th=[ 717], 40.00th=[ 758], 50.00th=[ 799], 60.00th=[ 832], 00:33:57.064 | 70.00th=[ 865], 80.00th=[ 898], 90.00th=[ 938], 95.00th=[ 971], 00:33:57.064 | 99.00th=[ 1045], 99.50th=[ 1074], 99.90th=[41681], 99.95th=[41681], 00:33:57.064 | 99.99th=[42206] 00:33:57.064 bw ( KiB/s): min= 3640, max= 5016, per=39.97%, avg=4611.20, stdev=571.47, samples=5 00:33:57.064 iops : min= 910, max= 1254, avg=1152.80, stdev=142.87, samples=5 00:33:57.064 lat (usec) : 500=2.06%, 750=36.21%, 1000=59.10% 00:33:57.064 lat (msec) : 2=2.46%, 50=0.13% 00:33:57.064 cpu : usr=1.16%, sys=3.39%, ctx=2966, majf=0, minf=2 00:33:57.064 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:57.064 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:57.064 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:57.064 issued rwts: total=2966,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:57.064 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:57.064 00:33:57.064 Run status group 0 (all jobs): 00:33:57.064 READ: bw=11.3MiB/s (11.8MB/s), 1582KiB/s-4569KiB/s (1620kB/s-4678kB/s), io=35.6MiB (37.3MB), run=2596-3156msec 00:33:57.064 00:33:57.064 Disk stats (read/write): 00:33:57.064 nvme0n1: ios=2132/0, merge=0/0, ticks=2649/0, in_queue=2649, util=92.75% 00:33:57.064 nvme0n2: 
ios=1202/0, merge=0/0, ticks=2855/0, in_queue=2855, util=92.35% 00:33:57.064 nvme0n3: ios=2628/0, merge=0/0, ticks=2548/0, in_queue=2548, util=98.70% 00:33:57.064 nvme0n4: ios=2966/0, merge=0/0, ticks=2405/0, in_queue=2405, util=96.12% 00:33:57.064 14:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:57.064 14:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:33:57.332 14:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:57.332 14:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:33:57.593 14:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:57.593 14:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:33:57.593 14:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:57.593 14:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:33:57.855 14:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:33:57.855 14:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 3662963 00:33:57.855 14:47:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:33:57.855 14:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:33:57.855 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:33:57.855 14:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:33:57.855 14:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:33:57.855 14:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:33:57.855 14:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:57.855 14:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:33:57.855 14:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:57.855 14:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:33:57.855 14:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:33:57.855 14:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:33:57.855 nvmf hotplug test: fio failed as expected 00:33:57.855 14:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:58.117 14:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 
00:33:58.117 14:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:33:58.117 14:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:33:58.117 14:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:33:58.117 14:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:33:58.117 14:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:33:58.117 14:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:33:58.117 14:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:58.117 14:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:33:58.118 14:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:58.118 14:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:58.118 rmmod nvme_tcp 00:33:58.118 rmmod nvme_fabrics 00:33:58.118 rmmod nvme_keyring 00:33:58.118 14:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:58.118 14:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:33:58.118 14:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:33:58.118 14:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@515 -- # '[' -n 3659720 ']' 00:33:58.118 14:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # killprocess 3659720 00:33:58.118 14:47:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 3659720 ']' 00:33:58.118 14:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 3659720 00:33:58.118 14:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:33:58.118 14:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:58.118 14:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3659720 00:33:58.380 14:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:58.380 14:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:58.380 14:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3659720' 00:33:58.380 killing process with pid 3659720 00:33:58.380 14:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 3659720 00:33:58.380 14:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 3659720 00:33:58.380 14:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:33:58.380 14:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:33:58.380 14:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:33:58.380 14:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:33:58.380 14:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # 
iptables-save 00:33:58.380 14:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:33:58.380 14:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-restore 00:33:58.380 14:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:58.380 14:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:58.380 14:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:58.380 14:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:58.380 14:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:00.937 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:00.937 00:34:00.937 real 0m28.012s 00:34:00.937 user 2m15.908s 00:34:00.937 sys 0m12.543s 00:34:00.937 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:00.937 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:00.937 ************************************ 00:34:00.937 END TEST nvmf_fio_target 00:34:00.937 ************************************ 00:34:00.937 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:34:00.937 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:34:00.937 14:47:41 
nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:00.937 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:00.937 ************************************ 00:34:00.937 START TEST nvmf_bdevio 00:34:00.937 ************************************ 00:34:00.937 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:34:00.937 * Looking for test storage... 00:34:00.937 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:00.937 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:00.937 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:34:00.937 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:00.937 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:00.937 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:00.937 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:00.937 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:00.937 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:34:00.937 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:34:00.937 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:34:00.937 14:47:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:34:00.937 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:34:00.937 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:34:00.937 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:34:00.937 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:00.937 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:34:00.937 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:34:00.937 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:00.937 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:00.937 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:34:00.937 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:34:00.937 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:00.937 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:34:00.937 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:34:00.937 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:34:00.937 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:34:00.937 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:00.937 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:34:00.937 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:34:00.937 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:00.937 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:00.938 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:34:00.938 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:00.938 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:00.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:00.938 --rc genhtml_branch_coverage=1 
00:34:00.938 --rc genhtml_function_coverage=1 00:34:00.938 --rc genhtml_legend=1 00:34:00.938 --rc geninfo_all_blocks=1 00:34:00.938 --rc geninfo_unexecuted_blocks=1 00:34:00.938 00:34:00.938 ' 00:34:00.938 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:00.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:00.938 --rc genhtml_branch_coverage=1 00:34:00.938 --rc genhtml_function_coverage=1 00:34:00.938 --rc genhtml_legend=1 00:34:00.938 --rc geninfo_all_blocks=1 00:34:00.938 --rc geninfo_unexecuted_blocks=1 00:34:00.938 00:34:00.938 ' 00:34:00.938 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:00.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:00.938 --rc genhtml_branch_coverage=1 00:34:00.938 --rc genhtml_function_coverage=1 00:34:00.938 --rc genhtml_legend=1 00:34:00.938 --rc geninfo_all_blocks=1 00:34:00.938 --rc geninfo_unexecuted_blocks=1 00:34:00.938 00:34:00.938 ' 00:34:00.938 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:00.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:00.938 --rc genhtml_branch_coverage=1 00:34:00.938 --rc genhtml_function_coverage=1 00:34:00.938 --rc genhtml_legend=1 00:34:00.938 --rc geninfo_all_blocks=1 00:34:00.938 --rc geninfo_unexecuted_blocks=1 00:34:00.938 00:34:00.938 ' 00:34:00.938 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:00.938 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:34:00.938 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:00.938 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:00.938 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:00.938 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:00.938 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:00.938 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:00.938 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:00.938 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:00.938 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:00.938 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:00.938 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:00.938 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:00.938 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:00.938 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:00.938 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:00.938 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:00.938 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:00.938 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:34:00.938 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:00.938 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:00.938 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:00.938 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:00.938 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:00.938 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:00.938 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:34:00.938 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:00.938 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:34:00.938 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:00.938 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:00.938 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:00.938 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:00.938 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:00.938 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:00.938 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:00.938 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:00.938 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:00.938 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:00.938 14:47:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:00.938 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:00.938 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:34:00.938 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:34:00.938 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:00.938 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # prepare_net_devs 00:34:00.938 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@436 -- # local -g is_hw=no 00:34:00.938 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # remove_spdk_ns 00:34:00.938 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:00.938 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:00.938 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:00.938 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:34:00.938 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:34:00.938 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:34:00.938 14:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:09.084 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 
pci net_dev 00:34:09.084 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:34:09.084 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:09.084 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:09.084 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:09.084 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:09.084 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:09.084 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:34:09.084 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:09.084 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:34:09.084 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:34:09.084 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:34:09.084 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:34:09.084 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:34:09.084 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:34:09.084 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:09.084 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:09.084 14:47:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:09.084 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:09.084 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:09.084 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:09.084 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:09.084 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:09.084 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:09.084 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:09.084 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:09.084 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:09.084 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:09.084 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:09.084 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:09.084 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:09.084 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:09.084 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:09.084 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:09.084 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:09.084 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:09.084 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:09.084 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:09.084 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:09.084 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:09.084 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:09.084 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:09.084 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:09.084 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:09.084 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:09.084 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:09.084 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:09.084 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:09.084 14:47:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:09.084 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:09.084 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:09.084 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:09.084 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:09.084 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:09.084 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:09.084 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:09.084 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:09.084 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:09.085 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:09.085 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:09.085 Found net devices under 0000:31:00.0: cvl_0_0 00:34:09.085 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:09.085 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:09.085 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:34:09.085 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:09.085 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:09.085 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:09.085 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:09.085 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:09.085 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:09.085 Found net devices under 0000:31:00.1: cvl_0_1 00:34:09.085 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:09.085 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:34:09.085 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # is_hw=yes 00:34:09.085 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:34:09.085 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:34:09.085 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:34:09.085 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:09.085 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:09.085 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:09.085 14:47:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:09.085 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:09.085 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:09.085 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:09.085 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:09.085 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:09.085 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:09.085 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:09.085 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:09.085 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:09.085 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:09.085 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:09.085 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:09.085 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:09.085 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link 
set cvl_0_1 up 00:34:09.085 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:09.085 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:09.085 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:09.085 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:09.085 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:09.085 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:09.085 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.619 ms 00:34:09.085 00:34:09.085 --- 10.0.0.2 ping statistics --- 00:34:09.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:09.085 rtt min/avg/max/mdev = 0.619/0.619/0.619/0.000 ms 00:34:09.085 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:09.085 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:09.085 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.360 ms 00:34:09.085 00:34:09.085 --- 10.0.0.1 ping statistics --- 00:34:09.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:09.085 rtt min/avg/max/mdev = 0.360/0.360/0.360/0.000 ms 00:34:09.085 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:09.085 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@448 -- # return 0 00:34:09.085 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:34:09.085 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:09.085 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:34:09.085 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:34:09.085 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:09.085 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:34:09.085 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:34:09.085 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:34:09.085 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:34:09.085 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:09.085 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:09.085 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@507 -- # nvmfpid=3668256 00:34:09.085 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # waitforlisten 3668256 00:34:09.085 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:34:09.085 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 3668256 ']' 00:34:09.085 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:09.085 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:09.085 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:09.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:09.085 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:09.085 14:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:09.085 [2024-10-14 14:47:48.940144] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:09.085 [2024-10-14 14:47:48.941304] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 
00:34:09.085 [2024-10-14 14:47:48.941355] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:09.085 [2024-10-14 14:47:49.032514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:09.085 [2024-10-14 14:47:49.082498] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:09.085 [2024-10-14 14:47:49.082546] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:09.085 [2024-10-14 14:47:49.082554] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:09.085 [2024-10-14 14:47:49.082561] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:09.085 [2024-10-14 14:47:49.082567] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:09.085 [2024-10-14 14:47:49.084955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:34:09.085 [2024-10-14 14:47:49.085160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:34:09.085 [2024-10-14 14:47:49.085471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:34:09.085 [2024-10-14 14:47:49.085473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:09.085 [2024-10-14 14:47:49.168662] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:09.085 [2024-10-14 14:47:49.169477] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:09.085 [2024-10-14 14:47:49.170026] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:34:09.085 [2024-10-14 14:47:49.170457] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:09.085 [2024-10-14 14:47:49.170505] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:09.085 14:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:09.085 14:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:34:09.085 14:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:34:09.085 14:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:09.085 14:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:09.085 14:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:09.085 14:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:09.085 14:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:09.085 14:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:09.085 [2024-10-14 14:47:49.770500] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:09.085 14:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:09.085 14:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:09.085 14:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:34:09.085 14:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:09.347 Malloc0 00:34:09.347 14:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:09.347 14:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:09.347 14:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:09.347 14:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:09.347 14:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:09.347 14:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:09.347 14:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:09.347 14:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:09.347 14:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:09.347 14:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:09.347 14:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:09.347 14:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:09.347 [2024-10-14 14:47:49.842745] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:34:09.347 14:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:09.347 14:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:34:09.347 14:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:34:09.347 14:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # config=() 00:34:09.347 14:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # local subsystem config 00:34:09.347 14:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:34:09.347 14:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:09.347 { 00:34:09.347 "params": { 00:34:09.347 "name": "Nvme$subsystem", 00:34:09.347 "trtype": "$TEST_TRANSPORT", 00:34:09.347 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:09.347 "adrfam": "ipv4", 00:34:09.347 "trsvcid": "$NVMF_PORT", 00:34:09.347 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:09.347 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:09.347 "hdgst": ${hdgst:-false}, 00:34:09.347 "ddgst": ${ddgst:-false} 00:34:09.347 }, 00:34:09.347 "method": "bdev_nvme_attach_controller" 00:34:09.347 } 00:34:09.347 EOF 00:34:09.347 )") 00:34:09.347 14:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@580 -- # cat 00:34:09.347 14:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # jq . 
00:34:09.347 14:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@583 -- # IFS=, 00:34:09.347 14:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:34:09.347 "params": { 00:34:09.347 "name": "Nvme1", 00:34:09.347 "trtype": "tcp", 00:34:09.347 "traddr": "10.0.0.2", 00:34:09.347 "adrfam": "ipv4", 00:34:09.347 "trsvcid": "4420", 00:34:09.347 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:09.347 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:09.347 "hdgst": false, 00:34:09.347 "ddgst": false 00:34:09.347 }, 00:34:09.347 "method": "bdev_nvme_attach_controller" 00:34:09.347 }' 00:34:09.347 [2024-10-14 14:47:49.895907] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:34:09.347 [2024-10-14 14:47:49.895968] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3668453 ] 00:34:09.347 [2024-10-14 14:47:49.962067] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:09.347 [2024-10-14 14:47:50.004141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:09.347 [2024-10-14 14:47:50.004177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:09.347 [2024-10-14 14:47:50.004179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:09.608 I/O targets: 00:34:09.608 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:34:09.608 00:34:09.608 00:34:09.608 CUnit - A unit testing framework for C - Version 2.1-3 00:34:09.608 http://cunit.sourceforge.net/ 00:34:09.608 00:34:09.608 00:34:09.608 Suite: bdevio tests on: Nvme1n1 00:34:09.608 Test: blockdev write read block ...passed 00:34:09.869 Test: blockdev write zeroes read block ...passed 00:34:09.869 Test: blockdev write zeroes read no split ...passed 00:34:09.869 Test: blockdev 
write zeroes read split ...passed 00:34:09.869 Test: blockdev write zeroes read split partial ...passed 00:34:09.869 Test: blockdev reset ...[2024-10-14 14:47:50.469515] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:09.869 [2024-10-14 14:47:50.469580] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcbc000 (9): Bad file descriptor 00:34:09.869 [2024-10-14 14:47:50.564809] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:34:09.869 passed 00:34:09.869 Test: blockdev write read 8 blocks ...passed 00:34:09.869 Test: blockdev write read size > 128k ...passed 00:34:09.869 Test: blockdev write read invalid size ...passed 00:34:10.130 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:34:10.130 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:34:10.130 Test: blockdev write read max offset ...passed 00:34:10.130 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:34:10.130 Test: blockdev writev readv 8 blocks ...passed 00:34:10.130 Test: blockdev writev readv 30 x 1block ...passed 00:34:10.130 Test: blockdev writev readv block ...passed 00:34:10.130 Test: blockdev writev readv size > 128k ...passed 00:34:10.130 Test: blockdev writev readv size > 128k in two iovs ...passed 00:34:10.130 Test: blockdev comparev and writev ...[2024-10-14 14:47:50.747629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:10.130 [2024-10-14 14:47:50.747655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:10.130 [2024-10-14 14:47:50.747666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:10.130 [2024-10-14 14:47:50.747673] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:10.130 [2024-10-14 14:47:50.748136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:10.131 [2024-10-14 14:47:50.748144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:10.131 [2024-10-14 14:47:50.748154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:10.131 [2024-10-14 14:47:50.748160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:10.131 [2024-10-14 14:47:50.748606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:10.131 [2024-10-14 14:47:50.748613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:10.131 [2024-10-14 14:47:50.748623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:10.131 [2024-10-14 14:47:50.748628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:10.131 [2024-10-14 14:47:50.749130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:10.131 [2024-10-14 14:47:50.749138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:10.131 [2024-10-14 14:47:50.749147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 
0x0 len:0x200 00:34:10.131 [2024-10-14 14:47:50.749153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:10.131 passed 00:34:10.131 Test: blockdev nvme passthru rw ...passed 00:34:10.131 Test: blockdev nvme passthru vendor specific ...[2024-10-14 14:47:50.834936] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:10.131 [2024-10-14 14:47:50.834947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:10.131 [2024-10-14 14:47:50.835354] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:10.131 [2024-10-14 14:47:50.835361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:10.131 [2024-10-14 14:47:50.835709] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:10.131 [2024-10-14 14:47:50.835716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:10.131 [2024-10-14 14:47:50.835986] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:10.131 [2024-10-14 14:47:50.835992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:10.131 passed 00:34:10.131 Test: blockdev nvme admin passthru ...passed 00:34:10.392 Test: blockdev copy ...passed 00:34:10.392 00:34:10.393 Run Summary: Type Total Ran Passed Failed Inactive 00:34:10.393 suites 1 1 n/a 0 0 00:34:10.393 tests 23 23 23 0 0 00:34:10.393 asserts 152 152 152 0 n/a 00:34:10.393 00:34:10.393 Elapsed time = 1.266 seconds 00:34:10.393 14:47:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:10.393 14:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:10.393 14:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:10.393 14:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:10.393 14:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:34:10.393 14:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:34:10.393 14:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@514 -- # nvmfcleanup 00:34:10.393 14:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:34:10.393 14:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:10.393 14:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:34:10.393 14:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:10.393 14:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:10.393 rmmod nvme_tcp 00:34:10.393 rmmod nvme_fabrics 00:34:10.393 rmmod nvme_keyring 00:34:10.393 14:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:10.393 14:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:34:10.393 14:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:34:10.393 14:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@515 -- # 
'[' -n 3668256 ']' 00:34:10.393 14:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # killprocess 3668256 00:34:10.393 14:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 3668256 ']' 00:34:10.393 14:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 3668256 00:34:10.393 14:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:34:10.393 14:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:10.393 14:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3668256 00:34:10.654 14:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:34:10.654 14:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:34:10.654 14:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3668256' 00:34:10.654 killing process with pid 3668256 00:34:10.654 14:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 3668256 00:34:10.654 14:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 3668256 00:34:10.654 14:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:34:10.654 14:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:34:10.654 14:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:34:10.654 14:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 
00:34:10.654 14:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-save 00:34:10.654 14:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:34:10.654 14:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-restore 00:34:10.654 14:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:10.654 14:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:10.654 14:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:10.654 14:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:10.654 14:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:13.201 14:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:13.201 00:34:13.201 real 0m12.244s 00:34:13.201 user 0m9.761s 00:34:13.201 sys 0m6.594s 00:34:13.201 14:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:13.201 14:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:13.201 ************************************ 00:34:13.201 END TEST nvmf_bdevio 00:34:13.201 ************************************ 00:34:13.201 14:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:34:13.201 00:34:13.201 real 4m57.159s 00:34:13.201 user 10m11.949s 00:34:13.201 sys 2m5.084s 00:34:13.201 14:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:34:13.201 14:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:13.201 ************************************ 00:34:13.201 END TEST nvmf_target_core_interrupt_mode 00:34:13.201 ************************************ 00:34:13.202 14:47:53 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:34:13.202 14:47:53 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:34:13.202 14:47:53 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:13.202 14:47:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:13.202 ************************************ 00:34:13.202 START TEST nvmf_interrupt 00:34:13.202 ************************************ 00:34:13.202 14:47:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:34:13.202 * Looking for test storage... 
00:34:13.202 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:13.202 14:47:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:13.202 14:47:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lcov --version 00:34:13.202 14:47:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:13.202 14:47:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:13.202 14:47:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:13.202 14:47:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:13.202 14:47:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:13.202 14:47:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:34:13.202 14:47:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:34:13.202 14:47:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:34:13.202 14:47:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:34:13.202 14:47:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:34:13.202 14:47:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:34:13.202 14:47:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:34:13.202 14:47:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:13.202 14:47:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:34:13.202 14:47:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:34:13.202 14:47:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:13.202 14:47:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:13.202 14:47:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:34:13.202 14:47:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:34:13.202 14:47:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:13.202 14:47:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:34:13.202 14:47:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:34:13.202 14:47:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:34:13.202 14:47:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:34:13.202 14:47:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:13.202 14:47:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:34:13.202 14:47:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:34:13.202 14:47:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:13.202 14:47:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:13.202 14:47:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:34:13.202 14:47:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:13.202 14:47:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:13.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:13.202 --rc genhtml_branch_coverage=1 00:34:13.202 --rc genhtml_function_coverage=1 00:34:13.202 --rc genhtml_legend=1 00:34:13.202 --rc geninfo_all_blocks=1 00:34:13.202 --rc geninfo_unexecuted_blocks=1 00:34:13.202 00:34:13.202 ' 00:34:13.202 14:47:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:13.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:13.202 --rc genhtml_branch_coverage=1 00:34:13.202 --rc 
genhtml_function_coverage=1 00:34:13.202 --rc genhtml_legend=1 00:34:13.202 --rc geninfo_all_blocks=1 00:34:13.202 --rc geninfo_unexecuted_blocks=1 00:34:13.202 00:34:13.202 ' 00:34:13.202 14:47:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:13.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:13.202 --rc genhtml_branch_coverage=1 00:34:13.202 --rc genhtml_function_coverage=1 00:34:13.202 --rc genhtml_legend=1 00:34:13.202 --rc geninfo_all_blocks=1 00:34:13.202 --rc geninfo_unexecuted_blocks=1 00:34:13.202 00:34:13.202 ' 00:34:13.202 14:47:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:13.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:13.202 --rc genhtml_branch_coverage=1 00:34:13.202 --rc genhtml_function_coverage=1 00:34:13.202 --rc genhtml_legend=1 00:34:13.202 --rc geninfo_all_blocks=1 00:34:13.202 --rc geninfo_unexecuted_blocks=1 00:34:13.202 00:34:13.202 ' 00:34:13.202 14:47:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:13.202 14:47:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:34:13.202 14:47:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:13.202 14:47:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:13.202 14:47:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:13.202 14:47:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:13.202 14:47:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:13.202 14:47:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:13.202 14:47:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:13.202 14:47:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:13.202 
14:47:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:13.202 14:47:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:13.202 14:47:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:13.202 14:47:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:13.202 14:47:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:13.202 14:47:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:13.202 14:47:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:13.202 14:47:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:13.202 14:47:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:13.202 14:47:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:34:13.202 14:47:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:13.202 14:47:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:13.202 14:47:53 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:13.202 14:47:53 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:13.202 
14:47:53 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:13.202 14:47:53 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:13.202 14:47:53 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:34:13.202 14:47:53 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:13.202 14:47:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:34:13.202 14:47:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:13.202 14:47:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:13.202 14:47:53 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:13.202 14:47:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:13.202 14:47:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:13.202 14:47:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:13.202 14:47:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:13.202 14:47:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:13.203 14:47:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:13.203 14:47:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:13.203 14:47:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:34:13.203 14:47:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:34:13.203 14:47:53 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:34:13.203 14:47:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:34:13.203 14:47:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:13.203 14:47:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # prepare_net_devs 00:34:13.203 14:47:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@436 -- # local -g is_hw=no 00:34:13.203 14:47:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # remove_spdk_ns 00:34:13.203 14:47:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:13.203 14:47:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:13.203 14:47:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:13.203 14:47:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:34:13.203 
14:47:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:34:13.203 14:47:53 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:34:13.203 14:47:53 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:21.432 14:48:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:21.432 14:48:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:34:21.432 14:48:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:21.432 14:48:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:21.432 14:48:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:21.432 14:48:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:21.432 14:48:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:21.432 14:48:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:34:21.432 14:48:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:21.432 14:48:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:34:21.432 14:48:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:34:21.432 14:48:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:34:21.432 14:48:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:34:21.432 14:48:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:34:21.432 14:48:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:34:21.432 14:48:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:21.432 14:48:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:21.433 14:48:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:21.433 14:48:00 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:21.433 14:48:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:21.433 14:48:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:21.433 14:48:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:21.433 14:48:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:21.433 14:48:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:21.433 14:48:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:21.433 14:48:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:21.433 14:48:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:21.433 14:48:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:21.433 14:48:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:21.433 14:48:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:21.433 14:48:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:21.433 14:48:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:21.433 14:48:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:21.433 14:48:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:21.433 14:48:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:21.433 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:21.433 14:48:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:21.433 14:48:00 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:21.433 14:48:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:21.433 14:48:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:21.433 14:48:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:21.433 14:48:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:21.433 14:48:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:21.433 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:21.433 14:48:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:21.433 14:48:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:21.433 14:48:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:21.433 14:48:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:21.433 14:48:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:21.433 14:48:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:21.433 14:48:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:21.433 14:48:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:21.433 14:48:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:21.433 14:48:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:21.433 14:48:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:21.433 14:48:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:21.433 14:48:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:21.433 14:48:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:21.433 14:48:00 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:21.433 14:48:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:21.433 Found net devices under 0000:31:00.0: cvl_0_0 00:34:21.433 14:48:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:21.433 14:48:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:21.433 14:48:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:21.433 14:48:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:21.433 14:48:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:21.433 14:48:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:21.433 14:48:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:21.433 14:48:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:21.433 14:48:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:21.433 Found net devices under 0000:31:00.1: cvl_0_1 00:34:21.433 14:48:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:21.433 14:48:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:34:21.433 14:48:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # is_hw=yes 00:34:21.433 14:48:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:34:21.433 14:48:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:34:21.433 14:48:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:34:21.433 14:48:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:21.433 14:48:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:21.433 14:48:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:21.433 14:48:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:21.433 14:48:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:21.433 14:48:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:21.433 14:48:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:21.433 14:48:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:21.433 14:48:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:21.433 14:48:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:21.433 14:48:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:21.433 14:48:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:21.433 14:48:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:21.433 14:48:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:21.433 14:48:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:21.433 14:48:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:21.433 14:48:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:21.433 14:48:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:21.433 14:48:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:21.433 14:48:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:21.433 14:48:01 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:21.433 14:48:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:21.433 14:48:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:21.433 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:21.433 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.664 ms 00:34:21.433 00:34:21.433 --- 10.0.0.2 ping statistics --- 00:34:21.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:21.433 rtt min/avg/max/mdev = 0.664/0.664/0.664/0.000 ms 00:34:21.433 14:48:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:21.433 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:21.433 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:34:21.433 00:34:21.433 --- 10.0.0.1 ping statistics --- 00:34:21.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:21.433 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:34:21.433 14:48:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:21.433 14:48:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@448 -- # return 0 00:34:21.433 14:48:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:34:21.433 14:48:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:21.433 14:48:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:34:21.433 14:48:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:34:21.433 14:48:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:21.433 14:48:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:34:21.433 14:48:01 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:34:21.433 14:48:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:34:21.433 14:48:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:34:21.433 14:48:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:21.433 14:48:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:21.433 14:48:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # nvmfpid=3673054 00:34:21.433 14:48:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # waitforlisten 3673054 00:34:21.434 14:48:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:34:21.434 14:48:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@831 -- # '[' -z 3673054 ']' 00:34:21.434 14:48:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:21.434 14:48:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:21.434 14:48:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:21.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:21.434 14:48:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:21.434 14:48:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:21.434 [2024-10-14 14:48:01.291907] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:21.434 [2024-10-14 14:48:01.293075] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 
00:34:21.434 [2024-10-14 14:48:01.293130] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:21.434 [2024-10-14 14:48:01.367196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:21.434 [2024-10-14 14:48:01.409409] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:21.434 [2024-10-14 14:48:01.409447] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:21.434 [2024-10-14 14:48:01.409456] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:21.434 [2024-10-14 14:48:01.409463] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:21.434 [2024-10-14 14:48:01.409469] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:21.434 [2024-10-14 14:48:01.410843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:21.434 [2024-10-14 14:48:01.410846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:21.434 [2024-10-14 14:48:01.467020] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:21.434 [2024-10-14 14:48:01.467482] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:21.434 [2024-10-14 14:48:01.467837] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:34:21.434 14:48:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:21.434 14:48:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # return 0 00:34:21.434 14:48:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:34:21.434 14:48:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:21.434 14:48:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:21.434 14:48:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:21.434 14:48:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:34:21.434 14:48:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:34:21.434 14:48:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:34:21.434 14:48:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:34:21.771 5000+0 records in 00:34:21.771 5000+0 records out 00:34:21.771 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0173248 s, 591 MB/s 00:34:21.771 14:48:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:34:21.771 14:48:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.771 14:48:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:21.771 AIO0 00:34:21.771 14:48:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.771 14:48:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:34:21.771 14:48:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.771 14:48:02 
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:21.771 [2024-10-14 14:48:02.207450] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:21.771 14:48:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.771 14:48:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:34:21.771 14:48:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.771 14:48:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:21.771 14:48:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.771 14:48:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:34:21.771 14:48:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.771 14:48:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:21.771 14:48:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.771 14:48:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:21.771 14:48:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.771 14:48:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:21.771 [2024-10-14 14:48:02.248132] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:21.771 14:48:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.771 14:48:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:34:21.771 14:48:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3673054 0 00:34:21.771 14:48:02 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3673054 0 idle 00:34:21.771 14:48:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3673054 00:34:21.771 14:48:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:21.771 14:48:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:21.771 14:48:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:21.771 14:48:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:21.771 14:48:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:21.772 14:48:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:21.772 14:48:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:21.772 14:48:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:21.772 14:48:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:21.772 14:48:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3673054 -w 256 00:34:21.772 14:48:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:21.772 14:48:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3673054 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.24 reactor_0' 00:34:21.772 14:48:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3673054 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.24 reactor_0 00:34:21.772 14:48:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:21.772 14:48:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:21.772 14:48:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:21.772 14:48:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:21.772 14:48:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:21.772 
14:48:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:21.772 14:48:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:21.772 14:48:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:21.772 14:48:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:34:21.772 14:48:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3673054 1 00:34:21.772 14:48:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3673054 1 idle 00:34:21.772 14:48:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3673054 00:34:21.772 14:48:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:21.772 14:48:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:21.772 14:48:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:21.772 14:48:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:21.772 14:48:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:21.772 14:48:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:21.772 14:48:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:21.772 14:48:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:21.772 14:48:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:21.772 14:48:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3673054 -w 256 00:34:21.772 14:48:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:22.110 14:48:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3673093 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.00 reactor_1' 00:34:22.110 14:48:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3673093 root 20 0 128.2g 
44928 32256 S 0.0 0.0 0:00.00 reactor_1 00:34:22.110 14:48:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:22.110 14:48:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:22.110 14:48:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:22.110 14:48:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:22.110 14:48:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:22.110 14:48:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:22.110 14:48:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:22.110 14:48:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:22.110 14:48:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:34:22.110 14:48:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=3673316 00:34:22.110 14:48:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:34:22.110 14:48:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:34:22.110 14:48:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:22.110 14:48:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3673054 0 00:34:22.110 14:48:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3673054 0 busy 00:34:22.110 14:48:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3673054 00:34:22.110 14:48:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:22.110 14:48:02 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@12 -- # local state=busy 00:34:22.110 14:48:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:34:22.110 14:48:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:22.110 14:48:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:34:22.110 14:48:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:22.110 14:48:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:22.110 14:48:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:22.110 14:48:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3673054 -w 256 00:34:22.110 14:48:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:22.110 14:48:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3673054 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:00.46 reactor_0' 00:34:22.110 14:48:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3673054 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:00.46 reactor_0 00:34:22.110 14:48:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:22.110 14:48:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:22.399 14:48:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:34:22.399 14:48:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:34:22.399 14:48:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:34:22.399 14:48:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:34:22.399 14:48:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:34:22.399 14:48:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:22.399 14:48:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:34:22.399 14:48:02 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:34:22.399 14:48:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3673054 1 00:34:22.399 14:48:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3673054 1 busy 00:34:22.399 14:48:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3673054 00:34:22.399 14:48:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:22.399 14:48:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:34:22.399 14:48:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:34:22.399 14:48:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:22.399 14:48:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:34:22.399 14:48:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:22.399 14:48:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:22.399 14:48:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:22.399 14:48:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3673054 -w 256 00:34:22.399 14:48:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:22.399 14:48:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3673093 root 20 0 128.2g 44928 32256 R 93.8 0.0 0:00.30 reactor_1' 00:34:22.399 14:48:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3673093 root 20 0 128.2g 44928 32256 R 93.8 0.0 0:00.30 reactor_1 00:34:22.399 14:48:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:22.399 14:48:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:22.399 14:48:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=93.8 00:34:22.399 14:48:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=93 00:34:22.399 14:48:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:34:22.399 14:48:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:34:22.399 14:48:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:34:22.399 14:48:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:22.399 14:48:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 3673316 00:34:32.403 Initializing NVMe Controllers 00:34:32.403 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:32.403 Controller IO queue size 256, less than required. 00:34:32.403 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:34:32.403 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:34:32.403 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:34:32.403 Initialization complete. Launching workers. 
00:34:32.403 ======================================================== 00:34:32.403 Latency(us) 00:34:32.403 Device Information : IOPS MiB/s Average min max 00:34:32.404 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16450.41 64.26 15573.53 2939.83 57171.83 00:34:32.404 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 18156.20 70.92 14101.26 7861.99 30891.48 00:34:32.404 ======================================================== 00:34:32.404 Total : 34606.61 135.18 14801.11 2939.83 57171.83 00:34:32.404 00:34:32.404 14:48:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:34:32.404 14:48:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3673054 0 00:34:32.404 14:48:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3673054 0 idle 00:34:32.404 14:48:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3673054 00:34:32.404 14:48:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:32.404 14:48:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:32.404 14:48:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:32.404 14:48:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:32.404 14:48:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:32.404 14:48:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:32.404 14:48:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:32.404 14:48:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:32.404 14:48:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:32.404 14:48:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:32.404 14:48:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 
3673054 -w 256 00:34:32.404 14:48:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3673054 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.24 reactor_0' 00:34:32.404 14:48:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3673054 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.24 reactor_0 00:34:32.404 14:48:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:32.404 14:48:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:32.404 14:48:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:32.404 14:48:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:32.404 14:48:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:32.404 14:48:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:32.404 14:48:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:32.404 14:48:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:32.404 14:48:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:34:32.404 14:48:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3673054 1 00:34:32.404 14:48:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3673054 1 idle 00:34:32.404 14:48:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3673054 00:34:32.404 14:48:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:32.404 14:48:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:32.404 14:48:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:32.404 14:48:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:32.404 14:48:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:32.404 14:48:12 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:32.404 14:48:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:32.404 14:48:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:32.404 14:48:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:32.404 14:48:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3673054 -w 256 00:34:32.404 14:48:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:32.665 14:48:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3673093 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.00 reactor_1' 00:34:32.665 14:48:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3673093 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.00 reactor_1 00:34:32.665 14:48:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:32.665 14:48:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:32.665 14:48:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:32.665 14:48:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:32.665 14:48:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:32.665 14:48:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:32.665 14:48:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:32.665 14:48:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:32.665 14:48:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:34:33.236 14:48:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 
00:34:33.236 14:48:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1198 -- # local i=0 00:34:33.236 14:48:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:34:33.236 14:48:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:34:33.236 14:48:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1205 -- # sleep 2 00:34:35.151 14:48:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:34:35.152 14:48:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:34:35.152 14:48:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:34:35.152 14:48:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:34:35.152 14:48:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:34:35.152 14:48:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # return 0 00:34:35.152 14:48:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:34:35.152 14:48:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3673054 0 00:34:35.152 14:48:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3673054 0 idle 00:34:35.152 14:48:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3673054 00:34:35.152 14:48:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:35.152 14:48:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:35.152 14:48:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:35.152 14:48:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:35.152 14:48:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:35.152 14:48:15 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:35.152 14:48:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:35.152 14:48:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:35.152 14:48:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:35.152 14:48:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3673054 -w 256 00:34:35.152 14:48:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:35.152 14:48:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3673054 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.48 reactor_0' 00:34:35.152 14:48:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3673054 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.48 reactor_0 00:34:35.152 14:48:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:35.152 14:48:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:35.152 14:48:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:35.152 14:48:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:35.152 14:48:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:35.152 14:48:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:35.152 14:48:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:35.152 14:48:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:35.152 14:48:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:34:35.152 14:48:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3673054 1 00:34:35.152 14:48:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3673054 1 idle 00:34:35.152 14:48:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3673054 00:34:35.152 
14:48:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:35.152 14:48:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:35.152 14:48:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:35.152 14:48:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:35.152 14:48:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:35.152 14:48:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:35.152 14:48:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:35.152 14:48:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:35.152 14:48:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:35.152 14:48:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3673054 -w 256 00:34:35.152 14:48:15 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:35.413 14:48:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3673093 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.13 reactor_1' 00:34:35.413 14:48:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3673093 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.13 reactor_1 00:34:35.413 14:48:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:35.413 14:48:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:35.413 14:48:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:35.413 14:48:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:35.413 14:48:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:35.413 14:48:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:35.413 14:48:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > 
idle_threshold )) 00:34:35.413 14:48:16 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:35.413 14:48:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:34:35.674 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:35.674 14:48:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:34:35.674 14:48:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1219 -- # local i=0 00:34:35.674 14:48:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:34:35.674 14:48:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:35.674 14:48:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:34:35.674 14:48:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:35.674 14:48:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # return 0 00:34:35.674 14:48:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:34:35.674 14:48:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:34:35.674 14:48:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@514 -- # nvmfcleanup 00:34:35.674 14:48:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:34:35.674 14:48:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:35.674 14:48:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:34:35.674 14:48:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:35.674 14:48:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:35.674 rmmod nvme_tcp 00:34:35.674 rmmod nvme_fabrics 00:34:35.674 rmmod nvme_keyring 00:34:35.674 14:48:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:35.674 14:48:16 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:34:35.674 14:48:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:34:35.674 14:48:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@515 -- # '[' -n 3673054 ']' 00:34:35.674 14:48:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # killprocess 3673054 00:34:35.674 14:48:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@950 -- # '[' -z 3673054 ']' 00:34:35.674 14:48:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # kill -0 3673054 00:34:35.674 14:48:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # uname 00:34:35.674 14:48:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:35.674 14:48:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3673054 00:34:35.674 14:48:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:35.674 14:48:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:35.674 14:48:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3673054' 00:34:35.674 killing process with pid 3673054 00:34:35.674 14:48:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@969 -- # kill 3673054 00:34:35.674 14:48:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@974 -- # wait 3673054 00:34:35.934 14:48:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:34:35.934 14:48:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:34:35.934 14:48:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:34:35.934 14:48:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:34:35.934 14:48:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # iptables-save 00:34:35.934 14:48:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # iptables-restore 00:34:35.934 14:48:16 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:34:35.934 14:48:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:35.934 14:48:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:35.934 14:48:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:35.934 14:48:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:35.934 14:48:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:37.845 14:48:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:37.845 00:34:37.845 real 0m24.987s 00:34:37.845 user 0m40.220s 00:34:37.845 sys 0m9.447s 00:34:37.845 14:48:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:37.845 14:48:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:37.845 ************************************ 00:34:37.845 END TEST nvmf_interrupt 00:34:37.845 ************************************ 00:34:38.106 00:34:38.106 real 29m40.399s 00:34:38.106 user 60m57.556s 00:34:38.106 sys 9m59.818s 00:34:38.106 14:48:18 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:38.106 14:48:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:38.106 ************************************ 00:34:38.106 END TEST nvmf_tcp 00:34:38.106 ************************************ 00:34:38.106 14:48:18 -- spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]] 00:34:38.106 14:48:18 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:38.106 14:48:18 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:34:38.106 14:48:18 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:38.106 14:48:18 -- common/autotest_common.sh@10 -- # set +x 00:34:38.106 ************************************ 
00:34:38.106 START TEST spdkcli_nvmf_tcp 00:34:38.106 ************************************ 00:34:38.106 14:48:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:38.106 * Looking for test storage... 00:34:38.106 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:34:38.106 14:48:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:38.106 14:48:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:34:38.106 14:48:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:38.367 14:48:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:38.367 14:48:18 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:38.367 14:48:18 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:38.367 14:48:18 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:38.367 14:48:18 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:34:38.367 14:48:18 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:34:38.367 14:48:18 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:34:38.367 14:48:18 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:34:38.367 14:48:18 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:34:38.367 14:48:18 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:34:38.367 14:48:18 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:34:38.367 14:48:18 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:38.367 14:48:18 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:34:38.367 14:48:18 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:34:38.367 14:48:18 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:38.367 14:48:18 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < 
(ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:38.367 14:48:18 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:34:38.367 14:48:18 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:34:38.367 14:48:18 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:38.367 14:48:18 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:34:38.367 14:48:18 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:34:38.367 14:48:18 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:34:38.367 14:48:18 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:34:38.367 14:48:18 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:38.367 14:48:18 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:34:38.368 14:48:18 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:34:38.368 14:48:18 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:38.368 14:48:18 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:38.368 14:48:18 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:34:38.368 14:48:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:38.368 14:48:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:38.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:38.368 --rc genhtml_branch_coverage=1 00:34:38.368 --rc genhtml_function_coverage=1 00:34:38.368 --rc genhtml_legend=1 00:34:38.368 --rc geninfo_all_blocks=1 00:34:38.368 --rc geninfo_unexecuted_blocks=1 00:34:38.368 00:34:38.368 ' 00:34:38.368 14:48:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:38.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:38.368 --rc genhtml_branch_coverage=1 00:34:38.368 --rc genhtml_function_coverage=1 00:34:38.368 --rc genhtml_legend=1 00:34:38.368 --rc geninfo_all_blocks=1 
00:34:38.368 --rc geninfo_unexecuted_blocks=1 00:34:38.368 00:34:38.368 ' 00:34:38.368 14:48:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:38.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:38.368 --rc genhtml_branch_coverage=1 00:34:38.368 --rc genhtml_function_coverage=1 00:34:38.368 --rc genhtml_legend=1 00:34:38.368 --rc geninfo_all_blocks=1 00:34:38.368 --rc geninfo_unexecuted_blocks=1 00:34:38.368 00:34:38.368 ' 00:34:38.368 14:48:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:38.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:38.368 --rc genhtml_branch_coverage=1 00:34:38.368 --rc genhtml_function_coverage=1 00:34:38.368 --rc genhtml_legend=1 00:34:38.368 --rc geninfo_all_blocks=1 00:34:38.368 --rc geninfo_unexecuted_blocks=1 00:34:38.368 00:34:38.368 ' 00:34:38.368 14:48:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:34:38.368 14:48:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:34:38.368 14:48:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:34:38.368 14:48:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:38.368 14:48:18 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:34:38.368 14:48:18 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:38.368 14:48:18 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:38.368 14:48:18 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:38.368 14:48:18 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:38.368 14:48:18 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:34:38.368 14:48:18 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:38.368 14:48:18 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:38.368 14:48:18 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:38.368 14:48:18 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:38.368 14:48:18 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:38.368 14:48:18 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:38.368 14:48:18 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:38.368 14:48:18 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:38.368 14:48:18 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:38.368 14:48:18 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:38.368 14:48:18 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:38.368 14:48:18 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:38.368 14:48:18 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:34:38.368 14:48:18 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:38.368 14:48:18 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:38.368 14:48:18 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:38.368 14:48:18 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:38.368 14:48:18 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:38.368 14:48:18 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:38.368 14:48:18 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:34:38.368 14:48:18 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:38.368 14:48:18 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:34:38.368 14:48:18 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:34:38.368 14:48:18 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:38.368 14:48:18 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:38.368 14:48:18 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:38.368 14:48:18 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:38.368 14:48:18 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:38.368 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:38.368 14:48:18 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:38.368 14:48:18 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:38.368 14:48:18 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:38.368 14:48:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:34:38.368 14:48:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:34:38.368 14:48:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:34:38.368 14:48:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:34:38.368 14:48:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:38.368 14:48:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:38.368 14:48:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:34:38.368 14:48:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3676978 00:34:38.368 14:48:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3676978 00:34:38.368 14:48:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 3676978 ']' 00:34:38.368 14:48:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:38.368 14:48:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:34:38.368 
14:48:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:38.368 14:48:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:38.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:38.368 14:48:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:38.368 14:48:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:38.368 [2024-10-14 14:48:18.962409] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:34:38.368 [2024-10-14 14:48:18.962471] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3676978 ] 00:34:38.368 [2024-10-14 14:48:19.025539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:38.368 [2024-10-14 14:48:19.065734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:38.368 [2024-10-14 14:48:19.065737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:39.308 14:48:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:39.308 14:48:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:34:39.308 14:48:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:34:39.309 14:48:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:39.309 14:48:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:39.309 14:48:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:34:39.309 14:48:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:34:39.309 14:48:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 
00:34:39.309 14:48:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:39.309 14:48:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:39.309 14:48:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:34:39.309 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:34:39.309 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:34:39.309 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:34:39.309 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:34:39.309 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:34:39.309 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:34:39.309 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:39.309 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:34:39.309 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:34:39.309 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:39.309 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:39.309 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:34:39.309 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:39.309 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 
00:34:39.309 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:34:39.309 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:39.309 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:39.309 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:39.309 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:39.309 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:34:39.309 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:34:39.309 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:39.309 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:34:39.309 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:39.309 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:34:39.309 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:34:39.309 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:34:39.309 ' 00:34:41.852 [2024-10-14 14:48:22.224162] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:42.792 [2024-10-14 14:48:23.432182] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:34:45.335 [2024-10-14 14:48:25.650821] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening 
on 127.0.0.1 port 4261 *** 00:34:47.246 [2024-10-14 14:48:27.556617] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:34:48.629 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:34:48.629 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:34:48.629 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:34:48.629 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:34:48.629 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:34:48.629 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:34:48.629 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:34:48.629 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:48.629 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:34:48.629 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:34:48.629 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:48.629 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:48.629 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:34:48.629 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:48.629 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 
'nqn.2014-08.org.spdk:cnode2', True] 00:34:48.629 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:34:48.629 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:48.629 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:48.629 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:48.629 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:48.629 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:34:48.629 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:34:48.629 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:48.629 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:34:48.629 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:48.629 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:34:48.629 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:34:48.629 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:34:48.629 14:48:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:34:48.629 14:48:29 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@730 -- # xtrace_disable 00:34:48.629 14:48:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:48.629 14:48:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:34:48.629 14:48:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:48.629 14:48:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:48.629 14:48:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:34:48.629 14:48:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:34:48.890 14:48:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:34:48.891 14:48:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:34:48.891 14:48:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:34:48.891 14:48:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:48.891 14:48:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:49.151 14:48:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:34:49.151 14:48:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:49.151 14:48:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:49.151 14:48:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:34:49.151 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:34:49.151 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts 
delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:49.151 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:34:49.151 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:34:49.151 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:34:49.151 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:34:49.151 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:49.151 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:34:49.151 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:34:49.151 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:34:49.151 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:34:49.151 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:34:49.151 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:34:49.151 ' 00:34:54.438 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:34:54.438 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:34:54.438 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:54.438 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:34:54.438 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:34:54.438 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:34:54.438 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:34:54.438 
Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:54.438 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:34:54.438 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:34:54.438 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:34:54.438 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:34:54.438 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:34:54.438 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:34:54.438 14:48:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:34:54.438 14:48:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:54.438 14:48:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:54.438 14:48:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3676978 00:34:54.438 14:48:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 3676978 ']' 00:34:54.438 14:48:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 3676978 00:34:54.438 14:48:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:34:54.438 14:48:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:54.438 14:48:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3676978 00:34:54.438 14:48:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:54.438 14:48:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:54.438 14:48:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3676978' 00:34:54.438 killing process with pid 3676978 00:34:54.438 14:48:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 3676978 00:34:54.438 14:48:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 3676978 00:34:54.438 14:48:34 
spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:34:54.438 14:48:34 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:34:54.438 14:48:34 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3676978 ']' 00:34:54.438 14:48:34 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3676978 00:34:54.438 14:48:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 3676978 ']' 00:34:54.438 14:48:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 3676978 00:34:54.438 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3676978) - No such process 00:34:54.438 14:48:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 3676978 is not found' 00:34:54.438 Process with pid 3676978 is not found 00:34:54.438 14:48:34 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:34:54.438 14:48:34 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:34:54.438 14:48:34 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:34:54.438 00:34:54.438 real 0m16.249s 00:34:54.438 user 0m33.649s 00:34:54.438 sys 0m0.725s 00:34:54.438 14:48:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:54.438 14:48:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:54.438 ************************************ 00:34:54.438 END TEST spdkcli_nvmf_tcp 00:34:54.438 ************************************ 00:34:54.438 14:48:34 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:54.438 14:48:34 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:34:54.438 14:48:34 -- common/autotest_common.sh@1107 -- # xtrace_disable 
00:34:54.438 14:48:34 -- common/autotest_common.sh@10 -- # set +x 00:34:54.438 ************************************ 00:34:54.438 START TEST nvmf_identify_passthru 00:34:54.438 ************************************ 00:34:54.438 14:48:34 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:54.438 * Looking for test storage... 00:34:54.438 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:54.438 14:48:35 nvmf_identify_passthru -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:54.438 14:48:35 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:54.438 14:48:35 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lcov --version 00:34:54.438 14:48:35 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:54.438 14:48:35 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:54.438 14:48:35 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:54.438 14:48:35 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:54.438 14:48:35 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:34:54.438 14:48:35 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:34:54.438 14:48:35 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:34:54.438 14:48:35 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:34:54.438 14:48:35 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:34:54.438 14:48:35 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:34:54.439 14:48:35 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:34:54.439 14:48:35 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:54.439 14:48:35 nvmf_identify_passthru -- scripts/common.sh@344 -- # 
case "$op" in 00:34:54.439 14:48:35 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:34:54.439 14:48:35 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:54.439 14:48:35 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:54.439 14:48:35 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:34:54.439 14:48:35 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:34:54.439 14:48:35 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:54.439 14:48:35 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:34:54.439 14:48:35 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:34:54.439 14:48:35 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:34:54.439 14:48:35 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:34:54.439 14:48:35 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:54.439 14:48:35 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:34:54.439 14:48:35 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:34:54.439 14:48:35 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:54.439 14:48:35 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:54.439 14:48:35 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:34:54.439 14:48:35 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:54.700 14:48:35 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:54.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:54.700 --rc genhtml_branch_coverage=1 00:34:54.700 --rc genhtml_function_coverage=1 00:34:54.700 --rc genhtml_legend=1 00:34:54.700 --rc geninfo_all_blocks=1 00:34:54.700 --rc geninfo_unexecuted_blocks=1 00:34:54.700 
00:34:54.700 ' 00:34:54.700 14:48:35 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:54.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:54.700 --rc genhtml_branch_coverage=1 00:34:54.700 --rc genhtml_function_coverage=1 00:34:54.700 --rc genhtml_legend=1 00:34:54.700 --rc geninfo_all_blocks=1 00:34:54.700 --rc geninfo_unexecuted_blocks=1 00:34:54.700 00:34:54.700 ' 00:34:54.700 14:48:35 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:54.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:54.700 --rc genhtml_branch_coverage=1 00:34:54.700 --rc genhtml_function_coverage=1 00:34:54.700 --rc genhtml_legend=1 00:34:54.700 --rc geninfo_all_blocks=1 00:34:54.700 --rc geninfo_unexecuted_blocks=1 00:34:54.700 00:34:54.700 ' 00:34:54.700 14:48:35 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:54.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:54.700 --rc genhtml_branch_coverage=1 00:34:54.700 --rc genhtml_function_coverage=1 00:34:54.700 --rc genhtml_legend=1 00:34:54.700 --rc geninfo_all_blocks=1 00:34:54.700 --rc geninfo_unexecuted_blocks=1 00:34:54.700 00:34:54.700 ' 00:34:54.700 14:48:35 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:54.700 14:48:35 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:34:54.700 14:48:35 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:54.700 14:48:35 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:54.700 14:48:35 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:54.700 14:48:35 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:54.700 14:48:35 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:54.700 14:48:35 nvmf_identify_passthru -- 
nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:54.700 14:48:35 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:54.700 14:48:35 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:54.700 14:48:35 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:54.700 14:48:35 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:54.700 14:48:35 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:54.700 14:48:35 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:54.700 14:48:35 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:54.700 14:48:35 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:54.700 14:48:35 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:54.700 14:48:35 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:54.700 14:48:35 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:54.700 14:48:35 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:34:54.700 14:48:35 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:54.700 14:48:35 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:54.700 14:48:35 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:54.700 14:48:35 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:54.700 14:48:35 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:54.700 14:48:35 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:54.700 14:48:35 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:34:54.700 14:48:35 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:54.700 14:48:35 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:34:54.700 14:48:35 
nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:54.700 14:48:35 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:54.700 14:48:35 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:54.700 14:48:35 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:54.700 14:48:35 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:54.700 14:48:35 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:54.700 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:54.700 14:48:35 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:54.700 14:48:35 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:54.700 14:48:35 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:54.700 14:48:35 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:54.700 14:48:35 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:34:54.700 14:48:35 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:54.700 14:48:35 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:54.700 14:48:35 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:54.700 14:48:35 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:54.700 14:48:35 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:54.700 14:48:35 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:54.700 14:48:35 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:34:54.700 14:48:35 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:54.700 14:48:35 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:34:54.700 14:48:35 nvmf_identify_passthru -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:34:54.701 14:48:35 nvmf_identify_passthru -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:54.701 14:48:35 nvmf_identify_passthru -- nvmf/common.sh@474 -- # prepare_net_devs 00:34:54.701 14:48:35 nvmf_identify_passthru -- nvmf/common.sh@436 -- # local -g is_hw=no 00:34:54.701 14:48:35 nvmf_identify_passthru -- nvmf/common.sh@438 -- # remove_spdk_ns 00:34:54.701 14:48:35 nvmf_identify_passthru -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:54.701 14:48:35 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:54.701 14:48:35 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:54.701 14:48:35 nvmf_identify_passthru -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:34:54.701 14:48:35 nvmf_identify_passthru -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:34:54.701 14:48:35 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:34:54.701 14:48:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:02.844 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:02.844 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:35:02.844 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@315 
-- # local -a pci_devs 00:35:02.844 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:02.844 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:02.844 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:02.844 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:02.844 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:35:02.844 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:02.844 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:35:02.844 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:35:02.844 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:02.845 
14:48:42 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:35:02.845 Found 0000:31:00.0 (0x8086 - 0x159b) 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:35:02.845 Found 0000:31:00.1 
(0x8086 - 0x159b) 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:35:02.845 Found net devices under 0000:31:00.0: cvl_0_0 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:02.845 14:48:42 
nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:35:02.845 Found net devices under 0000:31:00.1: cvl_0_1 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@440 -- # is_hw=yes 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:02.845 
14:48:42 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:02.845 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:35:02.845 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.642 ms 00:35:02.845 00:35:02.845 --- 10.0.0.2 ping statistics --- 00:35:02.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:02.845 rtt min/avg/max/mdev = 0.642/0.642/0.642/0.000 ms 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:02.845 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:02.845 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:35:02.845 00:35:02.845 --- 10.0.0.1 ping statistics --- 00:35:02.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:02.845 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@448 -- # return 0 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:35:02.845 14:48:42 nvmf_identify_passthru -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:35:02.845 14:48:42 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:35:02.845 14:48:42 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:02.845 14:48:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:02.845 14:48:42 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:35:02.845 
14:48:42 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # bdfs=() 00:35:02.845 14:48:42 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs 00:35:02.845 14:48:42 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:35:02.845 14:48:42 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:35:02.845 14:48:42 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # bdfs=() 00:35:02.845 14:48:42 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # local bdfs 00:35:02.845 14:48:42 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:35:02.845 14:48:42 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:35:02.845 14:48:42 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:35:02.845 14:48:42 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:35:02.845 14:48:42 nvmf_identify_passthru -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:35:02.845 14:48:42 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # echo 0000:65:00.0 00:35:02.845 14:48:42 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:35:02.845 14:48:42 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:35:02.845 14:48:42 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:35:02.845 14:48:42 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:35:02.845 14:48:42 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:35:02.845 14:48:43 nvmf_identify_passthru -- 
target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605494 00:35:02.846 14:48:43 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:35:02.846 14:48:43 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:35:02.846 14:48:43 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:35:03.417 14:48:43 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:35:03.417 14:48:43 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:35:03.417 14:48:43 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:03.417 14:48:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:03.417 14:48:43 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:35:03.417 14:48:43 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:03.417 14:48:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:03.417 14:48:43 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3684006 00:35:03.417 14:48:43 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:35:03.417 14:48:43 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:03.417 14:48:43 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3684006 00:35:03.417 14:48:43 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 3684006 ']' 00:35:03.417 14:48:43 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 
00:35:03.417 14:48:43 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:03.417 14:48:43 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:03.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:03.417 14:48:43 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:03.417 14:48:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:03.417 [2024-10-14 14:48:43.953025] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:35:03.417 [2024-10-14 14:48:43.953091] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:03.417 [2024-10-14 14:48:44.024803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:03.417 [2024-10-14 14:48:44.065365] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:03.417 [2024-10-14 14:48:44.065401] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:03.417 [2024-10-14 14:48:44.065409] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:03.417 [2024-10-14 14:48:44.065416] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:03.417 [2024-10-14 14:48:44.065422] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:03.417 [2024-10-14 14:48:44.067101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:03.417 [2024-10-14 14:48:44.067330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:03.417 [2024-10-14 14:48:44.067330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:03.417 [2024-10-14 14:48:44.067177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:04.358 14:48:44 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:04.358 14:48:44 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:35:04.358 14:48:44 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:35:04.358 14:48:44 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:04.358 14:48:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:04.358 INFO: Log level set to 20 00:35:04.358 INFO: Requests: 00:35:04.358 { 00:35:04.358 "jsonrpc": "2.0", 00:35:04.358 "method": "nvmf_set_config", 00:35:04.358 "id": 1, 00:35:04.358 "params": { 00:35:04.358 "admin_cmd_passthru": { 00:35:04.358 "identify_ctrlr": true 00:35:04.358 } 00:35:04.358 } 00:35:04.358 } 00:35:04.358 00:35:04.358 INFO: response: 00:35:04.358 { 00:35:04.358 "jsonrpc": "2.0", 00:35:04.358 "id": 1, 00:35:04.358 "result": true 00:35:04.358 } 00:35:04.358 00:35:04.358 14:48:44 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:04.358 14:48:44 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:35:04.358 14:48:44 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:04.358 14:48:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:04.358 INFO: Setting log level to 20 00:35:04.358 INFO: Setting log level to 20 00:35:04.358 INFO: Log level set to 20 00:35:04.358 INFO: Log level set to 20 00:35:04.358 
INFO: Requests: 00:35:04.358 { 00:35:04.358 "jsonrpc": "2.0", 00:35:04.358 "method": "framework_start_init", 00:35:04.358 "id": 1 00:35:04.358 } 00:35:04.358 00:35:04.358 INFO: Requests: 00:35:04.358 { 00:35:04.358 "jsonrpc": "2.0", 00:35:04.358 "method": "framework_start_init", 00:35:04.358 "id": 1 00:35:04.358 } 00:35:04.358 00:35:04.358 [2024-10-14 14:48:44.819265] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:35:04.359 INFO: response: 00:35:04.359 { 00:35:04.359 "jsonrpc": "2.0", 00:35:04.359 "id": 1, 00:35:04.359 "result": true 00:35:04.359 } 00:35:04.359 00:35:04.359 INFO: response: 00:35:04.359 { 00:35:04.359 "jsonrpc": "2.0", 00:35:04.359 "id": 1, 00:35:04.359 "result": true 00:35:04.359 } 00:35:04.359 00:35:04.359 14:48:44 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:04.359 14:48:44 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:04.359 14:48:44 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:04.359 14:48:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:04.359 INFO: Setting log level to 40 00:35:04.359 INFO: Setting log level to 40 00:35:04.359 INFO: Setting log level to 40 00:35:04.359 [2024-10-14 14:48:44.832589] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:04.359 14:48:44 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:04.359 14:48:44 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:35:04.359 14:48:44 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:04.359 14:48:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:04.359 14:48:44 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:35:04.359 14:48:44 
nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:04.359 14:48:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:04.619 Nvme0n1 00:35:04.619 14:48:45 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:04.619 14:48:45 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:35:04.619 14:48:45 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:04.619 14:48:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:04.619 14:48:45 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:04.619 14:48:45 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:35:04.619 14:48:45 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:04.619 14:48:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:04.619 14:48:45 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:04.619 14:48:45 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:04.619 14:48:45 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:04.619 14:48:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:04.619 [2024-10-14 14:48:45.221309] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:04.619 14:48:45 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:04.619 14:48:45 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:35:04.619 14:48:45 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:04.619 14:48:45 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:04.619 [ 00:35:04.619 { 00:35:04.619 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:35:04.619 "subtype": "Discovery", 00:35:04.619 "listen_addresses": [], 00:35:04.619 "allow_any_host": true, 00:35:04.619 "hosts": [] 00:35:04.619 }, 00:35:04.619 { 00:35:04.619 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:04.619 "subtype": "NVMe", 00:35:04.619 "listen_addresses": [ 00:35:04.619 { 00:35:04.619 "trtype": "TCP", 00:35:04.619 "adrfam": "IPv4", 00:35:04.619 "traddr": "10.0.0.2", 00:35:04.619 "trsvcid": "4420" 00:35:04.619 } 00:35:04.619 ], 00:35:04.619 "allow_any_host": true, 00:35:04.619 "hosts": [], 00:35:04.619 "serial_number": "SPDK00000000000001", 00:35:04.619 "model_number": "SPDK bdev Controller", 00:35:04.619 "max_namespaces": 1, 00:35:04.619 "min_cntlid": 1, 00:35:04.619 "max_cntlid": 65519, 00:35:04.619 "namespaces": [ 00:35:04.619 { 00:35:04.619 "nsid": 1, 00:35:04.619 "bdev_name": "Nvme0n1", 00:35:04.619 "name": "Nvme0n1", 00:35:04.619 "nguid": "3634473052605494002538450000002B", 00:35:04.619 "uuid": "36344730-5260-5494-0025-38450000002b" 00:35:04.619 } 00:35:04.619 ] 00:35:04.619 } 00:35:04.619 ] 00:35:04.619 14:48:45 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:04.619 14:48:45 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:04.619 14:48:45 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:35:04.619 14:48:45 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:35:04.880 14:48:45 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605494 00:35:04.880 14:48:45 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:04.880 14:48:45 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:35:04.880 14:48:45 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:35:05.141 14:48:45 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:35:05.141 14:48:45 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605494 '!=' S64GNE0R605494 ']' 00:35:05.141 14:48:45 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:35:05.141 14:48:45 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:05.141 14:48:45 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:05.141 14:48:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:05.141 14:48:45 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:05.141 14:48:45 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:35:05.141 14:48:45 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:35:05.141 14:48:45 nvmf_identify_passthru -- nvmf/common.sh@514 -- # nvmfcleanup 00:35:05.141 14:48:45 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:35:05.141 14:48:45 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:05.141 14:48:45 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:35:05.141 14:48:45 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:05.141 14:48:45 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:05.141 rmmod nvme_tcp 00:35:05.141 rmmod nvme_fabrics 00:35:05.141 rmmod nvme_keyring 00:35:05.141 14:48:45 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:05.141 14:48:45 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:35:05.141 14:48:45 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:35:05.141 14:48:45 nvmf_identify_passthru -- nvmf/common.sh@515 -- # '[' -n 3684006 ']' 00:35:05.141 14:48:45 nvmf_identify_passthru -- nvmf/common.sh@516 -- # killprocess 3684006 00:35:05.141 14:48:45 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 3684006 ']' 00:35:05.141 14:48:45 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 3684006 00:35:05.141 14:48:45 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:35:05.141 14:48:45 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:05.141 14:48:45 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3684006 00:35:05.141 14:48:45 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:05.141 14:48:45 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:05.141 14:48:45 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3684006' 00:35:05.141 killing process with pid 3684006 00:35:05.141 14:48:45 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 3684006 00:35:05.141 14:48:45 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 3684006 00:35:05.402 14:48:45 nvmf_identify_passthru -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:35:05.402 14:48:45 nvmf_identify_passthru -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:35:05.402 14:48:45 nvmf_identify_passthru -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:35:05.402 14:48:45 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:35:05.402 14:48:45 nvmf_identify_passthru -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:35:05.402 14:48:45 nvmf_identify_passthru -- 
nvmf/common.sh@789 -- # iptables-save 00:35:05.402 14:48:45 nvmf_identify_passthru -- nvmf/common.sh@789 -- # iptables-restore 00:35:05.402 14:48:46 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:05.402 14:48:46 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:05.402 14:48:46 nvmf_identify_passthru -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:05.402 14:48:46 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:05.402 14:48:46 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:07.945 14:48:48 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:07.945 00:35:07.945 real 0m13.096s 00:35:07.945 user 0m9.993s 00:35:07.945 sys 0m6.684s 00:35:07.945 14:48:48 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:07.945 14:48:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:07.945 ************************************ 00:35:07.945 END TEST nvmf_identify_passthru 00:35:07.945 ************************************ 00:35:07.945 14:48:48 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:07.945 14:48:48 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:07.945 14:48:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:07.945 14:48:48 -- common/autotest_common.sh@10 -- # set +x 00:35:07.945 ************************************ 00:35:07.945 START TEST nvmf_dif 00:35:07.945 ************************************ 00:35:07.945 14:48:48 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:07.945 * Looking for test storage... 
00:35:07.945 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:07.945 14:48:48 nvmf_dif -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:07.945 14:48:48 nvmf_dif -- common/autotest_common.sh@1691 -- # lcov --version 00:35:07.946 14:48:48 nvmf_dif -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:07.946 14:48:48 nvmf_dif -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:07.946 14:48:48 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:07.946 14:48:48 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:07.946 14:48:48 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:07.946 14:48:48 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:35:07.946 14:48:48 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:35:07.946 14:48:48 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:35:07.946 14:48:48 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:35:07.946 14:48:48 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:35:07.946 14:48:48 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:35:07.946 14:48:48 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:35:07.946 14:48:48 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:07.946 14:48:48 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:35:07.946 14:48:48 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:35:07.946 14:48:48 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:07.946 14:48:48 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:07.946 14:48:48 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:35:07.946 14:48:48 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:35:07.946 14:48:48 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:07.946 14:48:48 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:35:07.946 14:48:48 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:35:07.946 14:48:48 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:35:07.946 14:48:48 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:35:07.946 14:48:48 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:07.946 14:48:48 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:35:07.946 14:48:48 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:35:07.946 14:48:48 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:07.946 14:48:48 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:07.946 14:48:48 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:35:07.946 14:48:48 nvmf_dif -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:07.946 14:48:48 nvmf_dif -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:07.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:07.946 --rc genhtml_branch_coverage=1 00:35:07.946 --rc genhtml_function_coverage=1 00:35:07.946 --rc genhtml_legend=1 00:35:07.946 --rc geninfo_all_blocks=1 00:35:07.946 --rc geninfo_unexecuted_blocks=1 00:35:07.946 00:35:07.946 ' 00:35:07.946 14:48:48 nvmf_dif -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:07.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:07.946 --rc genhtml_branch_coverage=1 00:35:07.946 --rc genhtml_function_coverage=1 00:35:07.946 --rc genhtml_legend=1 00:35:07.946 --rc geninfo_all_blocks=1 00:35:07.946 --rc geninfo_unexecuted_blocks=1 00:35:07.946 00:35:07.946 ' 00:35:07.946 14:48:48 nvmf_dif -- common/autotest_common.sh@1705 -- # export 
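The `cmp_versions` trace above splits each version string on `.-:` into arrays and compares component by component, padding the shorter side with zeros (here concluding lcov 1.15 < 2, which enables the branch-coverage flags). The same algorithm condensed into one helper; the name `ver_lt` is mine:

```shell
#!/usr/bin/env bash
# Component-wise "less than" for dotted version strings, mirroring
# scripts/common.sh cmp_versions: split on . - : and compare each
# numeric component, with missing components counting as 0.
ver_lt() {
    local -a v1 v2
    IFS=.-: read -ra v1 <<< "$1"
    IFS=.-: read -ra v2 <<< "$2"
    local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < len; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal is not "less than"
}

ver_lt 1.15 2 && echo "1.15 < 2"   # prints "1.15 < 2"
```

Unlike naive string comparison, this gets 1.9 < 1.15 right; the sketch assumes purely numeric components (the original additionally validates each one against `^[0-9]+$`).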
'LCOV=lcov 00:35:07.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:07.946 --rc genhtml_branch_coverage=1 00:35:07.946 --rc genhtml_function_coverage=1 00:35:07.946 --rc genhtml_legend=1 00:35:07.946 --rc geninfo_all_blocks=1 00:35:07.946 --rc geninfo_unexecuted_blocks=1 00:35:07.946 00:35:07.946 ' 00:35:07.946 14:48:48 nvmf_dif -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:07.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:07.946 --rc genhtml_branch_coverage=1 00:35:07.946 --rc genhtml_function_coverage=1 00:35:07.946 --rc genhtml_legend=1 00:35:07.946 --rc geninfo_all_blocks=1 00:35:07.946 --rc geninfo_unexecuted_blocks=1 00:35:07.946 00:35:07.946 ' 00:35:07.946 14:48:48 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:07.946 14:48:48 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:35:07.946 14:48:48 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:07.946 14:48:48 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:07.946 14:48:48 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:07.946 14:48:48 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:07.946 14:48:48 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:07.946 14:48:48 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:07.946 14:48:48 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:07.946 14:48:48 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:07.946 14:48:48 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:07.946 14:48:48 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:07.946 14:48:48 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:07.946 14:48:48 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:07.946 14:48:48 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:07.946 14:48:48 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:07.946 14:48:48 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:07.946 14:48:48 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:07.946 14:48:48 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:07.946 14:48:48 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:35:07.946 14:48:48 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:07.946 14:48:48 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:07.946 14:48:48 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:07.946 14:48:48 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:07.946 14:48:48 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:07.946 14:48:48 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:07.946 14:48:48 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:35:07.946 14:48:48 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:07.946 14:48:48 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:35:07.946 14:48:48 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:07.946 14:48:48 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:07.946 14:48:48 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:07.946 14:48:48 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:07.946 14:48:48 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:07.946 14:48:48 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:07.946 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:07.946 14:48:48 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:07.946 14:48:48 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:07.946 14:48:48 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:07.946 14:48:48 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:35:07.946 14:48:48 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
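The `paths/export.sh` trace above prepends the golangci/protoc/go directories on every source, so the exported PATH visibly accumulates the same entries several times over. A small helper that keeps only the first occurrence of each entry; `dedup_path` is my name and not part of the SPDK scripts:

```shell
#!/usr/bin/env bash
# Collapse a PATH-style string so each directory appears once,
# preserving first-occurrence order (the repeated toolchain dirs
# in the export trace above would all collapse to one copy each).
dedup_path() {
    local out= seen=: dir
    local IFS=:
    for dir in $1; do
        case "$seen" in *":$dir:"*) continue ;; esac
        seen+="$dir:"
        out+="${out:+:}$dir"
    done
    printf '%s\n' "$out"
}

dedup_path "/opt/go/bin:/usr/bin:/opt/go/bin:/bin:/usr/bin"
# → /opt/go/bin:/usr/bin:/bin
```

The duplication is harmless (lookup stops at the first hit) but inflates every child environment; dedup is purely cosmetic here.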
00:35:07.946 14:48:48 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:35:07.946 14:48:48 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:35:07.946 14:48:48 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:35:07.946 14:48:48 nvmf_dif -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:35:07.946 14:48:48 nvmf_dif -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:07.946 14:48:48 nvmf_dif -- nvmf/common.sh@474 -- # prepare_net_devs 00:35:07.946 14:48:48 nvmf_dif -- nvmf/common.sh@436 -- # local -g is_hw=no 00:35:07.946 14:48:48 nvmf_dif -- nvmf/common.sh@438 -- # remove_spdk_ns 00:35:07.946 14:48:48 nvmf_dif -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:07.946 14:48:48 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:07.946 14:48:48 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:07.946 14:48:48 nvmf_dif -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:35:07.946 14:48:48 nvmf_dif -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:35:07.946 14:48:48 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:35:07.946 14:48:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:16.090 14:48:55 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:16.090 14:48:55 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:35:16.090 14:48:55 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:16.090 14:48:55 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:16.090 14:48:55 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:16.090 14:48:55 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:16.090 14:48:55 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:16.090 14:48:55 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:35:16.090 14:48:55 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:16.090 14:48:55 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:35:16.090 14:48:55 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:35:16.090 14:48:55 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:35:16.090 14:48:55 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:35:16.090 14:48:55 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:35:16.090 14:48:55 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:35:16.090 14:48:55 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:16.090 14:48:55 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:16.090 14:48:55 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:16.090 14:48:55 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:16.090 14:48:55 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:16.090 14:48:55 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:16.090 14:48:55 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:16.090 14:48:55 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:16.090 14:48:55 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:16.091 14:48:55 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:16.091 14:48:55 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:16.091 14:48:55 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:16.091 14:48:55 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:16.091 14:48:55 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:16.091 14:48:55 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:16.091 14:48:55 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:16.091 14:48:55 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:16.091 14:48:55 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:35:16.091 14:48:55 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:16.091 14:48:55 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:35:16.091 Found 0000:31:00.0 (0x8086 - 0x159b) 00:35:16.091 14:48:55 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:16.091 14:48:55 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:16.091 14:48:55 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:16.091 14:48:55 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:16.091 14:48:55 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:16.091 14:48:55 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:16.091 14:48:55 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:35:16.091 Found 0000:31:00.1 (0x8086 - 0x159b) 00:35:16.091 14:48:55 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:16.091 14:48:55 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:16.091 14:48:55 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:16.091 14:48:55 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:16.091 14:48:55 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:16.091 14:48:55 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:16.091 14:48:55 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:16.091 14:48:55 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:16.091 14:48:55 nvmf_dif -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:16.091 14:48:55 nvmf_dif -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:16.091 14:48:55 nvmf_dif -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:16.091 14:48:55 nvmf_dif -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:16.091 14:48:55 nvmf_dif -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:16.091 14:48:55 nvmf_dif -- 
nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:16.091 14:48:55 nvmf_dif -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:16.091 14:48:55 nvmf_dif -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:35:16.091 Found net devices under 0000:31:00.0: cvl_0_0 00:35:16.091 14:48:55 nvmf_dif -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:16.091 14:48:55 nvmf_dif -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:16.091 14:48:55 nvmf_dif -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:16.091 14:48:55 nvmf_dif -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:16.091 14:48:55 nvmf_dif -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:16.091 14:48:55 nvmf_dif -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:16.091 14:48:55 nvmf_dif -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:16.091 14:48:55 nvmf_dif -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:16.091 14:48:55 nvmf_dif -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:35:16.091 Found net devices under 0000:31:00.1: cvl_0_1 00:35:16.091 14:48:55 nvmf_dif -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:16.091 14:48:55 nvmf_dif -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:35:16.091 14:48:55 nvmf_dif -- nvmf/common.sh@440 -- # is_hw=yes 00:35:16.091 14:48:55 nvmf_dif -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:35:16.091 14:48:55 nvmf_dif -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:35:16.091 14:48:55 nvmf_dif -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:35:16.091 14:48:55 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:16.091 14:48:55 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:16.091 14:48:55 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:16.091 14:48:55 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:16.091 
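The discovery loop above resolves each NIC's PCI address to its kernel interface name by globbing `/sys/bus/pci/devices/<pci>/net/*` and stripping the path (yielding `cvl_0_0` for 0000:31:00.0 and `cvl_0_1` for 0000:31:00.1). A sketch of that mapping; the `SYSFS_ROOT` override is my addition so it can run against a fake tree without hardware:

```shell
#!/usr/bin/env bash
# Map a PCI address to its network interface names the way the
# trace above does: every entry under .../devices/<pci>/net/ is a
# netdev bound to that PCI function. SYSFS_ROOT is an override of
# mine for testing; the real scripts read /sys directly.
pci_to_netdevs() {
    local pci=$1 root=${SYSFS_ROOT:-/sys}
    local -a devs=("$root/bus/pci/devices/$pci/net/"*)
    # with nullglob off, an unmatched glob stays literal
    [ -e "${devs[0]}" ] || return 1
    printf '%s\n' "${devs[@]##*/}"
}

# demo against a throwaway fake sysfs tree
SYSFS_ROOT=$(mktemp -d)
mkdir -p "$SYSFS_ROOT/bus/pci/devices/0000:31:00.0/net/cvl_0_0"
pci_to_netdevs 0000:31:00.0   # → cvl_0_0
rm -rf "$SYSFS_ROOT"
```

The `${pci_net_devs[@]##*/}` expansion in the trace is the same suffix-strip used here to turn the sysfs path into a bare interface name.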
14:48:55 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:16.091 14:48:55 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:16.091 14:48:55 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:16.091 14:48:55 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:16.091 14:48:55 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:16.091 14:48:55 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:16.091 14:48:55 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:16.091 14:48:55 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:16.091 14:48:55 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:16.091 14:48:55 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:16.091 14:48:55 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:16.091 14:48:55 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:16.091 14:48:55 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:16.091 14:48:55 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:16.091 14:48:55 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:16.091 14:48:55 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:16.091 14:48:55 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:16.091 14:48:55 nvmf_dif -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:16.091 14:48:55 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:16.091 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:35:16.091 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.681 ms 00:35:16.091 00:35:16.091 --- 10.0.0.2 ping statistics --- 00:35:16.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:16.091 rtt min/avg/max/mdev = 0.681/0.681/0.681/0.000 ms 00:35:16.091 14:48:55 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:16.091 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:16.091 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:35:16.091 00:35:16.091 --- 10.0.0.1 ping statistics --- 00:35:16.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:16.091 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:35:16.091 14:48:55 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:16.091 14:48:55 nvmf_dif -- nvmf/common.sh@448 -- # return 0 00:35:16.091 14:48:55 nvmf_dif -- nvmf/common.sh@476 -- # '[' iso == iso ']' 00:35:16.091 14:48:55 nvmf_dif -- nvmf/common.sh@477 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:18.634 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:35:18.634 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:35:18.634 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:35:18.634 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:35:18.634 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:35:18.634 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:35:18.634 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:35:18.634 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:35:18.634 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:35:18.634 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:35:18.634 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:35:18.634 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:35:18.634 0000:00:01.5 (8086 0b00): Already 
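The `nvmf_tcp_init` sequence above builds the two-sided test topology: the target NIC `cvl_0_0` is moved into namespace `cvl_0_0_ns_spdk` and given 10.0.0.2/24, the initiator keeps `cvl_0_1` with 10.0.0.1/24 in the root namespace, an iptables ACCEPT rule opens TCP port 4420, and both directions are verified with `ping`. Since the real commands need root and physical NICs, here is a dry-run sketch that only prints the equivalent sequence; `plan_nvmf_tcp_init` is my name, and the addresses follow the trace:

```shell
#!/usr/bin/env bash
# Dry-run of the namespace topology from the trace above: target
# NIC goes into its own netns with the target IP, initiator NIC
# stays in the root namespace. Printing instead of executing keeps
# this runnable without root privileges or hardware.
plan_nvmf_tcp_init() {
    local tgt_if=$1 ini_if=$2 ns=$3
    cat <<EOF
ip netns add $ns
ip link set $tgt_if netns $ns
ip addr add 10.0.0.1/24 dev $ini_if
ip netns exec $ns ip addr add 10.0.0.2/24 dev $tgt_if
ip link set $ini_if up
ip netns exec $ns ip link set $tgt_if up
ip netns exec $ns ip link set lo up
iptables -I INPUT 1 -i $ini_if -p tcp --dport 4420 -j ACCEPT
EOF
}

plan_nvmf_tcp_init cvl_0_0 cvl_0_1 cvl_0_0_ns_spdk
```

Putting the target in its own namespace is what forces traffic between initiator and target through the real TCP/IP stack (and the physical link) instead of loopback, which is the point of the phy variant of this job.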
using the vfio-pci driver 00:35:18.634 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:35:18.634 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:35:18.634 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:35:18.634 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:35:18.634 14:48:59 nvmf_dif -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:18.634 14:48:59 nvmf_dif -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:35:18.634 14:48:59 nvmf_dif -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:35:18.634 14:48:59 nvmf_dif -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:18.634 14:48:59 nvmf_dif -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:35:18.634 14:48:59 nvmf_dif -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:35:18.634 14:48:59 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:35:18.634 14:48:59 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:35:18.634 14:48:59 nvmf_dif -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:35:18.634 14:48:59 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:18.634 14:48:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:18.634 14:48:59 nvmf_dif -- nvmf/common.sh@507 -- # nvmfpid=3690211 00:35:18.634 14:48:59 nvmf_dif -- nvmf/common.sh@508 -- # waitforlisten 3690211 00:35:18.634 14:48:59 nvmf_dif -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:35:18.634 14:48:59 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 3690211 ']' 00:35:18.634 14:48:59 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:18.634 14:48:59 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:18.634 14:48:59 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:35:18.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:18.634 14:48:59 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:18.634 14:48:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:18.895 [2024-10-14 14:48:59.383423] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:35:18.895 [2024-10-14 14:48:59.383484] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:18.895 [2024-10-14 14:48:59.456507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:18.895 [2024-10-14 14:48:59.498086] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:18.895 [2024-10-14 14:48:59.498121] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:18.895 [2024-10-14 14:48:59.498129] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:18.895 [2024-10-14 14:48:59.498136] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:18.895 [2024-10-14 14:48:59.498142] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:18.895 [2024-10-14 14:48:59.498761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:19.465 14:49:00 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:19.465 14:49:00 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:35:19.465 14:49:00 nvmf_dif -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:35:19.465 14:49:00 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:19.725 14:49:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:19.725 14:49:00 nvmf_dif -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:19.725 14:49:00 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:35:19.725 14:49:00 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:35:19.725 14:49:00 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:19.725 14:49:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:19.725 [2024-10-14 14:49:00.240748] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:19.725 14:49:00 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:19.725 14:49:00 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:35:19.725 14:49:00 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:19.725 14:49:00 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:19.725 14:49:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:19.725 ************************************ 00:35:19.725 START TEST fio_dif_1_default 00:35:19.725 ************************************ 00:35:19.725 14:49:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:35:19.725 14:49:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:35:19.725 14:49:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:35:19.725 14:49:00 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:35:19.725 14:49:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:35:19.725 14:49:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:35:19.725 14:49:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:19.725 14:49:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:19.725 14:49:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:19.725 bdev_null0 00:35:19.725 14:49:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:19.725 14:49:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:19.725 14:49:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:19.726 14:49:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:19.726 14:49:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:19.726 14:49:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:19.726 14:49:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:19.726 14:49:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:19.726 14:49:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:19.726 14:49:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:19.726 14:49:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:19.726 14:49:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:19.726 [2024-10-14 14:49:00.325099] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:19.726 14:49:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:19.726 14:49:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:35:19.726 14:49:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:35:19.726 14:49:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:19.726 14:49:00 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # config=() 00:35:19.726 14:49:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:19.726 14:49:00 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # local subsystem config 00:35:19.726 14:49:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:19.726 14:49:00 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:35:19.726 14:49:00 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:35:19.726 { 00:35:19.726 "params": { 00:35:19.726 "name": "Nvme$subsystem", 00:35:19.726 "trtype": "$TEST_TRANSPORT", 00:35:19.726 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:19.726 "adrfam": "ipv4", 00:35:19.726 "trsvcid": "$NVMF_PORT", 00:35:19.726 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:19.726 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:19.726 "hdgst": ${hdgst:-false}, 00:35:19.726 "ddgst": ${ddgst:-false} 00:35:19.726 }, 00:35:19.726 "method": "bdev_nvme_attach_controller" 00:35:19.726 } 00:35:19.726 EOF 00:35:19.726 )") 00:35:19.726 14:49:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:19.726 14:49:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 
00:35:19.726 14:49:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:19.726 14:49:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:35:19.726 14:49:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:19.726 14:49:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:35:19.726 14:49:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:19.726 14:49:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:35:19.726 14:49:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:19.726 14:49:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:19.726 14:49:00 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # cat 00:35:19.726 14:49:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:19.726 14:49:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:35:19.726 14:49:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:35:19.726 14:49:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:35:19.726 14:49:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:19.726 14:49:00 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # jq . 
00:35:19.726 14:49:00 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@583 -- # IFS=, 00:35:19.726 14:49:00 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:35:19.726 "params": { 00:35:19.726 "name": "Nvme0", 00:35:19.726 "trtype": "tcp", 00:35:19.726 "traddr": "10.0.0.2", 00:35:19.726 "adrfam": "ipv4", 00:35:19.726 "trsvcid": "4420", 00:35:19.726 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:19.726 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:19.726 "hdgst": false, 00:35:19.726 "ddgst": false 00:35:19.726 }, 00:35:19.726 "method": "bdev_nvme_attach_controller" 00:35:19.726 }' 00:35:19.726 14:49:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:19.726 14:49:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:19.726 14:49:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:19.726 14:49:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:19.726 14:49:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:19.726 14:49:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:19.726 14:49:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:19.726 14:49:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:19.726 14:49:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:19.726 14:49:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:20.322 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:20.322 fio-3.35 
00:35:20.322 Starting 1 thread
00:35:32.543
00:35:32.543 filename0: (groupid=0, jobs=1): err= 0: pid=3690738: Mon Oct 14 14:49:11 2024
00:35:32.543 read: IOPS=97, BW=388KiB/s (397kB/s)(3888KiB/10020msec)
00:35:32.543 slat (nsec): min=5657, max=32511, avg=6574.34, stdev=1641.07
00:35:32.543 clat (usec): min=40905, max=45854, avg=41216.55, stdev=572.31
00:35:32.543 lat (usec): min=40911, max=45886, avg=41223.12, stdev=572.63
00:35:32.543 clat percentiles (usec):
00:35:32.543 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157],
00:35:32.543 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:35:32.543 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206],
00:35:32.543 | 99.00th=[42730], 99.50th=[42730], 99.90th=[45876], 99.95th=[45876],
00:35:32.543 | 99.99th=[45876]
00:35:32.543 bw ( KiB/s): min= 384, max= 416, per=99.74%, avg=387.20, stdev= 9.85, samples=20
00:35:32.543 iops : min= 96, max= 104, avg=96.80, stdev= 2.46, samples=20
00:35:32.543 lat (msec) : 50=100.00%
00:35:32.543 cpu : usr=93.27%, sys=6.52%, ctx=14, majf=0, minf=240
00:35:32.543 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:35:32.543 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:32.543 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:32.543 issued rwts: total=972,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:32.543 latency : target=0, window=0, percentile=100.00%, depth=4
00:35:32.543
00:35:32.543 Run status group 0 (all jobs):
00:35:32.543 READ: bw=388KiB/s (397kB/s), 388KiB/s-388KiB/s (397kB/s-397kB/s), io=3888KiB (3981kB), run=10020-10020msec
00:35:32.543 14:49:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0
00:35:32.543 14:49:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub
00:35:32.543 14:49:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@"
00:35:32.543 14:49:11 nvmf_dif.fio_dif_1_default --
target/dif.sh@46 -- # destroy_subsystem 0 00:35:32.543 14:49:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:35:32.543 14:49:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:32.543 14:49:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:32.543 14:49:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:32.543 14:49:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:32.543 14:49:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:32.543 14:49:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:32.543 14:49:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:32.543 14:49:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:32.543 00:35:32.543 real 0m11.125s 00:35:32.543 user 0m27.795s 00:35:32.543 sys 0m0.946s 00:35:32.543 14:49:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:32.543 14:49:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:32.543 ************************************ 00:35:32.543 END TEST fio_dif_1_default 00:35:32.543 ************************************ 00:35:32.544 14:49:11 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:35:32.544 14:49:11 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:32.544 14:49:11 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:32.544 14:49:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:32.544 ************************************ 00:35:32.544 START TEST fio_dif_1_multi_subsystems 00:35:32.544 ************************************ 00:35:32.544 14:49:11 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:35:32.544 14:49:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:35:32.544 14:49:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:35:32.544 14:49:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:35:32.544 14:49:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:32.544 14:49:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:35:32.544 14:49:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:35:32.544 14:49:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:32.544 14:49:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:32.544 14:49:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:32.544 bdev_null0 00:35:32.544 14:49:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:32.544 14:49:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:32.544 14:49:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:32.544 14:49:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:32.544 14:49:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:32.544 14:49:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:32.544 14:49:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:32.544 14:49:11 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:35:32.544 14:49:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:32.544 14:49:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:32.544 14:49:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:32.544 14:49:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:32.544 [2024-10-14 14:49:11.529931] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:32.544 14:49:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:32.544 14:49:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:32.544 14:49:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:35:32.544 14:49:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:35:32.544 14:49:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:32.544 14:49:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:32.544 14:49:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:32.544 bdev_null1 00:35:32.544 14:49:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:32.544 14:49:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:32.544 14:49:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:32.544 14:49:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:32.544 14:49:11 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:32.544 14:49:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:32.544 14:49:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:32.544 14:49:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:32.544 14:49:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:32.544 14:49:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:32.544 14:49:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:32.544 14:49:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:32.544 14:49:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:32.544 14:49:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:35:32.544 14:49:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:35:32.544 14:49:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:32.544 14:49:11 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # config=() 00:35:32.544 14:49:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:32.544 14:49:11 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # local subsystem config 00:35:32.544 14:49:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:32.544 14:49:11 
nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:35:32.544 14:49:11 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:35:32.544 { 00:35:32.544 "params": { 00:35:32.544 "name": "Nvme$subsystem", 00:35:32.544 "trtype": "$TEST_TRANSPORT", 00:35:32.544 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:32.544 "adrfam": "ipv4", 00:35:32.544 "trsvcid": "$NVMF_PORT", 00:35:32.544 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:32.544 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:32.544 "hdgst": ${hdgst:-false}, 00:35:32.544 "ddgst": ${ddgst:-false} 00:35:32.544 }, 00:35:32.544 "method": "bdev_nvme_attach_controller" 00:35:32.544 } 00:35:32.544 EOF 00:35:32.544 )") 00:35:32.544 14:49:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:35:32.544 14:49:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:32.544 14:49:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:32.544 14:49:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:35:32.544 14:49:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:32.544 14:49:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:35:32.544 14:49:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:32.544 14:49:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:35:32.544 14:49:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:32.544 14:49:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:32.544 14:49:11 nvmf_dif.fio_dif_1_multi_subsystems -- 
nvmf/common.sh@580 -- # cat 00:35:32.544 14:49:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:32.544 14:49:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:35:32.544 14:49:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:35:32.544 14:49:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:32.544 14:49:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:32.544 14:49:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:35:32.544 14:49:11 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:35:32.544 14:49:11 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:35:32.544 { 00:35:32.544 "params": { 00:35:32.544 "name": "Nvme$subsystem", 00:35:32.544 "trtype": "$TEST_TRANSPORT", 00:35:32.544 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:32.544 "adrfam": "ipv4", 00:35:32.544 "trsvcid": "$NVMF_PORT", 00:35:32.544 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:32.544 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:32.544 "hdgst": ${hdgst:-false}, 00:35:32.544 "ddgst": ${ddgst:-false} 00:35:32.544 }, 00:35:32.544 "method": "bdev_nvme_attach_controller" 00:35:32.544 } 00:35:32.544 EOF 00:35:32.544 )") 00:35:32.544 14:49:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:35:32.544 14:49:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:32.544 14:49:11 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # cat 00:35:32.544 14:49:11 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # jq . 
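The gen_nvmf_target_json expansion traced above fills one heredoc stanza per subsystem id and joins the stanzas with commas before piping the result through `jq .`. A simplified standalone reconstruction is sketched below; it hardcodes the transport, address, port, and digest defaults seen in this run and omits the jq pretty-printing pass:

```shell
#!/usr/bin/env bash
# Simplified reconstruction (assumption: fixed tcp/10.0.0.2/4420 and
# hdgst/ddgst=false, matching this run) of nvmf/common.sh's
# gen_nvmf_target_json: one bdev_nvme_attach_controller stanza per
# subsystem id, comma-joined. The real helper reads env vars and runs jq.
gen_nvmf_target_json() {
    local sub config=()
    for sub in "${@:-0}"; do
        config+=("$(printf '{ "params": { "name": "Nvme%s", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4", "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode%s", "hostnqn": "nqn.2016-06.io.spdk:host%s", "hdgst": false, "ddgst": false }, "method": "bdev_nvme_attach_controller" }' "$sub" "$sub" "$sub")")
    done
    local IFS=,
    echo "${config[*]}"
}

gen_nvmf_target_json 0 1
```

Called with `0 1`, this produces the same two-controller config that the multi-subsystem test prints a few lines below.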
00:35:32.544 14:49:11 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@583 -- # IFS=, 00:35:32.544 14:49:11 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:35:32.544 "params": { 00:35:32.544 "name": "Nvme0", 00:35:32.544 "trtype": "tcp", 00:35:32.544 "traddr": "10.0.0.2", 00:35:32.544 "adrfam": "ipv4", 00:35:32.544 "trsvcid": "4420", 00:35:32.544 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:32.544 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:32.544 "hdgst": false, 00:35:32.544 "ddgst": false 00:35:32.544 }, 00:35:32.544 "method": "bdev_nvme_attach_controller" 00:35:32.544 },{ 00:35:32.544 "params": { 00:35:32.544 "name": "Nvme1", 00:35:32.544 "trtype": "tcp", 00:35:32.544 "traddr": "10.0.0.2", 00:35:32.544 "adrfam": "ipv4", 00:35:32.544 "trsvcid": "4420", 00:35:32.544 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:32.544 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:32.544 "hdgst": false, 00:35:32.544 "ddgst": false 00:35:32.544 }, 00:35:32.544 "method": "bdev_nvme_attach_controller" 00:35:32.544 }' 00:35:32.544 14:49:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:32.544 14:49:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:32.544 14:49:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:32.544 14:49:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:32.544 14:49:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:32.544 14:49:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:32.545 14:49:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:32.545 14:49:11 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:35:32.545 14:49:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:35:32.545 14:49:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:35:32.545 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4
00:35:32.545 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4
00:35:32.545 fio-3.35
00:35:32.545
00:35:32.545 Starting 2 threads
00:35:42.546
00:35:42.546 filename0: (groupid=0, jobs=1): err= 0: pid=3692946: Mon Oct 14 14:49:22 2024
00:35:42.546 read: IOPS=189, BW=757KiB/s (775kB/s)(7600KiB/10039msec)
00:35:42.546 slat (nsec): min=5644, max=46540, avg=6442.36, stdev=1491.73
00:35:42.546 clat (usec): min=691, max=42710, avg=21116.80, stdev=20148.01
00:35:42.546 lat (usec): min=700, max=42757, avg=21123.24, stdev=20147.98
00:35:42.546 clat percentiles (usec):
00:35:42.546 | 1.00th=[ 865], 5.00th=[ 889], 10.00th=[ 906], 20.00th=[ 930],
00:35:42.546 | 30.00th=[ 947], 40.00th=[ 963], 50.00th=[41157], 60.00th=[41157],
00:35:42.546 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206],
00:35:42.546 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730],
00:35:42.546 | 99.99th=[42730]
00:35:42.546 bw ( KiB/s): min= 704, max= 768, per=59.01%, avg=758.40, stdev=21.02, samples=20
00:35:42.546 iops : min= 176, max= 192, avg=189.60, stdev= 5.26, samples=20
00:35:42.546 lat (usec) : 750=0.42%, 1000=48.47%
00:35:42.546 lat (msec) : 2=1.00%, 50=50.11%
00:35:42.546 cpu : usr=95.61%, sys=4.11%, ctx=34, majf=0, minf=190
00:35:42.546 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:35:42.546 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:42.546 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:42.546 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:42.546 latency : target=0, window=0, percentile=100.00%, depth=4
00:35:42.546 filename1: (groupid=0, jobs=1): err= 0: pid=3692947: Mon Oct 14 14:49:22 2024
00:35:42.546 read: IOPS=132, BW=528KiB/s (541kB/s)(5296KiB/10025msec)
00:35:42.546 slat (nsec): min=5645, max=28465, avg=6512.13, stdev=1389.99
00:35:42.546 clat (usec): min=567, max=42769, avg=30268.60, stdev=18479.90
00:35:42.546 lat (usec): min=573, max=42775, avg=30275.12, stdev=18480.31
00:35:42.546 clat percentiles (usec):
00:35:42.546 | 1.00th=[ 676], 5.00th=[ 701], 10.00th=[ 709], 20.00th=[ 734],
00:35:42.546 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[42206],
00:35:42.546 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206],
00:35:42.546 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730],
00:35:42.546 | 99.99th=[42730]
00:35:42.546 bw ( KiB/s): min= 352, max= 800, per=41.02%, avg=528.00, stdev=142.54, samples=20
00:35:42.546 iops : min= 88, max= 200, avg=132.00, stdev=35.64, samples=20
00:35:42.546 lat (usec) : 750=23.79%, 1000=4.31%
00:35:42.546 lat (msec) : 50=71.90%
00:35:42.546 cpu : usr=95.74%, sys=4.05%, ctx=14, majf=0, minf=83
00:35:42.546 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:35:42.546 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:42.546 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:42.546 issued rwts: total=1324,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:42.546 latency : target=0, window=0, percentile=100.00%, depth=4
00:35:42.546
00:35:42.546 Run status group 0 (all jobs):
00:35:42.546 READ: bw=1285KiB/s (1315kB/s), 528KiB/s-757KiB/s (541kB/s-775kB/s), io=12.6MiB (13.2MB), run=10025-10039msec
00:35:42.546 14:49:22 nvmf_dif.fio_dif_1_multi_subsystems --
target/dif.sh@96 -- # destroy_subsystems 0 1 00:35:42.546 14:49:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:35:42.546 14:49:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:42.546 14:49:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:42.546 14:49:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:35:42.546 14:49:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:42.546 14:49:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.546 14:49:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:42.546 14:49:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.546 14:49:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:42.546 14:49:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.546 14:49:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:42.546 14:49:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.546 14:49:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:42.546 14:49:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:42.546 14:49:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:35:42.546 14:49:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:42.546 14:49:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.546 14:49:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 
00:35:42.546 14:49:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.546 14:49:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:42.546 14:49:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.546 14:49:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:42.546 14:49:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.546 00:35:42.546 real 0m11.461s 00:35:42.546 user 0m33.542s 00:35:42.546 sys 0m1.201s 00:35:42.546 14:49:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:42.546 14:49:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:42.546 ************************************ 00:35:42.546 END TEST fio_dif_1_multi_subsystems 00:35:42.546 ************************************ 00:35:42.546 14:49:22 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:35:42.546 14:49:22 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:42.546 14:49:22 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:42.546 14:49:22 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:42.546 ************************************ 00:35:42.546 START TEST fio_dif_rand_params 00:35:42.546 ************************************ 00:35:42.546 14:49:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:35:42.546 14:49:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:35:42.546 14:49:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:35:42.546 14:49:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:35:42.546 14:49:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 
00:35:42.546 14:49:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:35:42.546 14:49:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:35:42.546 14:49:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:35:42.546 14:49:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:35:42.546 14:49:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:42.546 14:49:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:42.546 14:49:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:42.546 14:49:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:42.546 14:49:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:42.546 14:49:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.546 14:49:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:42.546 bdev_null0 00:35:42.546 14:49:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.546 14:49:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:42.546 14:49:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.546 14:49:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:42.546 14:49:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.546 14:49:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:42.546 14:49:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.546 14:49:23 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:35:42.546 14:49:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.546 14:49:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:42.546 14:49:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.546 14:49:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:42.546 [2024-10-14 14:49:23.076626] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:42.546 14:49:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.547 14:49:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:35:42.547 14:49:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:35:42.547 14:49:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:42.547 14:49:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:35:42.547 14:49:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:42.547 14:49:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:35:42.547 14:49:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:42.547 14:49:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:35:42.547 14:49:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:35:42.547 { 00:35:42.547 "params": { 00:35:42.547 "name": "Nvme$subsystem", 00:35:42.547 "trtype": "$TEST_TRANSPORT", 00:35:42.547 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:42.547 "adrfam": 
"ipv4", 00:35:42.547 "trsvcid": "$NVMF_PORT", 00:35:42.547 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:42.547 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:42.547 "hdgst": ${hdgst:-false}, 00:35:42.547 "ddgst": ${ddgst:-false} 00:35:42.547 }, 00:35:42.547 "method": "bdev_nvme_attach_controller" 00:35:42.547 } 00:35:42.547 EOF 00:35:42.547 )") 00:35:42.547 14:49:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:42.547 14:49:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:42.547 14:49:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:42.547 14:49:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:42.547 14:49:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:42.547 14:49:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:42.547 14:49:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:42.547 14:49:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:35:42.547 14:49:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:42.547 14:49:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:42.547 14:49:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:35:42.547 14:49:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:42.547 14:49:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:42.547 14:49:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:35:42.547 14:49:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( 
file <= files )) 00:35:42.547 14:49:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:42.547 14:49:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 00:35:42.547 14:49:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:35:42.547 14:49:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:35:42.547 "params": { 00:35:42.547 "name": "Nvme0", 00:35:42.547 "trtype": "tcp", 00:35:42.547 "traddr": "10.0.0.2", 00:35:42.547 "adrfam": "ipv4", 00:35:42.547 "trsvcid": "4420", 00:35:42.547 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:42.547 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:42.547 "hdgst": false, 00:35:42.547 "ddgst": false 00:35:42.547 }, 00:35:42.547 "method": "bdev_nvme_attach_controller" 00:35:42.547 }' 00:35:42.547 14:49:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:42.547 14:49:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:42.547 14:49:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:42.547 14:49:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:42.547 14:49:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:42.547 14:49:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:42.547 14:49:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:42.547 14:49:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:42.547 14:49:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:42.547 14:49:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # 
/usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:42.807 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:42.807 ... 00:35:42.807 fio-3.35 00:35:42.807 Starting 3 threads 00:35:49.388 00:35:49.388 filename0: (groupid=0, jobs=1): err= 0: pid=3695223: Mon Oct 14 14:49:29 2024 00:35:49.388 read: IOPS=236, BW=29.5MiB/s (30.9MB/s)(149MiB/5045msec) 00:35:49.388 slat (nsec): min=5718, max=64557, avg=7677.04, stdev=2368.15 00:35:49.388 clat (usec): min=4703, max=54829, avg=12663.14, stdev=7900.96 00:35:49.388 lat (usec): min=4711, max=54836, avg=12670.82, stdev=7900.94 00:35:49.388 clat percentiles (usec): 00:35:49.388 | 1.00th=[ 6456], 5.00th=[ 7701], 10.00th=[ 8586], 20.00th=[ 9503], 00:35:49.388 | 30.00th=[10290], 40.00th=[10814], 50.00th=[11338], 60.00th=[11863], 00:35:49.388 | 70.00th=[12387], 80.00th=[13042], 90.00th=[13960], 95.00th=[15270], 00:35:49.388 | 99.00th=[51643], 99.50th=[53740], 99.90th=[54789], 99.95th=[54789], 00:35:49.388 | 99.99th=[54789] 00:35:49.388 bw ( KiB/s): min=24064, max=34816, per=32.97%, avg=30438.40, stdev=3826.61, samples=10 00:35:49.388 iops : min= 188, max= 272, avg=237.80, stdev=29.90, samples=10 00:35:49.388 lat (msec) : 10=27.37%, 20=68.68%, 50=1.60%, 100=2.35% 00:35:49.388 cpu : usr=94.85%, sys=4.92%, ctx=9, majf=0, minf=116 00:35:49.388 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:49.388 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.388 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.388 issued rwts: total=1191,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:49.388 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:49.388 filename0: (groupid=0, jobs=1): err= 0: pid=3695224: Mon Oct 14 14:49:29 2024 00:35:49.388 read: IOPS=230, BW=28.8MiB/s (30.2MB/s)(145MiB/5044msec) 00:35:49.388 slat (nsec): min=5692, max=31709, 
avg=7826.90, stdev=1917.35 00:35:49.388 clat (usec): min=5376, max=57004, avg=12964.03, stdev=7722.71 00:35:49.388 lat (usec): min=5382, max=57010, avg=12971.86, stdev=7722.84 00:35:49.388 clat percentiles (usec): 00:35:49.388 | 1.00th=[ 6718], 5.00th=[ 8160], 10.00th=[ 8979], 20.00th=[ 9896], 00:35:49.388 | 30.00th=[10421], 40.00th=[11076], 50.00th=[11731], 60.00th=[12125], 00:35:49.388 | 70.00th=[12780], 80.00th=[13435], 90.00th=[14353], 95.00th=[15401], 00:35:49.388 | 99.00th=[51643], 99.50th=[53216], 99.90th=[55837], 99.95th=[56886], 00:35:49.388 | 99.99th=[56886] 00:35:49.389 bw ( KiB/s): min=20992, max=35328, per=32.19%, avg=29721.60, stdev=4522.59, samples=10 00:35:49.389 iops : min= 164, max= 276, avg=232.20, stdev=35.33, samples=10 00:35:49.389 lat (msec) : 10=21.84%, 20=74.38%, 50=1.12%, 100=2.67% 00:35:49.389 cpu : usr=94.21%, sys=5.55%, ctx=9, majf=0, minf=133 00:35:49.389 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:49.389 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.389 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.389 issued rwts: total=1163,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:49.389 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:49.389 filename0: (groupid=0, jobs=1): err= 0: pid=3695226: Mon Oct 14 14:49:29 2024 00:35:49.389 read: IOPS=254, BW=31.9MiB/s (33.4MB/s)(161MiB/5048msec) 00:35:49.389 slat (nsec): min=5728, max=36576, avg=6943.77, stdev=1751.46 00:35:49.389 clat (usec): min=5982, max=51421, avg=11721.49, stdev=4344.24 00:35:49.389 lat (usec): min=5991, max=51427, avg=11728.43, stdev=4344.43 00:35:49.389 clat percentiles (usec): 00:35:49.389 | 1.00th=[ 7242], 5.00th=[ 8160], 10.00th=[ 8717], 20.00th=[ 9503], 00:35:49.389 | 30.00th=[10159], 40.00th=[10814], 50.00th=[11469], 60.00th=[12125], 00:35:49.389 | 70.00th=[12518], 80.00th=[13173], 90.00th=[13829], 95.00th=[14353], 00:35:49.389 | 99.00th=[47449], 
99.50th=[48497], 99.90th=[51119], 99.95th=[51643], 00:35:49.389 | 99.99th=[51643] 00:35:49.389 bw ( KiB/s): min=29952, max=36352, per=35.63%, avg=32896.00, stdev=2013.03, samples=10 00:35:49.389 iops : min= 234, max= 284, avg=257.00, stdev=15.73, samples=10 00:35:49.389 lat (msec) : 10=28.44%, 20=70.47%, 50=0.85%, 100=0.23% 00:35:49.389 cpu : usr=93.50%, sys=6.26%, ctx=14, majf=0, minf=100 00:35:49.389 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:49.389 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.389 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.389 issued rwts: total=1287,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:49.389 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:49.389 00:35:49.389 Run status group 0 (all jobs): 00:35:49.389 READ: bw=90.2MiB/s (94.5MB/s), 28.8MiB/s-31.9MiB/s (30.2MB/s-33.4MB/s), io=455MiB (477MB), run=5044-5048msec 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:49.389 
14:49:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:49.389 bdev_null0 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:49.389 [2024-10-14 14:49:29.291756] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:49.389 bdev_null1 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.389 14:49:29 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:49.389 bdev_null2 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:35:49.389 14:49:29 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:35:49.389 { 00:35:49.389 "params": { 00:35:49.389 "name": "Nvme$subsystem", 00:35:49.389 "trtype": "$TEST_TRANSPORT", 00:35:49.389 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:49.389 "adrfam": "ipv4", 00:35:49.389 "trsvcid": "$NVMF_PORT", 00:35:49.389 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:49.389 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:49.389 "hdgst": ${hdgst:-false}, 00:35:49.389 "ddgst": ${ddgst:-false} 00:35:49.389 }, 00:35:49.389 "method": "bdev_nvme_attach_controller" 00:35:49.389 } 00:35:49.389 EOF 00:35:49.389 )") 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:49.389 14:49:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:49.390 14:49:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:49.390 14:49:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:49.390 14:49:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:49.390 14:49:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:49.390 14:49:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:35:49.390 14:49:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:49.390 14:49:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:49.390 14:49:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:35:49.390 14:49:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:49.390 14:49:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:35:49.390 14:49:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:49.390 14:49:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:49.390 14:49:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:49.390 14:49:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:49.390 14:49:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:35:49.390 14:49:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:35:49.390 { 00:35:49.390 "params": { 00:35:49.390 "name": "Nvme$subsystem", 00:35:49.390 "trtype": "$TEST_TRANSPORT", 00:35:49.390 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:49.390 "adrfam": "ipv4", 00:35:49.390 "trsvcid": "$NVMF_PORT", 00:35:49.390 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:49.390 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:49.390 "hdgst": ${hdgst:-false}, 00:35:49.390 "ddgst": ${ddgst:-false} 00:35:49.390 }, 00:35:49.390 "method": "bdev_nvme_attach_controller" 00:35:49.390 } 00:35:49.390 EOF 00:35:49.390 )") 00:35:49.390 14:49:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:49.390 
14:49:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:35:49.390 14:49:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:49.390 14:49:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:49.390 14:49:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:49.390 14:49:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:49.390 14:49:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:35:49.390 14:49:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:35:49.390 { 00:35:49.390 "params": { 00:35:49.390 "name": "Nvme$subsystem", 00:35:49.390 "trtype": "$TEST_TRANSPORT", 00:35:49.390 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:49.390 "adrfam": "ipv4", 00:35:49.390 "trsvcid": "$NVMF_PORT", 00:35:49.390 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:49.390 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:49.390 "hdgst": ${hdgst:-false}, 00:35:49.390 "ddgst": ${ddgst:-false} 00:35:49.390 }, 00:35:49.390 "method": "bdev_nvme_attach_controller" 00:35:49.390 } 00:35:49.390 EOF 00:35:49.390 )") 00:35:49.390 14:49:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:35:49.390 14:49:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 
00:35:49.390 14:49:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:35:49.390 14:49:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:35:49.390 "params": { 00:35:49.390 "name": "Nvme0", 00:35:49.390 "trtype": "tcp", 00:35:49.390 "traddr": "10.0.0.2", 00:35:49.390 "adrfam": "ipv4", 00:35:49.390 "trsvcid": "4420", 00:35:49.390 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:49.390 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:49.390 "hdgst": false, 00:35:49.390 "ddgst": false 00:35:49.390 }, 00:35:49.390 "method": "bdev_nvme_attach_controller" 00:35:49.390 },{ 00:35:49.390 "params": { 00:35:49.390 "name": "Nvme1", 00:35:49.390 "trtype": "tcp", 00:35:49.390 "traddr": "10.0.0.2", 00:35:49.390 "adrfam": "ipv4", 00:35:49.390 "trsvcid": "4420", 00:35:49.390 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:49.390 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:49.390 "hdgst": false, 00:35:49.390 "ddgst": false 00:35:49.390 }, 00:35:49.390 "method": "bdev_nvme_attach_controller" 00:35:49.390 },{ 00:35:49.390 "params": { 00:35:49.390 "name": "Nvme2", 00:35:49.390 "trtype": "tcp", 00:35:49.390 "traddr": "10.0.0.2", 00:35:49.390 "adrfam": "ipv4", 00:35:49.390 "trsvcid": "4420", 00:35:49.390 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:35:49.390 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:35:49.390 "hdgst": false, 00:35:49.390 "ddgst": false 00:35:49.390 }, 00:35:49.390 "method": "bdev_nvme_attach_controller" 00:35:49.390 }' 00:35:49.390 14:49:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:49.390 14:49:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:49.390 14:49:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:49.390 14:49:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:49.390 14:49:29 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:49.390 14:49:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:49.390 14:49:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:49.390 14:49:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:49.390 14:49:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:49.390 14:49:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:49.390 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:49.390 ... 00:35:49.390 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:49.390 ... 00:35:49.390 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:49.390 ... 
00:35:49.390 fio-3.35 00:35:49.390 Starting 24 threads 00:36:01.600 00:36:01.600 filename0: (groupid=0, jobs=1): err= 0: pid=3696648: Mon Oct 14 14:49:40 2024 00:36:01.600 read: IOPS=481, BW=1925KiB/s (1971kB/s)(18.9MiB/10040msec) 00:36:01.600 slat (usec): min=4, max=116, avg=23.93, stdev=19.69 00:36:01.600 clat (usec): min=23818, max=73359, avg=33030.58, stdev=3168.90 00:36:01.600 lat (usec): min=23824, max=73367, avg=33054.51, stdev=3166.23 00:36:01.600 clat percentiles (usec): 00:36:01.600 | 1.00th=[29754], 5.00th=[31851], 10.00th=[31851], 20.00th=[32113], 00:36:01.600 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32900], 00:36:01.600 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:36:01.600 | 99.00th=[41681], 99.50th=[60556], 99.90th=[72877], 99.95th=[72877], 00:36:01.600 | 99.99th=[72877] 00:36:01.600 bw ( KiB/s): min= 1663, max= 2048, per=4.11%, avg=1924.95, stdev=104.94, samples=20 00:36:01.600 iops : min= 415, max= 512, avg=481.20, stdev=26.33, samples=20 00:36:01.600 lat (msec) : 50=99.34%, 100=0.66% 00:36:01.600 cpu : usr=99.14%, sys=0.54%, ctx=13, majf=0, minf=39 00:36:01.600 IO depths : 1=5.3%, 2=11.4%, 4=24.8%, 8=51.2%, 16=7.3%, 32=0.0%, >=64=0.0% 00:36:01.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.600 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.600 issued rwts: total=4832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:01.600 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:01.600 filename0: (groupid=0, jobs=1): err= 0: pid=3696649: Mon Oct 14 14:49:40 2024 00:36:01.600 read: IOPS=505, BW=2023KiB/s (2072kB/s)(19.9MiB/10075msec) 00:36:01.600 slat (usec): min=5, max=129, avg=17.24, stdev=15.08 00:36:01.600 clat (usec): min=7021, max=89751, avg=31483.30, stdev=5745.77 00:36:01.600 lat (usec): min=7029, max=89757, avg=31500.54, stdev=5747.41 00:36:01.600 clat percentiles (usec): 00:36:01.600 | 1.00th=[ 9503], 5.00th=[19268], 
10.00th=[27395], 20.00th=[31851], 00:36:01.600 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:36:01.600 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33817], 95.00th=[34341], 00:36:01.600 | 99.00th=[36439], 99.50th=[48497], 99.90th=[86508], 99.95th=[86508], 00:36:01.600 | 99.99th=[89654] 00:36:01.600 bw ( KiB/s): min= 1792, max= 2634, per=4.34%, avg=2030.65, stdev=194.21, samples=20 00:36:01.600 iops : min= 448, max= 658, avg=507.60, stdev=48.47, samples=20 00:36:01.600 lat (msec) : 10=1.39%, 20=4.95%, 50=93.35%, 100=0.31% 00:36:01.600 cpu : usr=99.13%, sys=0.57%, ctx=13, majf=0, minf=80 00:36:01.600 IO depths : 1=5.3%, 2=10.7%, 4=22.1%, 8=54.6%, 16=7.3%, 32=0.0%, >=64=0.0% 00:36:01.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.600 complete : 0=0.0%, 4=93.3%, 8=1.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.600 issued rwts: total=5096,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:01.600 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:01.600 filename0: (groupid=0, jobs=1): err= 0: pid=3696650: Mon Oct 14 14:49:40 2024 00:36:01.600 read: IOPS=493, BW=1974KiB/s (2021kB/s)(19.4MiB/10056msec) 00:36:01.600 slat (usec): min=5, max=125, avg=16.49, stdev=16.56 00:36:01.600 clat (usec): min=16245, max=86848, avg=32300.88, stdev=4736.23 00:36:01.600 lat (usec): min=16251, max=86877, avg=32317.37, stdev=4737.59 00:36:01.600 clat percentiles (usec): 00:36:01.600 | 1.00th=[20055], 5.00th=[23200], 10.00th=[28967], 20.00th=[32113], 00:36:01.600 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32900], 00:36:01.600 | 70.00th=[33162], 80.00th=[33424], 90.00th=[34341], 95.00th=[35390], 00:36:01.600 | 99.00th=[42206], 99.50th=[53216], 99.90th=[86508], 99.95th=[86508], 00:36:01.600 | 99.99th=[86508] 00:36:01.600 bw ( KiB/s): min= 1788, max= 2512, per=4.23%, avg=1976.95, stdev=141.13, samples=20 00:36:01.600 iops : min= 447, max= 628, avg=494.20, stdev=35.27, samples=20 00:36:01.600 lat (msec) 
: 20=0.77%, 50=98.67%, 100=0.56% 00:36:01.600 cpu : usr=98.73%, sys=0.94%, ctx=22, majf=0, minf=47 00:36:01.600 IO depths : 1=4.7%, 2=9.4%, 4=20.2%, 8=57.8%, 16=7.9%, 32=0.0%, >=64=0.0% 00:36:01.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.600 complete : 0=0.0%, 4=92.8%, 8=1.5%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.600 issued rwts: total=4962,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:01.600 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:01.600 filename0: (groupid=0, jobs=1): err= 0: pid=3696651: Mon Oct 14 14:49:40 2024 00:36:01.600 read: IOPS=480, BW=1921KiB/s (1967kB/s)(18.8MiB/10029msec) 00:36:01.600 slat (usec): min=4, max=130, avg=37.25, stdev=18.72 00:36:01.600 clat (usec): min=19704, max=87627, avg=32991.24, stdev=3606.75 00:36:01.600 lat (usec): min=19710, max=87658, avg=33028.48, stdev=3605.34 00:36:01.600 clat percentiles (usec): 00:36:01.600 | 1.00th=[31589], 5.00th=[31851], 10.00th=[32113], 20.00th=[32113], 00:36:01.600 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:36:01.600 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[34341], 00:36:01.600 | 99.00th=[35914], 99.50th=[58983], 99.90th=[87557], 99.95th=[87557], 00:36:01.600 | 99.99th=[87557] 00:36:01.600 bw ( KiB/s): min= 1654, max= 2048, per=4.10%, avg=1918.45, stdev=84.31, samples=20 00:36:01.600 iops : min= 413, max= 512, avg=479.55, stdev=21.10, samples=20 00:36:01.600 lat (msec) : 20=0.04%, 50=99.25%, 100=0.71% 00:36:01.600 cpu : usr=98.96%, sys=0.72%, ctx=16, majf=0, minf=35 00:36:01.600 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:36:01.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.600 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.600 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:01.600 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:01.600 
filename0: (groupid=0, jobs=1): err= 0: pid=3696652: Mon Oct 14 14:49:40 2024 00:36:01.600 read: IOPS=485, BW=1941KiB/s (1987kB/s)(19.0MiB/10050msec) 00:36:01.600 slat (usec): min=5, max=101, avg=22.61, stdev=17.28 00:36:01.600 clat (usec): min=17116, max=87461, avg=32795.35, stdev=3827.91 00:36:01.600 lat (usec): min=17122, max=87504, avg=32817.97, stdev=3828.97 00:36:01.600 clat percentiles (usec): 00:36:01.600 | 1.00th=[18482], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:36:01.600 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:36:01.600 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:36:01.600 | 99.00th=[35914], 99.50th=[44827], 99.90th=[87557], 99.95th=[87557], 00:36:01.600 | 99.99th=[87557] 00:36:01.600 bw ( KiB/s): min= 1792, max= 2139, per=4.16%, avg=1943.85, stdev=72.42, samples=20 00:36:01.600 iops : min= 448, max= 534, avg=485.70, stdev=17.94, samples=20 00:36:01.600 lat (msec) : 20=1.19%, 50=98.48%, 100=0.33% 00:36:01.600 cpu : usr=99.12%, sys=0.55%, ctx=13, majf=0, minf=40 00:36:01.600 IO depths : 1=5.8%, 2=11.8%, 4=24.3%, 8=51.4%, 16=6.7%, 32=0.0%, >=64=0.0% 00:36:01.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.600 complete : 0=0.0%, 4=93.9%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.600 issued rwts: total=4876,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:01.600 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:01.600 filename0: (groupid=0, jobs=1): err= 0: pid=3696653: Mon Oct 14 14:49:40 2024 00:36:01.600 read: IOPS=529, BW=2119KiB/s (2170kB/s)(20.9MiB/10089msec) 00:36:01.600 slat (usec): min=5, max=176, avg=16.66, stdev=19.31 00:36:01.600 clat (usec): min=1792, max=99637, avg=30003.12, stdev=7460.77 00:36:01.600 lat (usec): min=1810, max=99644, avg=30019.78, stdev=7463.45 00:36:01.600 clat percentiles (msec): 00:36:01.600 | 1.00th=[ 4], 5.00th=[ 15], 10.00th=[ 21], 20.00th=[ 27], 00:36:01.600 | 30.00th=[ 32], 40.00th=[ 33], 
50.00th=[ 33], 60.00th=[ 33], 00:36:01.600 | 70.00th=[ 33], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 35], 00:36:01.600 | 99.00th=[ 44], 99.50th=[ 51], 99.90th=[ 91], 99.95th=[ 101], 00:36:01.600 | 99.99th=[ 101] 00:36:01.600 bw ( KiB/s): min= 1920, max= 3216, per=4.56%, avg=2130.25, stdev=302.03, samples=20 00:36:01.600 iops : min= 480, max= 804, avg=532.45, stdev=75.51, samples=20 00:36:01.600 lat (msec) : 2=0.17%, 4=0.88%, 10=2.43%, 20=4.77%, 50=91.11% 00:36:01.600 lat (msec) : 100=0.64% 00:36:01.600 cpu : usr=99.08%, sys=0.61%, ctx=12, majf=0, minf=63 00:36:01.600 IO depths : 1=3.1%, 2=6.5%, 4=16.1%, 8=64.7%, 16=9.6%, 32=0.0%, >=64=0.0% 00:36:01.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.600 complete : 0=0.0%, 4=91.7%, 8=2.8%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.600 issued rwts: total=5344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:01.600 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:01.600 filename0: (groupid=0, jobs=1): err= 0: pid=3696654: Mon Oct 14 14:49:40 2024 00:36:01.600 read: IOPS=491, BW=1968KiB/s (2015kB/s)(19.3MiB/10054msec) 00:36:01.600 slat (usec): min=3, max=123, avg=23.99, stdev=19.16 00:36:01.600 clat (usec): min=13092, max=90351, avg=32330.47, stdev=5041.50 00:36:01.600 lat (usec): min=13100, max=90357, avg=32354.46, stdev=5043.90 00:36:01.600 clat percentiles (usec): 00:36:01.600 | 1.00th=[19268], 5.00th=[23200], 10.00th=[27395], 20.00th=[32113], 00:36:01.600 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32900], 00:36:01.600 | 70.00th=[32900], 80.00th=[33424], 90.00th=[33817], 95.00th=[35390], 00:36:01.600 | 99.00th=[49546], 99.50th=[54264], 99.90th=[82314], 99.95th=[90702], 00:36:01.600 | 99.99th=[90702] 00:36:01.600 bw ( KiB/s): min= 1872, max= 2192, per=4.21%, avg=1970.45, stdev=86.85, samples=20 00:36:01.600 iops : min= 468, max= 548, avg=492.50, stdev=21.59, samples=20 00:36:01.600 lat (msec) : 20=1.48%, 50=97.57%, 100=0.95% 00:36:01.600 cpu : 
usr=98.65%, sys=1.01%, ctx=16, majf=0, minf=40 00:36:01.600 IO depths : 1=3.7%, 2=7.6%, 4=17.9%, 8=61.5%, 16=9.2%, 32=0.0%, >=64=0.0% 00:36:01.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.600 complete : 0=0.0%, 4=92.0%, 8=2.7%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.600 issued rwts: total=4946,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:01.600 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:01.600 filename0: (groupid=0, jobs=1): err= 0: pid=3696655: Mon Oct 14 14:49:40 2024 00:36:01.600 read: IOPS=481, BW=1924KiB/s (1970kB/s)(18.9MiB/10037msec) 00:36:01.600 slat (usec): min=5, max=111, avg=31.31, stdev=18.50 00:36:01.600 clat (usec): min=17881, max=87799, avg=33000.48, stdev=3973.12 00:36:01.600 lat (usec): min=17891, max=87815, avg=33031.79, stdev=3973.31 00:36:01.600 clat percentiles (usec): 00:36:01.600 | 1.00th=[24249], 5.00th=[31851], 10.00th=[32113], 20.00th=[32113], 00:36:01.600 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32900], 00:36:01.600 | 70.00th=[32900], 80.00th=[33424], 90.00th=[33817], 95.00th=[34866], 00:36:01.600 | 99.00th=[45351], 99.50th=[52167], 99.90th=[87557], 99.95th=[87557], 00:36:01.600 | 99.99th=[87557] 00:36:01.600 bw ( KiB/s): min= 1631, max= 2048, per=4.11%, avg=1921.70, stdev=89.08, samples=20 00:36:01.600 iops : min= 407, max= 512, avg=480.35, stdev=22.35, samples=20 00:36:01.600 lat (msec) : 20=0.17%, 50=99.11%, 100=0.72% 00:36:01.600 cpu : usr=98.92%, sys=0.76%, ctx=13, majf=0, minf=66 00:36:01.600 IO depths : 1=4.5%, 2=10.5%, 4=24.2%, 8=52.8%, 16=8.0%, 32=0.0%, >=64=0.0% 00:36:01.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.601 complete : 0=0.0%, 4=94.0%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.601 issued rwts: total=4828,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:01.601 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:01.601 filename1: (groupid=0, jobs=1): err= 0: pid=3696656: Mon 
Oct 14 14:49:40 2024 00:36:01.601 read: IOPS=480, BW=1924KiB/s (1970kB/s)(18.8MiB/10025msec) 00:36:01.601 slat (usec): min=5, max=110, avg=24.95, stdev=17.48 00:36:01.601 clat (usec): min=20523, max=87717, avg=33092.33, stdev=3556.24 00:36:01.601 lat (usec): min=20529, max=87728, avg=33117.28, stdev=3555.89 00:36:01.601 clat percentiles (usec): 00:36:01.601 | 1.00th=[29492], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:36:01.601 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:36:01.601 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34866], 00:36:01.601 | 99.00th=[39060], 99.50th=[54789], 99.90th=[87557], 99.95th=[87557], 00:36:01.601 | 99.99th=[87557] 00:36:01.601 bw ( KiB/s): min= 1664, max= 2048, per=4.11%, avg=1921.75, stdev=91.41, samples=20 00:36:01.601 iops : min= 416, max= 512, avg=480.40, stdev=22.91, samples=20 00:36:01.601 lat (msec) : 50=99.34%, 100=0.66% 00:36:01.601 cpu : usr=98.79%, sys=0.88%, ctx=14, majf=0, minf=62 00:36:01.601 IO depths : 1=0.8%, 2=6.9%, 4=24.7%, 8=55.8%, 16=11.7%, 32=0.0%, >=64=0.0% 00:36:01.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.601 complete : 0=0.0%, 4=94.3%, 8=0.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.601 issued rwts: total=4822,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:01.601 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:01.601 filename1: (groupid=0, jobs=1): err= 0: pid=3696657: Mon Oct 14 14:49:40 2024 00:36:01.601 read: IOPS=490, BW=1963KiB/s (2010kB/s)(19.2MiB/10034msec) 00:36:01.601 slat (usec): min=5, max=106, avg=25.92, stdev=17.05 00:36:01.601 clat (usec): min=19286, max=87559, avg=32392.41, stdev=4649.14 00:36:01.601 lat (usec): min=19297, max=87590, avg=32418.33, stdev=4651.57 00:36:01.601 clat percentiles (usec): 00:36:01.601 | 1.00th=[21365], 5.00th=[24511], 10.00th=[28443], 20.00th=[32113], 00:36:01.601 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:36:01.601 | 
70.00th=[32900], 80.00th=[33424], 90.00th=[33817], 95.00th=[35390], 00:36:01.601 | 99.00th=[45351], 99.50th=[51643], 99.90th=[87557], 99.95th=[87557], 00:36:01.601 | 99.99th=[87557] 00:36:01.601 bw ( KiB/s): min= 1641, max= 2288, per=4.19%, avg=1961.00, stdev=124.48, samples=20 00:36:01.601 iops : min= 410, max= 572, avg=490.20, stdev=31.13, samples=20 00:36:01.601 lat (msec) : 20=0.32%, 50=98.86%, 100=0.81% 00:36:01.601 cpu : usr=98.93%, sys=0.74%, ctx=17, majf=0, minf=35 00:36:01.601 IO depths : 1=4.6%, 2=9.6%, 4=21.0%, 8=56.5%, 16=8.3%, 32=0.0%, >=64=0.0% 00:36:01.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.601 complete : 0=0.0%, 4=93.0%, 8=1.6%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.601 issued rwts: total=4924,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:01.601 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:01.601 filename1: (groupid=0, jobs=1): err= 0: pid=3696658: Mon Oct 14 14:49:40 2024 00:36:01.601 read: IOPS=480, BW=1924KiB/s (1970kB/s)(18.9MiB/10050msec) 00:36:01.601 slat (usec): min=5, max=105, avg=27.36, stdev=18.24 00:36:01.601 clat (msec): min=17, max=104, avg=32.98, stdev= 4.82 00:36:01.601 lat (msec): min=17, max=104, avg=33.01, stdev= 4.82 00:36:01.601 clat percentiles (msec): 00:36:01.601 | 1.00th=[ 21], 5.00th=[ 31], 10.00th=[ 32], 20.00th=[ 33], 00:36:01.601 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:36:01.601 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 36], 00:36:01.601 | 99.00th=[ 52], 99.50th=[ 58], 99.90th=[ 105], 99.95th=[ 105], 00:36:01.601 | 99.99th=[ 105] 00:36:01.601 bw ( KiB/s): min= 1728, max= 2112, per=4.13%, avg=1930.55, stdev=84.32, samples=20 00:36:01.601 iops : min= 432, max= 528, avg=482.60, stdev=21.15, samples=20 00:36:01.601 lat (msec) : 20=0.79%, 50=97.85%, 100=1.24%, 250=0.12% 00:36:01.601 cpu : usr=98.91%, sys=0.77%, ctx=13, majf=0, minf=53 00:36:01.601 IO depths : 1=0.1%, 2=5.0%, 4=20.8%, 8=61.1%, 16=13.1%, 32=0.0%, 
>=64=0.0% 00:36:01.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.601 complete : 0=0.0%, 4=93.4%, 8=1.6%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.601 issued rwts: total=4834,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:01.601 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:01.601 filename1: (groupid=0, jobs=1): err= 0: pid=3696659: Mon Oct 14 14:49:40 2024 00:36:01.601 read: IOPS=482, BW=1928KiB/s (1974kB/s)(18.9MiB/10057msec) 00:36:01.601 slat (usec): min=4, max=101, avg=21.79, stdev=14.63 00:36:01.601 clat (usec): min=16575, max=89261, avg=33008.56, stdev=3428.79 00:36:01.601 lat (usec): min=16590, max=89284, avg=33030.35, stdev=3428.98 00:36:01.601 clat percentiles (usec): 00:36:01.601 | 1.00th=[31065], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:36:01.601 | 30.00th=[32375], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:36:01.601 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:36:01.601 | 99.00th=[35914], 99.50th=[47449], 99.90th=[86508], 99.95th=[86508], 00:36:01.601 | 99.99th=[89654] 00:36:01.601 bw ( KiB/s): min= 1664, max= 2048, per=4.13%, avg=1931.35, stdev=81.87, samples=20 00:36:01.601 iops : min= 416, max= 512, avg=482.80, stdev=20.41, samples=20 00:36:01.601 lat (msec) : 20=0.19%, 50=99.48%, 100=0.33% 00:36:01.601 cpu : usr=98.65%, sys=1.02%, ctx=16, majf=0, minf=36 00:36:01.601 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0% 00:36:01.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.601 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.601 issued rwts: total=4848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:01.601 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:01.601 filename1: (groupid=0, jobs=1): err= 0: pid=3696660: Mon Oct 14 14:49:40 2024 00:36:01.601 read: IOPS=505, BW=2020KiB/s (2069kB/s)(19.7MiB/10001msec) 00:36:01.601 slat (usec): min=5, 
max=121, avg=16.31, stdev=17.11 00:36:01.601 clat (usec): min=6214, max=60526, avg=31557.49, stdev=4781.78 00:36:01.601 lat (usec): min=6225, max=60532, avg=31573.80, stdev=4783.55 00:36:01.601 clat percentiles (usec): 00:36:01.601 | 1.00th=[ 9372], 5.00th=[22152], 10.00th=[26346], 20.00th=[32113], 00:36:01.601 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:36:01.601 | 70.00th=[32900], 80.00th=[33424], 90.00th=[33817], 95.00th=[34866], 00:36:01.601 | 99.00th=[39060], 99.50th=[42206], 99.90th=[60556], 99.95th=[60556], 00:36:01.601 | 99.99th=[60556] 00:36:01.601 bw ( KiB/s): min= 1916, max= 2240, per=4.27%, avg=1997.37, stdev=101.18, samples=19 00:36:01.601 iops : min= 479, max= 560, avg=499.26, stdev=25.25, samples=19 00:36:01.601 lat (msec) : 10=1.25%, 20=2.14%, 50=96.44%, 100=0.18% 00:36:01.601 cpu : usr=99.00%, sys=0.56%, ctx=87, majf=0, minf=73 00:36:01.601 IO depths : 1=4.7%, 2=9.5%, 4=20.4%, 8=57.5%, 16=7.9%, 32=0.0%, >=64=0.0% 00:36:01.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.601 complete : 0=0.0%, 4=92.8%, 8=1.5%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.601 issued rwts: total=5051,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:01.601 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:01.601 filename1: (groupid=0, jobs=1): err= 0: pid=3696661: Mon Oct 14 14:49:40 2024 00:36:01.601 read: IOPS=482, BW=1930KiB/s (1976kB/s)(18.9MiB/10048msec) 00:36:01.601 slat (usec): min=5, max=131, avg=26.89, stdev=22.23 00:36:01.601 clat (usec): min=17534, max=86848, avg=32921.77, stdev=3529.54 00:36:01.601 lat (usec): min=17540, max=86856, avg=32948.66, stdev=3529.35 00:36:01.601 clat percentiles (usec): 00:36:01.601 | 1.00th=[27657], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:36:01.601 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32900], 00:36:01.601 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:36:01.601 | 99.00th=[40633], 
99.50th=[42730], 99.90th=[86508], 99.95th=[86508], 00:36:01.601 | 99.99th=[86508] 00:36:01.601 bw ( KiB/s): min= 1664, max= 2052, per=4.13%, avg=1932.95, stdev=81.94, samples=20 00:36:01.601 iops : min= 416, max= 513, avg=483.05, stdev=20.45, samples=20 00:36:01.601 lat (msec) : 20=0.58%, 50=99.09%, 100=0.33% 00:36:01.601 cpu : usr=98.92%, sys=0.68%, ctx=95, majf=0, minf=51 00:36:01.601 IO depths : 1=6.1%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:36:01.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.601 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.601 issued rwts: total=4848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:01.601 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:01.601 filename1: (groupid=0, jobs=1): err= 0: pid=3696662: Mon Oct 14 14:49:40 2024 00:36:01.601 read: IOPS=480, BW=1921KiB/s (1968kB/s)(18.8MiB/10026msec) 00:36:01.601 slat (usec): min=5, max=177, avg=39.32, stdev=25.32 00:36:01.601 clat (usec): min=23956, max=87697, avg=32894.01, stdev=3551.14 00:36:01.601 lat (usec): min=23965, max=87739, avg=32933.33, stdev=3550.34 00:36:01.601 clat percentiles (usec): 00:36:01.601 | 1.00th=[31589], 5.00th=[31851], 10.00th=[31851], 20.00th=[32113], 00:36:01.601 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:36:01.601 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[34341], 00:36:01.601 | 99.00th=[35390], 99.50th=[55313], 99.90th=[87557], 99.95th=[87557], 00:36:01.601 | 99.99th=[87557] 00:36:01.601 bw ( KiB/s): min= 1664, max= 2048, per=4.10%, avg=1919.15, stdev=92.51, samples=20 00:36:01.601 iops : min= 416, max= 512, avg=479.75, stdev=23.08, samples=20 00:36:01.601 lat (msec) : 50=99.34%, 100=0.66% 00:36:01.601 cpu : usr=99.27%, sys=0.37%, ctx=92, majf=0, minf=50 00:36:01.601 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:01.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:36:01.601 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.601 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:01.601 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:01.601 filename1: (groupid=0, jobs=1): err= 0: pid=3696663: Mon Oct 14 14:49:40 2024 00:36:01.601 read: IOPS=481, BW=1927KiB/s (1973kB/s)(18.9MiB/10060msec) 00:36:01.601 slat (usec): min=4, max=159, avg=27.24, stdev=22.80 00:36:01.601 clat (msec): min=10, max=110, avg=32.95, stdev= 4.87 00:36:01.601 lat (msec): min=10, max=110, avg=32.97, stdev= 4.87 00:36:01.601 clat percentiles (msec): 00:36:01.601 | 1.00th=[ 21], 5.00th=[ 29], 10.00th=[ 32], 20.00th=[ 33], 00:36:01.601 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:36:01.601 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 37], 00:36:01.601 | 99.00th=[ 54], 99.50th=[ 61], 99.90th=[ 89], 99.95th=[ 89], 00:36:01.601 | 99.99th=[ 111] 00:36:01.601 bw ( KiB/s): min= 1776, max= 2052, per=4.13%, avg=1932.10, stdev=65.71, samples=20 00:36:01.601 iops : min= 444, max= 513, avg=482.95, stdev=16.44, samples=20 00:36:01.601 lat (msec) : 20=0.72%, 50=98.12%, 100=1.11%, 250=0.04% 00:36:01.601 cpu : usr=98.65%, sys=0.89%, ctx=138, majf=0, minf=33 00:36:01.601 IO depths : 1=5.0%, 2=10.4%, 4=22.4%, 8=54.7%, 16=7.6%, 32=0.0%, >=64=0.0% 00:36:01.601 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.602 complete : 0=0.0%, 4=93.4%, 8=0.8%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.602 issued rwts: total=4846,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:01.602 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:01.602 filename2: (groupid=0, jobs=1): err= 0: pid=3696664: Mon Oct 14 14:49:40 2024 00:36:01.602 read: IOPS=498, BW=1994KiB/s (2042kB/s)(19.6MiB/10072msec) 00:36:01.602 slat (usec): min=5, max=156, avg=28.87, stdev=26.40 00:36:01.602 clat (usec): min=6100, max=77574, avg=31785.29, stdev=5059.29 
00:36:01.602 lat (usec): min=6110, max=77581, avg=31814.16, stdev=5062.52 00:36:01.602 clat percentiles (usec): 00:36:01.602 | 1.00th=[10814], 5.00th=[21627], 10.00th=[27919], 20.00th=[31851], 00:36:01.602 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:36:01.602 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[35390], 00:36:01.602 | 99.00th=[44827], 99.50th=[55837], 99.90th=[77071], 99.95th=[77071], 00:36:01.602 | 99.99th=[77071] 00:36:01.602 bw ( KiB/s): min= 1840, max= 2176, per=4.28%, avg=2001.05, stdev=99.29, samples=20 00:36:01.602 iops : min= 460, max= 544, avg=500.15, stdev=24.72, samples=20 00:36:01.602 lat (msec) : 10=0.86%, 20=2.07%, 50=96.46%, 100=0.62% 00:36:01.602 cpu : usr=99.03%, sys=0.66%, ctx=27, majf=0, minf=44 00:36:01.602 IO depths : 1=4.5%, 2=9.0%, 4=19.5%, 8=58.6%, 16=8.4%, 32=0.0%, >=64=0.0% 00:36:01.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.602 complete : 0=0.0%, 4=92.7%, 8=1.9%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.602 issued rwts: total=5022,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:01.602 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:01.602 filename2: (groupid=0, jobs=1): err= 0: pid=3696665: Mon Oct 14 14:49:40 2024 00:36:01.602 read: IOPS=496, BW=1985KiB/s (2033kB/s)(19.5MiB/10054msec) 00:36:01.602 slat (usec): min=4, max=151, avg=25.62, stdev=23.10 00:36:01.602 clat (usec): min=13559, max=87221, avg=32023.22, stdev=4824.62 00:36:01.602 lat (usec): min=13566, max=87227, avg=32048.84, stdev=4827.72 00:36:01.602 clat percentiles (usec): 00:36:01.602 | 1.00th=[20579], 5.00th=[22676], 10.00th=[25822], 20.00th=[31851], 00:36:01.602 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:36:01.602 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[35390], 00:36:01.602 | 99.00th=[48497], 99.50th=[52691], 99.90th=[87557], 99.95th=[87557], 00:36:01.602 | 99.99th=[87557] 00:36:01.602 bw ( KiB/s): min= 1836, 
max= 2432, per=4.25%, avg=1988.15, stdev=135.44, samples=20 00:36:01.602 iops : min= 459, max= 608, avg=497.00, stdev=33.84, samples=20 00:36:01.602 lat (msec) : 20=0.52%, 50=98.78%, 100=0.70% 00:36:01.602 cpu : usr=98.88%, sys=0.72%, ctx=147, majf=0, minf=40 00:36:01.602 IO depths : 1=4.3%, 2=9.1%, 4=20.1%, 8=57.9%, 16=8.5%, 32=0.0%, >=64=0.0% 00:36:01.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.602 complete : 0=0.0%, 4=92.9%, 8=1.6%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.602 issued rwts: total=4990,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:01.602 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:01.602 filename2: (groupid=0, jobs=1): err= 0: pid=3696666: Mon Oct 14 14:49:40 2024 00:36:01.602 read: IOPS=493, BW=1974KiB/s (2022kB/s)(19.4MiB/10064msec) 00:36:01.602 slat (usec): min=5, max=115, avg=12.42, stdev=10.51 00:36:01.602 clat (usec): min=7557, max=73193, avg=32317.15, stdev=4195.79 00:36:01.602 lat (usec): min=7564, max=73204, avg=32329.57, stdev=4195.86 00:36:01.602 clat percentiles (usec): 00:36:01.602 | 1.00th=[16909], 5.00th=[25822], 10.00th=[31065], 20.00th=[32113], 00:36:01.602 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32900], 00:36:01.602 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34866], 00:36:01.602 | 99.00th=[39060], 99.50th=[45351], 99.90th=[72877], 99.95th=[72877], 00:36:01.602 | 99.99th=[72877] 00:36:01.602 bw ( KiB/s): min= 1788, max= 2112, per=4.23%, avg=1979.40, stdev=82.45, samples=20 00:36:01.602 iops : min= 447, max= 528, avg=494.85, stdev=20.61, samples=20 00:36:01.602 lat (msec) : 10=0.70%, 20=1.51%, 50=97.38%, 100=0.40% 00:36:01.602 cpu : usr=99.03%, sys=0.68%, ctx=30, majf=0, minf=61 00:36:01.602 IO depths : 1=4.8%, 2=9.8%, 4=21.2%, 8=56.4%, 16=7.8%, 32=0.0%, >=64=0.0% 00:36:01.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.602 complete : 0=0.0%, 4=93.1%, 8=1.2%, 16=5.7%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:36:01.602 issued rwts: total=4967,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:01.602 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:01.602 filename2: (groupid=0, jobs=1): err= 0: pid=3696667: Mon Oct 14 14:49:40 2024 00:36:01.602 read: IOPS=489, BW=1957KiB/s (2004kB/s)(19.2MiB/10047msec) 00:36:01.602 slat (usec): min=5, max=172, avg=28.78, stdev=22.84 00:36:01.602 clat (usec): min=12097, max=87902, avg=32435.77, stdev=5091.44 00:36:01.602 lat (usec): min=12105, max=87911, avg=32464.56, stdev=5093.62 00:36:01.602 clat percentiles (usec): 00:36:01.602 | 1.00th=[19268], 5.00th=[23987], 10.00th=[29754], 20.00th=[32113], 00:36:01.602 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:36:01.602 | 70.00th=[32900], 80.00th=[33424], 90.00th=[33817], 95.00th=[35390], 00:36:01.602 | 99.00th=[52167], 99.50th=[56886], 99.90th=[87557], 99.95th=[87557], 00:36:01.602 | 99.99th=[87557] 00:36:01.602 bw ( KiB/s): min= 1664, max= 2192, per=4.19%, avg=1958.40, stdev=109.41, samples=20 00:36:01.602 iops : min= 416, max= 548, avg=489.45, stdev=27.18, samples=20 00:36:01.602 lat (msec) : 20=1.63%, 50=97.15%, 100=1.22% 00:36:01.602 cpu : usr=99.03%, sys=0.64%, ctx=58, majf=0, minf=44 00:36:01.602 IO depths : 1=4.2%, 2=9.1%, 4=20.9%, 8=57.4%, 16=8.4%, 32=0.0%, >=64=0.0% 00:36:01.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.602 complete : 0=0.0%, 4=93.0%, 8=1.3%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.602 issued rwts: total=4916,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:01.602 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:01.602 filename2: (groupid=0, jobs=1): err= 0: pid=3696668: Mon Oct 14 14:49:40 2024 00:36:01.602 read: IOPS=483, BW=1934KiB/s (1980kB/s)(18.9MiB/10032msec) 00:36:01.602 slat (usec): min=5, max=149, avg=29.88, stdev=20.84 00:36:01.602 clat (usec): min=19192, max=86979, avg=32804.93, stdev=4039.86 00:36:01.602 lat (usec): min=19202, max=86989, avg=32834.81, 
stdev=4040.55 00:36:01.602 clat percentiles (usec): 00:36:01.602 | 1.00th=[22152], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113], 00:36:01.602 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:36:01.602 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[34341], 00:36:01.602 | 99.00th=[47449], 99.50th=[62129], 99.90th=[86508], 99.95th=[86508], 00:36:01.602 | 99.99th=[86508] 00:36:01.602 bw ( KiB/s): min= 1667, max= 2048, per=4.13%, avg=1932.90, stdev=90.31, samples=20 00:36:01.602 iops : min= 416, max= 512, avg=483.15, stdev=22.65, samples=20 00:36:01.602 lat (msec) : 20=0.16%, 50=99.09%, 100=0.74% 00:36:01.602 cpu : usr=98.88%, sys=0.76%, ctx=63, majf=0, minf=46 00:36:01.602 IO depths : 1=5.2%, 2=11.2%, 4=24.1%, 8=52.1%, 16=7.3%, 32=0.0%, >=64=0.0% 00:36:01.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.602 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.602 issued rwts: total=4850,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:01.602 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:01.602 filename2: (groupid=0, jobs=1): err= 0: pid=3696669: Mon Oct 14 14:49:40 2024 00:36:01.602 read: IOPS=480, BW=1920KiB/s (1966kB/s)(18.8MiB/10033msec) 00:36:01.602 slat (usec): min=4, max=141, avg=34.23, stdev=23.89 00:36:01.602 clat (usec): min=25145, max=86907, avg=32988.56, stdev=3530.50 00:36:01.602 lat (usec): min=25155, max=86917, avg=33022.80, stdev=3529.19 00:36:01.602 clat percentiles (usec): 00:36:01.602 | 1.00th=[31327], 5.00th=[31851], 10.00th=[31851], 20.00th=[32113], 00:36:01.602 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:36:01.602 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[34341], 00:36:01.602 | 99.00th=[35914], 99.50th=[55313], 99.90th=[86508], 99.95th=[86508], 00:36:01.602 | 99.99th=[86508] 00:36:01.602 bw ( KiB/s): min= 1664, max= 2048, per=4.10%, avg=1919.15, stdev=92.51, samples=20 00:36:01.602 
iops : min= 416, max= 512, avg=479.75, stdev=23.08, samples=20 00:36:01.602 lat (msec) : 50=99.34%, 100=0.66% 00:36:01.602 cpu : usr=98.98%, sys=0.68%, ctx=50, majf=0, minf=38 00:36:01.602 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:36:01.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.602 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.602 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:01.602 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:01.602 filename2: (groupid=0, jobs=1): err= 0: pid=3696670: Mon Oct 14 14:49:40 2024 00:36:01.602 read: IOPS=485, BW=1940KiB/s (1987kB/s)(19.0MiB/10032msec) 00:36:01.602 slat (usec): min=5, max=159, avg=28.72, stdev=23.56 00:36:01.602 clat (usec): min=12442, max=78255, avg=32714.46, stdev=5190.80 00:36:01.602 lat (usec): min=12455, max=78264, avg=32743.18, stdev=5190.93 00:36:01.602 clat percentiles (usec): 00:36:01.602 | 1.00th=[18744], 5.00th=[25035], 10.00th=[28705], 20.00th=[31851], 00:36:01.602 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:36:01.602 | 70.00th=[32900], 80.00th=[33424], 90.00th=[34341], 95.00th=[40633], 00:36:01.602 | 99.00th=[50594], 99.50th=[54789], 99.90th=[78119], 99.95th=[78119], 00:36:01.602 | 99.99th=[78119] 00:36:01.602 bw ( KiB/s): min= 1808, max= 2048, per=4.15%, avg=1941.70, stdev=63.21, samples=20 00:36:01.602 iops : min= 452, max= 512, avg=485.35, stdev=15.79, samples=20 00:36:01.602 lat (msec) : 20=1.36%, 50=97.25%, 100=1.40% 00:36:01.602 cpu : usr=99.05%, sys=0.64%, ctx=13, majf=0, minf=34 00:36:01.602 IO depths : 1=3.4%, 2=7.1%, 4=16.7%, 8=62.9%, 16=9.9%, 32=0.0%, >=64=0.0% 00:36:01.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.602 complete : 0=0.0%, 4=92.0%, 8=3.0%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.602 issued rwts: total=4866,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:01.602 
latency : target=0, window=0, percentile=100.00%, depth=16 00:36:01.602 filename2: (groupid=0, jobs=1): err= 0: pid=3696671: Mon Oct 14 14:49:40 2024 00:36:01.602 read: IOPS=480, BW=1924KiB/s (1970kB/s)(18.8MiB/10025msec) 00:36:01.602 slat (usec): min=5, max=193, avg=33.18, stdev=27.96 00:36:01.602 clat (usec): min=20042, max=92976, avg=32890.04, stdev=4372.16 00:36:01.602 lat (usec): min=20062, max=92999, avg=32923.21, stdev=4372.54 00:36:01.602 clat percentiles (usec): 00:36:01.602 | 1.00th=[27132], 5.00th=[31851], 10.00th=[31851], 20.00th=[32113], 00:36:01.602 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:36:01.602 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[34341], 00:36:01.602 | 99.00th=[35914], 99.50th=[76022], 99.90th=[87557], 99.95th=[87557], 00:36:01.602 | 99.99th=[92799] 00:36:01.602 bw ( KiB/s): min= 1664, max= 2048, per=4.11%, avg=1921.70, stdev=103.77, samples=20 00:36:01.602 iops : min= 416, max= 512, avg=480.35, stdev=25.98, samples=20 00:36:01.602 lat (msec) : 50=99.17%, 100=0.83% 00:36:01.602 cpu : usr=99.12%, sys=0.54%, ctx=40, majf=0, minf=45 00:36:01.602 IO depths : 1=6.1%, 2=12.3%, 4=24.7%, 8=50.5%, 16=6.4%, 32=0.0%, >=64=0.0% 00:36:01.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.603 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.603 issued rwts: total=4822,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:01.603 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:01.603 00:36:01.603 Run status group 0 (all jobs): 00:36:01.603 READ: bw=45.7MiB/s (47.9MB/s), 1920KiB/s-2119KiB/s (1966kB/s-2170kB/s), io=461MiB (483MB), run=10001-10089msec 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 
00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.603 14:49:41 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- 
# local sub 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:01.603 bdev_null0 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.603 14:49:41 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:01.603 [2024-10-14 14:49:41.186413] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:01.603 bdev_null1 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.603 
14:49:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:36:01.603 { 00:36:01.603 "params": { 00:36:01.603 "name": "Nvme$subsystem", 00:36:01.603 "trtype": "$TEST_TRANSPORT", 00:36:01.603 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:01.603 "adrfam": "ipv4", 00:36:01.603 "trsvcid": "$NVMF_PORT", 00:36:01.603 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:01.603 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:01.603 "hdgst": ${hdgst:-false}, 00:36:01.603 "ddgst": ${ddgst:-false} 00:36:01.603 }, 00:36:01.603 "method": 
"bdev_nvme_attach_controller" 00:36:01.603 } 00:36:01.603 EOF 00:36:01.603 )") 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:01.603 14:49:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:01.604 14:49:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:01.604 14:49:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:01.604 14:49:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for 
subsystem in "${@:-1}" 00:36:01.604 14:49:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:36:01.604 { 00:36:01.604 "params": { 00:36:01.604 "name": "Nvme$subsystem", 00:36:01.604 "trtype": "$TEST_TRANSPORT", 00:36:01.604 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:01.604 "adrfam": "ipv4", 00:36:01.604 "trsvcid": "$NVMF_PORT", 00:36:01.604 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:01.604 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:01.604 "hdgst": ${hdgst:-false}, 00:36:01.604 "ddgst": ${ddgst:-false} 00:36:01.604 }, 00:36:01.604 "method": "bdev_nvme_attach_controller" 00:36:01.604 } 00:36:01.604 EOF 00:36:01.604 )") 00:36:01.604 14:49:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:01.604 14:49:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:01.604 14:49:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:36:01.604 14:49:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 
00:36:01.604 14:49:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:36:01.604 14:49:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:36:01.604 "params": { 00:36:01.604 "name": "Nvme0", 00:36:01.604 "trtype": "tcp", 00:36:01.604 "traddr": "10.0.0.2", 00:36:01.604 "adrfam": "ipv4", 00:36:01.604 "trsvcid": "4420", 00:36:01.604 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:01.604 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:01.604 "hdgst": false, 00:36:01.604 "ddgst": false 00:36:01.604 }, 00:36:01.604 "method": "bdev_nvme_attach_controller" 00:36:01.604 },{ 00:36:01.604 "params": { 00:36:01.604 "name": "Nvme1", 00:36:01.604 "trtype": "tcp", 00:36:01.604 "traddr": "10.0.0.2", 00:36:01.604 "adrfam": "ipv4", 00:36:01.604 "trsvcid": "4420", 00:36:01.604 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:01.604 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:01.604 "hdgst": false, 00:36:01.604 "ddgst": false 00:36:01.604 }, 00:36:01.604 "method": "bdev_nvme_attach_controller" 00:36:01.604 }' 00:36:01.604 14:49:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:01.604 14:49:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:01.604 14:49:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:01.604 14:49:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:01.604 14:49:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:36:01.604 14:49:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:01.604 14:49:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:01.604 14:49:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:01.604 14:49:41 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:01.604 14:49:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:01.604 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:01.604 ... 00:36:01.604 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:01.604 ... 00:36:01.604 fio-3.35 00:36:01.604 Starting 4 threads 00:36:06.895 00:36:06.895 filename0: (groupid=0, jobs=1): err= 0: pid=3699118: Mon Oct 14 14:49:47 2024 00:36:06.895 read: IOPS=2114, BW=16.5MiB/s (17.3MB/s)(82.6MiB/5002msec) 00:36:06.895 slat (nsec): min=5655, max=52338, avg=9701.24, stdev=3964.86 00:36:06.895 clat (usec): min=2268, max=6711, avg=3759.52, stdev=254.11 00:36:06.895 lat (usec): min=2274, max=6720, avg=3769.23, stdev=254.23 00:36:06.895 clat percentiles (usec): 00:36:06.895 | 1.00th=[ 3130], 5.00th=[ 3392], 10.00th=[ 3523], 20.00th=[ 3589], 00:36:06.895 | 30.00th=[ 3720], 40.00th=[ 3752], 50.00th=[ 3785], 60.00th=[ 3785], 00:36:06.895 | 70.00th=[ 3785], 80.00th=[ 3818], 90.00th=[ 4080], 95.00th=[ 4146], 00:36:06.895 | 99.00th=[ 4490], 99.50th=[ 4948], 99.90th=[ 5735], 99.95th=[ 5932], 00:36:06.895 | 99.99th=[ 6718] 00:36:06.895 bw ( KiB/s): min=16592, max=17344, per=25.08%, avg=16839.11, stdev=238.28, samples=9 00:36:06.895 iops : min= 2074, max= 2168, avg=2104.89, stdev=29.78, samples=9 00:36:06.895 lat (msec) : 4=87.58%, 10=12.42% 00:36:06.895 cpu : usr=95.38%, sys=3.58%, ctx=150, majf=0, minf=41 00:36:06.895 IO depths : 1=0.1%, 2=0.1%, 4=67.3%, 8=32.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:06.895 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:06.895 complete : 0=0.0%, 4=96.3%, 8=3.7%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:36:06.895 issued rwts: total=10578,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:06.895 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:06.895 filename0: (groupid=0, jobs=1): err= 0: pid=3699119: Mon Oct 14 14:49:47 2024 00:36:06.895 read: IOPS=2128, BW=16.6MiB/s (17.4MB/s)(83.2MiB/5002msec) 00:36:06.895 slat (nsec): min=5641, max=68533, avg=6915.17, stdev=2844.58 00:36:06.895 clat (usec): min=1714, max=6745, avg=3739.06, stdev=513.68 00:36:06.895 lat (usec): min=1721, max=6751, avg=3745.98, stdev=513.49 00:36:06.895 clat percentiles (usec): 00:36:06.895 | 1.00th=[ 2769], 5.00th=[ 3064], 10.00th=[ 3163], 20.00th=[ 3425], 00:36:06.895 | 30.00th=[ 3589], 40.00th=[ 3720], 50.00th=[ 3752], 60.00th=[ 3785], 00:36:06.895 | 70.00th=[ 3785], 80.00th=[ 3818], 90.00th=[ 4113], 95.00th=[ 5014], 00:36:06.895 | 99.00th=[ 5604], 99.50th=[ 5735], 99.90th=[ 6128], 99.95th=[ 6194], 00:36:06.895 | 99.99th=[ 6718] 00:36:06.895 bw ( KiB/s): min=16448, max=17442, per=25.47%, avg=17100.67, stdev=357.90, samples=9 00:36:06.895 iops : min= 2056, max= 2180, avg=2137.56, stdev=44.71, samples=9 00:36:06.895 lat (msec) : 2=0.04%, 4=88.03%, 10=11.94% 00:36:06.895 cpu : usr=97.20%, sys=2.54%, ctx=6, majf=0, minf=72 00:36:06.895 IO depths : 1=0.1%, 2=0.5%, 4=71.5%, 8=28.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:06.895 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:06.895 complete : 0=0.0%, 4=93.0%, 8=7.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:06.895 issued rwts: total=10648,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:06.895 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:06.895 filename1: (groupid=0, jobs=1): err= 0: pid=3699120: Mon Oct 14 14:49:47 2024 00:36:06.895 read: IOPS=2055, BW=16.1MiB/s (16.8MB/s)(80.3MiB/5003msec) 00:36:06.895 slat (nsec): min=5706, max=54078, avg=9006.48, stdev=3656.10 00:36:06.895 clat (usec): min=1940, max=47647, avg=3866.85, stdev=1303.79 00:36:06.895 lat (usec): min=1949, max=47680, 
avg=3875.86, stdev=1303.88 00:36:06.895 clat percentiles (usec): 00:36:06.895 | 1.00th=[ 2999], 5.00th=[ 3359], 10.00th=[ 3490], 20.00th=[ 3621], 00:36:06.895 | 30.00th=[ 3720], 40.00th=[ 3752], 50.00th=[ 3785], 60.00th=[ 3785], 00:36:06.895 | 70.00th=[ 3818], 80.00th=[ 3851], 90.00th=[ 4113], 95.00th=[ 5145], 00:36:06.895 | 99.00th=[ 5735], 99.50th=[ 5932], 99.90th=[ 6325], 99.95th=[47449], 00:36:06.895 | 99.99th=[47449] 00:36:06.895 bw ( KiB/s): min=15152, max=16848, per=24.47%, avg=16428.44, stdev=540.86, samples=9 00:36:06.895 iops : min= 1894, max= 2106, avg=2053.56, stdev=67.61, samples=9 00:36:06.895 lat (msec) : 2=0.03%, 4=83.33%, 10=16.56%, 50=0.08% 00:36:06.895 cpu : usr=97.02%, sys=2.70%, ctx=7, majf=0, minf=27 00:36:06.895 IO depths : 1=0.1%, 2=0.1%, 4=72.7%, 8=27.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:06.895 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:06.895 complete : 0=0.0%, 4=92.2%, 8=7.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:06.895 issued rwts: total=10283,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:06.895 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:06.895 filename1: (groupid=0, jobs=1): err= 0: pid=3699121: Mon Oct 14 14:49:47 2024 00:36:06.895 read: IOPS=2095, BW=16.4MiB/s (17.2MB/s)(81.9MiB/5002msec) 00:36:06.895 slat (nsec): min=5638, max=66707, avg=8596.42, stdev=3078.87 00:36:06.895 clat (usec): min=1242, max=6263, avg=3793.23, stdev=412.63 00:36:06.895 lat (usec): min=1248, max=6269, avg=3801.82, stdev=412.44 00:36:06.895 clat percentiles (usec): 00:36:06.895 | 1.00th=[ 2868], 5.00th=[ 3294], 10.00th=[ 3490], 20.00th=[ 3589], 00:36:06.895 | 30.00th=[ 3687], 40.00th=[ 3752], 50.00th=[ 3785], 60.00th=[ 3785], 00:36:06.895 | 70.00th=[ 3818], 80.00th=[ 3851], 90.00th=[ 4113], 95.00th=[ 4293], 00:36:06.895 | 99.00th=[ 5538], 99.50th=[ 5800], 99.90th=[ 6128], 99.95th=[ 6259], 00:36:06.895 | 99.99th=[ 6259] 00:36:06.895 bw ( KiB/s): min=16544, max=16880, per=24.97%, avg=16764.56, 
stdev=115.36, samples=9 00:36:06.895 iops : min= 2068, max= 2110, avg=2095.56, stdev=14.41, samples=9 00:36:06.895 lat (msec) : 2=0.10%, 4=84.95%, 10=14.95% 00:36:06.895 cpu : usr=97.60%, sys=2.12%, ctx=5, majf=0, minf=40 00:36:06.895 IO depths : 1=0.1%, 2=0.2%, 4=73.8%, 8=26.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:06.895 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:06.895 complete : 0=0.0%, 4=91.1%, 8=8.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:06.895 issued rwts: total=10481,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:06.895 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:06.895 00:36:06.895 Run status group 0 (all jobs): 00:36:06.895 READ: bw=65.6MiB/s (68.8MB/s), 16.1MiB/s-16.6MiB/s (16.8MB/s-17.4MB/s), io=328MiB (344MB), run=5002-5003msec 00:36:06.895 14:49:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:36:06.895 14:49:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:06.895 14:49:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:06.895 14:49:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:06.895 14:49:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:06.895 14:49:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:06.895 14:49:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:06.895 14:49:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:06.895 14:49:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:06.895 14:49:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:06.895 14:49:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:06.895 14:49:47 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:36:06.895 14:49:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:06.895 14:49:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:06.895 14:49:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:06.895 14:49:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:36:06.895 14:49:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:06.895 14:49:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:06.895 14:49:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:06.895 14:49:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:06.895 14:49:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:06.895 14:49:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:06.895 14:49:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:06.895 14:49:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:06.895 00:36:06.895 real 0m24.514s 00:36:06.895 user 5m15.187s 00:36:06.895 sys 0m4.213s 00:36:06.895 14:49:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:06.895 14:49:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:06.895 ************************************ 00:36:06.895 END TEST fio_dif_rand_params 00:36:06.895 ************************************ 00:36:06.895 14:49:47 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:36:06.895 14:49:47 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:06.895 14:49:47 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:06.895 14:49:47 
nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:07.191 ************************************ 00:36:07.191 START TEST fio_dif_digest 00:36:07.191 ************************************ 00:36:07.191 14:49:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:36:07.191 14:49:47 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:36:07.191 14:49:47 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:36:07.191 14:49:47 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:36:07.191 14:49:47 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:36:07.191 14:49:47 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:36:07.191 14:49:47 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:36:07.191 14:49:47 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:36:07.191 14:49:47 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:36:07.191 14:49:47 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:36:07.191 14:49:47 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:36:07.191 14:49:47 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:36:07.191 14:49:47 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:36:07.191 14:49:47 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:36:07.191 14:49:47 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:36:07.191 14:49:47 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:36:07.191 14:49:47 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:36:07.191 14:49:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:07.191 14:49:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:07.191 bdev_null0 00:36:07.191 14:49:47 
nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:07.191 14:49:47 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:07.191 14:49:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:07.191 14:49:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:07.191 14:49:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:07.191 14:49:47 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:07.191 14:49:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:07.191 14:49:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:07.191 14:49:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:07.191 14:49:47 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:07.191 14:49:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:07.191 14:49:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:07.191 [2024-10-14 14:49:47.671995] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:07.191 14:49:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:07.191 14:49:47 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:36:07.191 14:49:47 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:36:07.191 14:49:47 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:07.191 14:49:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # config=() 00:36:07.191 14:49:47 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:07.191 14:49:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # local subsystem config 00:36:07.191 14:49:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:07.191 14:49:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:36:07.191 14:49:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:36:07.191 { 00:36:07.191 "params": { 00:36:07.191 "name": "Nvme$subsystem", 00:36:07.191 "trtype": "$TEST_TRANSPORT", 00:36:07.191 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:07.191 "adrfam": "ipv4", 00:36:07.191 "trsvcid": "$NVMF_PORT", 00:36:07.191 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:07.191 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:07.191 "hdgst": ${hdgst:-false}, 00:36:07.191 "ddgst": ${ddgst:-false} 00:36:07.191 }, 00:36:07.191 "method": "bdev_nvme_attach_controller" 00:36:07.191 } 00:36:07.191 EOF 00:36:07.191 )") 00:36:07.191 14:49:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:36:07.191 14:49:47 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:36:07.191 14:49:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:07.191 14:49:47 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:36:07.191 14:49:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:36:07.191 14:49:47 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:36:07.191 14:49:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:07.191 14:49:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:36:07.191 14:49:47 
nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:36:07.191 14:49:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:07.191 14:49:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # cat 00:36:07.191 14:49:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:07.191 14:49:47 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:36:07.191 14:49:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:36:07.191 14:49:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:07.191 14:49:47 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:36:07.191 14:49:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # jq . 00:36:07.191 14:49:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@583 -- # IFS=, 00:36:07.191 14:49:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:36:07.191 "params": { 00:36:07.191 "name": "Nvme0", 00:36:07.191 "trtype": "tcp", 00:36:07.191 "traddr": "10.0.0.2", 00:36:07.191 "adrfam": "ipv4", 00:36:07.191 "trsvcid": "4420", 00:36:07.191 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:07.191 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:07.191 "hdgst": true, 00:36:07.191 "ddgst": true 00:36:07.191 }, 00:36:07.191 "method": "bdev_nvme_attach_controller" 00:36:07.191 }' 00:36:07.191 14:49:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:07.191 14:49:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:07.191 14:49:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:07.191 14:49:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:07.191 14:49:47 
nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:36:07.191 14:49:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:07.191 14:49:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:07.191 14:49:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:07.191 14:49:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:07.191 14:49:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:07.493 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:36:07.493 ... 00:36:07.493 fio-3.35 00:36:07.493 Starting 3 threads 00:36:19.751 00:36:19.751 filename0: (groupid=0, jobs=1): err= 0: pid=3700367: Mon Oct 14 14:49:58 2024 00:36:19.751 read: IOPS=244, BW=30.6MiB/s (32.1MB/s)(307MiB/10047msec) 00:36:19.751 slat (nsec): min=5883, max=31193, avg=7706.11, stdev=1450.66 00:36:19.751 clat (usec): min=7836, max=55067, avg=12229.88, stdev=3198.40 00:36:19.751 lat (usec): min=7843, max=55074, avg=12237.58, stdev=3198.59 00:36:19.751 clat percentiles (usec): 00:36:19.751 | 1.00th=[ 8717], 5.00th=[ 9372], 10.00th=[ 9634], 20.00th=[10159], 00:36:19.751 | 30.00th=[10814], 40.00th=[11731], 50.00th=[12256], 60.00th=[12780], 00:36:19.751 | 70.00th=[13173], 80.00th=[13566], 90.00th=[14091], 95.00th=[14746], 00:36:19.751 | 99.00th=[16188], 99.50th=[17171], 99.90th=[54264], 99.95th=[54789], 00:36:19.751 | 99.99th=[55313] 00:36:19.751 bw ( KiB/s): min=27648, max=34048, per=38.01%, avg=31333.90, stdev=1897.22, samples=20 00:36:19.751 iops : min= 216, max= 266, avg=244.75, stdev=14.89, samples=20 00:36:19.751 lat (msec) : 10=16.35%, 20=83.20%, 50=0.08%, 100=0.37% 00:36:19.751 cpu : usr=95.13%, sys=4.64%, 
ctx=20, majf=0, minf=129 00:36:19.751 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:19.751 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:19.751 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:19.751 issued rwts: total=2459,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:19.751 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:19.751 filename0: (groupid=0, jobs=1): err= 0: pid=3700368: Mon Oct 14 14:49:58 2024 00:36:19.751 read: IOPS=148, BW=18.6MiB/s (19.5MB/s)(187MiB/10047msec) 00:36:19.751 slat (nsec): min=5953, max=34923, avg=8141.83, stdev=1908.43 00:36:19.751 clat (usec): min=9303, max=96369, avg=20121.77, stdev=15384.80 00:36:19.751 lat (usec): min=9314, max=96378, avg=20129.91, stdev=15384.86 00:36:19.751 clat percentiles (usec): 00:36:19.751 | 1.00th=[11076], 5.00th=[12387], 10.00th=[12780], 20.00th=[13173], 00:36:19.751 | 30.00th=[13566], 40.00th=[13829], 50.00th=[14091], 60.00th=[14484], 00:36:19.751 | 70.00th=[14877], 80.00th=[15533], 90.00th=[53740], 95.00th=[54789], 00:36:19.751 | 99.00th=[56886], 99.50th=[94897], 99.90th=[95945], 99.95th=[95945], 00:36:19.751 | 99.99th=[95945] 00:36:19.751 bw ( KiB/s): min=14080, max=25344, per=23.18%, avg=19110.40, stdev=2877.52, samples=20 00:36:19.751 iops : min= 110, max= 198, avg=149.30, stdev=22.48, samples=20 00:36:19.751 lat (msec) : 10=0.33%, 20=85.02%, 50=0.20%, 100=14.45% 00:36:19.751 cpu : usr=95.51%, sys=4.26%, ctx=18, majf=0, minf=82 00:36:19.751 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:19.751 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:19.751 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:19.751 issued rwts: total=1495,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:19.751 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:19.751 filename0: (groupid=0, jobs=1): err= 0: pid=3700369: Mon Oct 
14 14:49:58 2024 00:36:19.751 read: IOPS=251, BW=31.4MiB/s (33.0MB/s)(315MiB/10008msec) 00:36:19.751 slat (nsec): min=5899, max=31644, avg=6756.42, stdev=750.07 00:36:19.751 clat (usec): min=5755, max=53588, avg=11923.08, stdev=2763.58 00:36:19.751 lat (usec): min=5762, max=53594, avg=11929.84, stdev=2763.80 00:36:19.751 clat percentiles (usec): 00:36:19.751 | 1.00th=[ 7570], 5.00th=[ 8848], 10.00th=[ 9241], 20.00th=[ 9765], 00:36:19.751 | 30.00th=[10290], 40.00th=[11469], 50.00th=[12256], 60.00th=[12780], 00:36:19.751 | 70.00th=[13173], 80.00th=[13566], 90.00th=[14091], 95.00th=[14615], 00:36:19.751 | 99.00th=[15533], 99.50th=[15926], 99.90th=[53216], 99.95th=[53216], 00:36:19.751 | 99.99th=[53740] 00:36:19.751 bw ( KiB/s): min=27904, max=34304, per=39.04%, avg=32179.20, stdev=1527.22, samples=20 00:36:19.751 iops : min= 218, max= 268, avg=251.40, stdev=11.93, samples=20 00:36:19.751 lat (msec) : 10=25.56%, 20=74.21%, 100=0.24% 00:36:19.751 cpu : usr=96.36%, sys=3.43%, ctx=12, majf=0, minf=169 00:36:19.751 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:19.751 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:19.751 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:19.751 issued rwts: total=2516,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:19.751 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:19.751 00:36:19.751 Run status group 0 (all jobs): 00:36:19.751 READ: bw=80.5MiB/s (84.4MB/s), 18.6MiB/s-31.4MiB/s (19.5MB/s-33.0MB/s), io=809MiB (848MB), run=10008-10047msec 00:36:19.751 14:49:58 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:36:19.751 14:49:58 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:36:19.751 14:49:58 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:36:19.751 14:49:58 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:19.751 14:49:58 nvmf_dif.fio_dif_digest -- 
target/dif.sh@36 -- # local sub_id=0 00:36:19.751 14:49:58 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:19.751 14:49:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:19.751 14:49:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:19.751 14:49:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:19.751 14:49:58 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:19.751 14:49:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:19.751 14:49:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:19.751 14:49:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:19.751 00:36:19.751 real 0m11.180s 00:36:19.751 user 0m40.540s 00:36:19.751 sys 0m1.572s 00:36:19.751 14:49:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:19.751 14:49:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:19.751 ************************************ 00:36:19.751 END TEST fio_dif_digest 00:36:19.751 ************************************ 00:36:19.751 14:49:58 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:36:19.751 14:49:58 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:36:19.751 14:49:58 nvmf_dif -- nvmf/common.sh@514 -- # nvmfcleanup 00:36:19.751 14:49:58 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:36:19.751 14:49:58 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:19.751 14:49:58 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:36:19.751 14:49:58 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:19.751 14:49:58 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:19.751 rmmod nvme_tcp 00:36:19.751 rmmod nvme_fabrics 00:36:19.751 rmmod nvme_keyring 00:36:19.751 14:49:58 nvmf_dif -- nvmf/common.sh@127 -- 
# modprobe -v -r nvme-fabrics 00:36:19.751 14:49:58 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:36:19.751 14:49:58 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:36:19.751 14:49:58 nvmf_dif -- nvmf/common.sh@515 -- # '[' -n 3690211 ']' 00:36:19.751 14:49:58 nvmf_dif -- nvmf/common.sh@516 -- # killprocess 3690211 00:36:19.751 14:49:58 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 3690211 ']' 00:36:19.751 14:49:58 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 3690211 00:36:19.751 14:49:58 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:36:19.751 14:49:58 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:19.751 14:49:58 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3690211 00:36:19.751 14:49:58 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:19.751 14:49:58 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:19.751 14:49:58 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3690211' 00:36:19.751 killing process with pid 3690211 00:36:19.751 14:49:58 nvmf_dif -- common/autotest_common.sh@969 -- # kill 3690211 00:36:19.751 14:49:58 nvmf_dif -- common/autotest_common.sh@974 -- # wait 3690211 00:36:19.751 14:49:59 nvmf_dif -- nvmf/common.sh@518 -- # '[' iso == iso ']' 00:36:19.751 14:49:59 nvmf_dif -- nvmf/common.sh@519 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:22.307 Waiting for block devices as requested 00:36:22.307 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:36:22.307 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:36:22.307 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:36:22.307 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:36:22.307 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:36:22.307 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:36:22.307 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:36:22.569 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 
00:36:22.569 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:36:22.569 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:36:22.830 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:36:22.830 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:36:22.830 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:36:22.830 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:36:23.107 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:36:23.107 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:36:23.107 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:36:23.374 14:50:04 nvmf_dif -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:36:23.374 14:50:04 nvmf_dif -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:36:23.374 14:50:04 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:36:23.374 14:50:04 nvmf_dif -- nvmf/common.sh@789 -- # iptables-save 00:36:23.374 14:50:04 nvmf_dif -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:36:23.374 14:50:04 nvmf_dif -- nvmf/common.sh@789 -- # iptables-restore 00:36:23.374 14:50:04 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:23.374 14:50:04 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:23.374 14:50:04 nvmf_dif -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:23.374 14:50:04 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:23.374 14:50:04 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:25.922 14:50:06 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:25.922 00:36:25.922 real 1m17.991s 00:36:25.922 user 7m59.350s 00:36:25.922 sys 0m21.106s 00:36:25.922 14:50:06 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:25.922 14:50:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:25.922 ************************************ 00:36:25.922 END TEST nvmf_dif 00:36:25.922 ************************************ 00:36:25.922 14:50:06 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:25.922 14:50:06 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:25.922 14:50:06 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:25.922 14:50:06 -- common/autotest_common.sh@10 -- # set +x 00:36:25.922 ************************************ 00:36:25.922 START TEST nvmf_abort_qd_sizes 00:36:25.922 ************************************ 00:36:25.922 14:50:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:25.922 * Looking for test storage... 00:36:25.922 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:25.922 14:50:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:36:25.922 14:50:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lcov --version 00:36:25.922 14:50:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:36:25.922 14:50:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:36:25.922 14:50:06 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:25.922 14:50:06 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:25.922 14:50:06 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:25.922 14:50:06 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:36:25.922 14:50:06 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:36:25.922 14:50:06 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:36:25.922 14:50:06 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:36:25.922 14:50:06 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:36:25.922 14:50:06 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:36:25.922 14:50:06 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 
00:36:25.922 14:50:06 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:25.922 14:50:06 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:36:25.922 14:50:06 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:36:25.922 14:50:06 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:25.922 14:50:06 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:25.922 14:50:06 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:36:25.922 14:50:06 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:36:25.922 14:50:06 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:25.922 14:50:06 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:36:25.922 14:50:06 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:36:25.922 14:50:06 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:36:25.922 14:50:06 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:36:25.922 14:50:06 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:25.922 14:50:06 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:36:25.922 14:50:06 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:36:25.922 14:50:06 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:25.922 14:50:06 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:25.922 14:50:06 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:36:25.922 14:50:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:25.922 14:50:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:36:25.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:25.922 --rc genhtml_branch_coverage=1 00:36:25.922 --rc genhtml_function_coverage=1 00:36:25.922 --rc 
genhtml_legend=1 00:36:25.922 --rc geninfo_all_blocks=1 00:36:25.922 --rc geninfo_unexecuted_blocks=1 00:36:25.922 00:36:25.922 ' 00:36:25.922 14:50:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:36:25.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:25.922 --rc genhtml_branch_coverage=1 00:36:25.922 --rc genhtml_function_coverage=1 00:36:25.922 --rc genhtml_legend=1 00:36:25.922 --rc geninfo_all_blocks=1 00:36:25.922 --rc geninfo_unexecuted_blocks=1 00:36:25.922 00:36:25.922 ' 00:36:25.922 14:50:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:36:25.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:25.922 --rc genhtml_branch_coverage=1 00:36:25.922 --rc genhtml_function_coverage=1 00:36:25.922 --rc genhtml_legend=1 00:36:25.922 --rc geninfo_all_blocks=1 00:36:25.922 --rc geninfo_unexecuted_blocks=1 00:36:25.922 00:36:25.922 ' 00:36:25.922 14:50:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:36:25.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:25.922 --rc genhtml_branch_coverage=1 00:36:25.922 --rc genhtml_function_coverage=1 00:36:25.922 --rc genhtml_legend=1 00:36:25.922 --rc geninfo_all_blocks=1 00:36:25.922 --rc geninfo_unexecuted_blocks=1 00:36:25.922 00:36:25.922 ' 00:36:25.922 14:50:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:25.922 14:50:06 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:36:25.922 14:50:06 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:25.922 14:50:06 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:25.922 14:50:06 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:25.922 14:50:06 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:25.922 14:50:06 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:36:25.922 14:50:06 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:25.922 14:50:06 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:25.922 14:50:06 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:25.922 14:50:06 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:25.922 14:50:06 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:25.922 14:50:06 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:25.922 14:50:06 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:25.922 14:50:06 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:25.922 14:50:06 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:25.922 14:50:06 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:25.922 14:50:06 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:25.922 14:50:06 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:25.922 14:50:06 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:36:25.922 14:50:06 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:25.922 14:50:06 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:25.922 14:50:06 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:25.923 14:50:06 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:25.923 14:50:06 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:25.923 14:50:06 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:25.923 14:50:06 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:36:25.923 14:50:06 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:25.923 14:50:06 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:36:25.923 14:50:06 nvmf_abort_qd_sizes -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:25.923 14:50:06 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:25.923 14:50:06 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:25.923 14:50:06 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:25.923 14:50:06 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:25.923 14:50:06 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:25.923 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:25.923 14:50:06 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:25.923 14:50:06 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:25.923 14:50:06 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:25.923 14:50:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:36:25.923 14:50:06 nvmf_abort_qd_sizes -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:36:25.923 14:50:06 nvmf_abort_qd_sizes -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:25.923 14:50:06 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # prepare_net_devs 00:36:25.923 14:50:06 nvmf_abort_qd_sizes -- nvmf/common.sh@436 -- # local -g is_hw=no 00:36:25.923 14:50:06 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # remove_spdk_ns 00:36:25.923 14:50:06 nvmf_abort_qd_sizes -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:25.923 14:50:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:25.923 14:50:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:25.923 14:50:06 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:36:25.923 14:50:06 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:36:25.923 14:50:06 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- 
# xtrace_disable 00:36:25.923 14:50:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:34.063 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:34.063 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:36:34.063 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:34.063 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:34.063 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:34.063 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:34.063 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:34.063 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:36:34.063 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:34.063 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:36:34.063 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:36:34.063 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:36:34.063 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:36:34.063 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:36:34.063 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:36:34.063 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:34.063 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:34.063 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:34.063 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:34.063 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:34.063 14:50:13 nvmf_abort_qd_sizes -- 
nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:34.063 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:34.063 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:34.063 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:34.063 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:34.063 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:34.063 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:34.063 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:34.063 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:34.063 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:34.063 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:34.063 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:34.063 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:34.063 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:34.063 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:36:34.063 Found 0000:31:00.0 (0x8086 - 0x159b) 00:36:34.063 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:34.063 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:34.063 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:34.063 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:34.063 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma 
]] 00:36:34.063 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:34.063 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:36:34.063 Found 0000:31:00.1 (0x8086 - 0x159b) 00:36:34.063 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:34.063 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:34.063 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:34.063 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:34.063 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:34.063 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:34.063 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:34.063 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:34.063 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:36:34.063 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:34.063 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:36:34.063 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:34.063 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ up == up ]] 00:36:34.063 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:36:34.063 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:34.063 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:36:34.063 Found net devices under 0000:31:00.0: cvl_0_0 00:36:34.063 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:36:34.063 14:50:13 nvmf_abort_qd_sizes -- 
nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:36:34.063 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:34.063 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:36:34.063 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:34.063 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ up == up ]] 00:36:34.063 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:36:34.063 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:34.063 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:36:34.063 Found net devices under 0000:31:00.1: cvl_0_1 00:36:34.063 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:36:34.063 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:36:34.063 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # is_hw=yes 00:36:34.063 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:36:34.063 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:36:34.063 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:36:34.063 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:34.063 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:34.063 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:34.063 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:34.064 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:34.064 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:34.064 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:34.064 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:34.064 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:34.064 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:34.064 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:34.064 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:34.064 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:34.064 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:34.064 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:34.064 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:34.064 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:34.064 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:34.064 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:34.064 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:34.064 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:34.064 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:34.064 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:34.064 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:36:34.064 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.613 ms 00:36:34.064 00:36:34.064 --- 10.0.0.2 ping statistics --- 00:36:34.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:34.064 rtt min/avg/max/mdev = 0.613/0.613/0.613/0.000 ms 00:36:34.064 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:34.064 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:34.064 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms 00:36:34.064 00:36:34.064 --- 10.0.0.1 ping statistics --- 00:36:34.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:34.064 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:36:34.064 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:34.064 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # return 0 00:36:34.064 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # '[' iso == iso ']' 00:36:34.064 14:50:13 nvmf_abort_qd_sizes -- nvmf/common.sh@477 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:35.974 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:36:35.974 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:36:35.974 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:36:35.974 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:36:35.974 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:36:35.974 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:36:35.974 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:36:35.974 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:36:35.974 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:36:35.974 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:36:35.974 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:36:35.974 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:36:35.974 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:36:35.974 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:36:36.234 0000:00:01.0 (8086 0b00): 
ioatdma -> vfio-pci 00:36:36.234 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:36:36.234 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:36:36.495 14:50:17 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:36.495 14:50:17 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:36:36.495 14:50:17 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:36:36.495 14:50:17 nvmf_abort_qd_sizes -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:36.495 14:50:17 nvmf_abort_qd_sizes -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:36:36.495 14:50:17 nvmf_abort_qd_sizes -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:36:36.495 14:50:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:36:36.495 14:50:17 nvmf_abort_qd_sizes -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:36:36.495 14:50:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:36.495 14:50:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:36.495 14:50:17 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # nvmfpid=3709914 00:36:36.495 14:50:17 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # waitforlisten 3709914 00:36:36.495 14:50:17 nvmf_abort_qd_sizes -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:36:36.495 14:50:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 3709914 ']' 00:36:36.495 14:50:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:36.495 14:50:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:36.495 14:50:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:36:36.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:36.495 14:50:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:36.495 14:50:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:36.495 [2024-10-14 14:50:17.193764] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:36:36.495 [2024-10-14 14:50:17.193810] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:36.755 [2024-10-14 14:50:17.264916] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:36.755 [2024-10-14 14:50:17.302086] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:36.755 [2024-10-14 14:50:17.302118] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:36.756 [2024-10-14 14:50:17.302126] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:36.756 [2024-10-14 14:50:17.302133] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:36.756 [2024-10-14 14:50:17.302139] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:36.756 [2024-10-14 14:50:17.306083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:36.756 [2024-10-14 14:50:17.306127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:36.756 [2024-10-14 14:50:17.306284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:36.756 [2024-10-14 14:50:17.306284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:36.756 14:50:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:36.756 14:50:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:36:36.756 14:50:17 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:36:36.756 14:50:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:36.756 14:50:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:36.756 14:50:17 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:36.756 14:50:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:36:36.756 14:50:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:36:36.756 14:50:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:36:36.756 14:50:17 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:36:36.756 14:50:17 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:36:36.756 14:50:17 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:65:00.0 ]] 00:36:36.756 14:50:17 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:36:36.756 14:50:17 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:36:36.756 14:50:17 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 
00:36:36.756 14:50:17 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:36:36.756 14:50:17 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:36:36.756 14:50:17 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:36:36.756 14:50:17 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:36:36.756 14:50:17 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:65:00.0 00:36:36.756 14:50:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:36:36.756 14:50:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:36:36.756 14:50:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:36:36.756 14:50:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:36.756 14:50:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:36.756 14:50:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:37.017 ************************************ 00:36:37.017 START TEST spdk_target_abort 00:36:37.017 ************************************ 00:36:37.017 14:50:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:36:37.017 14:50:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:36:37.017 14:50:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:36:37.017 14:50:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:37.017 14:50:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:37.279 spdk_targetn1 00:36:37.279 14:50:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:37.279 14:50:17 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:37.279 14:50:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:37.279 14:50:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:37.279 [2024-10-14 14:50:17.812851] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:37.279 14:50:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:37.279 14:50:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:36:37.279 14:50:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:37.279 14:50:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:37.279 14:50:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:37.279 14:50:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:36:37.279 14:50:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:37.279 14:50:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:37.279 14:50:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:37.279 14:50:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:36:37.279 14:50:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:37.279 14:50:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:37.279 [2024-10-14 14:50:17.860250] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:37.279 14:50:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:37.279 14:50:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:36:37.279 14:50:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:37.279 14:50:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:37.279 14:50:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:36:37.279 14:50:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:37.279 14:50:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:37.279 14:50:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:37.279 14:50:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:37.279 14:50:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:37.279 14:50:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:37.279 14:50:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:37.279 14:50:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:37.279 14:50:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:37.279 14:50:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:37.279 14:50:17 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:36:37.279 14:50:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:37.279 14:50:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:37.279 14:50:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:37.279 14:50:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:37.279 14:50:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:37.279 14:50:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:37.540 [2024-10-14 14:50:18.034126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:568 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:36:37.540 [2024-10-14 14:50:18.034158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:0049 p:1 m:0 dnr:0 00:36:37.540 [2024-10-14 14:50:18.065591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:1704 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:36:37.540 [2024-10-14 14:50:18.065609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:00d6 p:1 m:0 dnr:0 00:36:37.540 [2024-10-14 14:50:18.089605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:2552 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:36:37.540 [2024-10-14 
14:50:18.089623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:40.842 Initializing NVMe Controllers 00:36:40.842 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:40.842 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:40.842 Initialization complete. Launching workers. 00:36:40.842 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12269, failed: 3 00:36:40.842 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 3138, failed to submit 9134 00:36:40.842 success 717, unsuccessful 2421, failed 0 00:36:40.842 14:50:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:40.842 14:50:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:40.842 [2024-10-14 14:50:21.171352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:169 nsid:1 lba:504 len:8 PRP1 0x200004e4e000 PRP2 0x0 00:36:40.842 [2024-10-14 14:50:21.171393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:169 cdw0:0 sqhd:0048 p:1 m:0 dnr:0 00:36:40.842 [2024-10-14 14:50:21.203278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:180 nsid:1 lba:1208 len:8 PRP1 0x200004e46000 PRP2 0x0 00:36:40.842 [2024-10-14 14:50:21.203304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:180 cdw0:0 sqhd:009c p:1 m:0 dnr:0 00:36:40.842 [2024-10-14 14:50:21.211260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:169 nsid:1 lba:1384 len:8 PRP1 0x200004e5a000 PRP2 0x0 00:36:40.842 
[2024-10-14 14:50:21.211283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:169 cdw0:0 sqhd:00b9 p:1 m:0 dnr:0 00:36:40.842 [2024-10-14 14:50:21.235311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:185 nsid:1 lba:2000 len:8 PRP1 0x200004e40000 PRP2 0x0 00:36:40.842 [2024-10-14 14:50:21.235333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:185 cdw0:0 sqhd:00fc p:1 m:0 dnr:0 00:36:40.842 [2024-10-14 14:50:21.298087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:186 nsid:1 lba:3384 len:8 PRP1 0x200004e4a000 PRP2 0x0 00:36:40.842 [2024-10-14 14:50:21.298111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:186 cdw0:0 sqhd:00aa p:0 m:0 dnr:0 00:36:41.412 [2024-10-14 14:50:22.116698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:168 nsid:1 lba:21528 len:8 PRP1 0x200004e40000 PRP2 0x0 00:36:41.412 [2024-10-14 14:50:22.116728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:168 cdw0:0 sqhd:008a p:1 m:0 dnr:0 00:36:43.954 Initializing NVMe Controllers 00:36:43.954 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:43.954 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:43.954 Initialization complete. Launching workers. 
00:36:43.954 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8497, failed: 6 00:36:43.954 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1203, failed to submit 7300 00:36:43.954 success 383, unsuccessful 820, failed 0 00:36:43.954 14:50:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:43.954 14:50:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:45.338 [2024-10-14 14:50:25.944257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:157872 len:8 PRP1 0x200004adc000 PRP2 0x0 00:36:45.338 [2024-10-14 14:50:25.944289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:36:45.338 [2024-10-14 14:50:26.014668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:168 nsid:1 lba:165568 len:8 PRP1 0x200004b0c000 PRP2 0x0 00:36:45.338 [2024-10-14 14:50:26.014687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:168 cdw0:0 sqhd:00c9 p:1 m:0 dnr:0 00:36:47.246 Initializing NVMe Controllers 00:36:47.246 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:47.246 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:47.246 Initialization complete. Launching workers. 
00:36:47.246 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 42141, failed: 2 00:36:47.246 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2627, failed to submit 39516 00:36:47.247 success 599, unsuccessful 2028, failed 0 00:36:47.247 14:50:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:36:47.247 14:50:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.247 14:50:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:47.247 14:50:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:47.247 14:50:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:36:47.247 14:50:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:47.247 14:50:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:49.163 14:50:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:49.163 14:50:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3709914 00:36:49.163 14:50:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 3709914 ']' 00:36:49.163 14:50:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 3709914 00:36:49.163 14:50:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:36:49.163 14:50:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:49.163 14:50:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3709914 00:36:49.163 14:50:29 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:49.163 14:50:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:49.163 14:50:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3709914' 00:36:49.163 killing process with pid 3709914 00:36:49.163 14:50:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 3709914 00:36:49.163 14:50:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 3709914 00:36:49.163 00:36:49.163 real 0m12.103s 00:36:49.163 user 0m47.171s 00:36:49.163 sys 0m1.837s 00:36:49.163 14:50:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:49.163 14:50:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:49.163 ************************************ 00:36:49.163 END TEST spdk_target_abort 00:36:49.163 ************************************ 00:36:49.163 14:50:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:36:49.163 14:50:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:36:49.163 14:50:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:49.163 14:50:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:49.163 ************************************ 00:36:49.163 START TEST kernel_target_abort 00:36:49.163 ************************************ 00:36:49.163 14:50:29 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:36:49.163 14:50:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:36:49.163 14:50:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@767 -- # local ip 00:36:49.163 14:50:29 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # ip_candidates=() 00:36:49.163 14:50:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # local -A ip_candidates 00:36:49.163 14:50:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:49.163 14:50:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:49.163 14:50:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:36:49.163 14:50:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:49.163 14:50:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:36:49.163 14:50:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:36:49.163 14:50:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:36:49.163 14:50:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:36:49.163 14:50:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:36:49.163 14:50:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:36:49.163 14:50:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:49.163 14:50:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:49.163 14:50:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:36:49.164 14:50:29 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@665 -- # local block nvme 00:36:49.164 14:50:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # [[ ! -e /sys/module/nvmet ]] 00:36:49.164 14:50:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # modprobe nvmet 00:36:49.164 14:50:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:36:49.164 14:50:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:52.467 Waiting for block devices as requested 00:36:52.727 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:36:52.727 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:36:52.727 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:36:52.988 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:36:52.988 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:36:52.988 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:36:52.988 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:36:53.248 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:36:53.248 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:36:53.508 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:36:53.508 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:36:53.508 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:36:53.768 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:36:53.768 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:36:53.768 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:36:53.768 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:36:54.028 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:36:54.289 14:50:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:36:54.289 14:50:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:36:54.289 14:50:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:36:54.289 14:50:34 
nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:36:54.289 14:50:34 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:36:54.289 14:50:34 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:36:54.289 14:50:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:36:54.289 14:50:34 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:36:54.289 14:50:34 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:36:54.289 No valid GPT data, bailing 00:36:54.289 14:50:34 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:54.289 14:50:34 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:36:54.289 14:50:34 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:36:54.289 14:50:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:36:54.289 14:50:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:36:54.289 14:50:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:54.289 14:50:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:54.289 14:50:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:36:54.289 14:50:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:36:54.289 14:50:34 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@693 -- # echo 1 00:36:54.289 14:50:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:36:54.289 14:50:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:36:54.289 14:50:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:36:54.289 14:50:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # echo tcp 00:36:54.289 14:50:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 4420 00:36:54.289 14:50:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo ipv4 00:36:54.289 14:50:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:36:54.289 14:50:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:36:54.289 00:36:54.289 Discovery Log Number of Records 2, Generation counter 2 00:36:54.289 =====Discovery Log Entry 0====== 00:36:54.289 trtype: tcp 00:36:54.289 adrfam: ipv4 00:36:54.289 subtype: current discovery subsystem 00:36:54.289 treq: not specified, sq flow control disable supported 00:36:54.289 portid: 1 00:36:54.289 trsvcid: 4420 00:36:54.289 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:36:54.289 traddr: 10.0.0.1 00:36:54.289 eflags: none 00:36:54.289 sectype: none 00:36:54.289 =====Discovery Log Entry 1====== 00:36:54.289 trtype: tcp 00:36:54.289 adrfam: ipv4 00:36:54.289 subtype: nvme subsystem 00:36:54.289 treq: not specified, sq flow control disable supported 00:36:54.289 portid: 1 00:36:54.289 trsvcid: 4420 00:36:54.289 subnqn: nqn.2016-06.io.spdk:testnqn 00:36:54.289 traddr: 10.0.0.1 00:36:54.289 eflags: none 00:36:54.289 sectype: none 00:36:54.289 14:50:35 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:36:54.549 14:50:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:54.549 14:50:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:54.549 14:50:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:36:54.549 14:50:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:54.549 14:50:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:54.549 14:50:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:54.549 14:50:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:54.549 14:50:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:54.549 14:50:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:54.549 14:50:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:54.549 14:50:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:54.549 14:50:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:54.549 14:50:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:54.549 14:50:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:36:54.549 14:50:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for 
r in trtype adrfam traddr trsvcid subnqn 00:36:54.549 14:50:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:36:54.549 14:50:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:54.549 14:50:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:54.549 14:50:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:54.549 14:50:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:57.849 Initializing NVMe Controllers 00:36:57.849 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:57.849 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:57.849 Initialization complete. Launching workers. 
00:36:57.849 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 67699, failed: 0 00:36:57.849 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 67699, failed to submit 0 00:36:57.849 success 0, unsuccessful 67699, failed 0 00:36:57.849 14:50:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:57.849 14:50:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:01.147 Initializing NVMe Controllers 00:37:01.147 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:01.147 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:01.147 Initialization complete. Launching workers. 00:37:01.147 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 108319, failed: 0 00:37:01.147 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 27242, failed to submit 81077 00:37:01.147 success 0, unsuccessful 27242, failed 0 00:37:01.147 14:50:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:01.147 14:50:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:03.688 Initializing NVMe Controllers 00:37:03.688 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:03.688 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:03.688 Initialization complete. Launching workers. 
00:37:03.688 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 101973, failed: 0 00:37:03.688 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25494, failed to submit 76479 00:37:03.688 success 0, unsuccessful 25494, failed 0 00:37:03.688 14:50:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:37:03.688 14:50:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:37:03.688 14:50:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # echo 0 00:37:03.688 14:50:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:03.688 14:50:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:03.688 14:50:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:37:03.688 14:50:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:03.688 14:50:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:37:03.688 14:50:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:37:03.688 14:50:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:07.015 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:07.015 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:07.015 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:07.015 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:07.015 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:07.015 
0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:07.015 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:07.015 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:07.015 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:07.015 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:07.015 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:07.015 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:07.015 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:07.015 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:07.015 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:07.015 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:08.932 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:37:08.932 00:37:08.932 real 0m19.954s 00:37:08.932 user 0m9.678s 00:37:08.932 sys 0m6.007s 00:37:08.932 14:50:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:08.932 14:50:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:08.932 ************************************ 00:37:08.932 END TEST kernel_target_abort 00:37:08.932 ************************************ 00:37:09.194 14:50:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:37:09.194 14:50:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:37:09.194 14:50:49 nvmf_abort_qd_sizes -- nvmf/common.sh@514 -- # nvmfcleanup 00:37:09.194 14:50:49 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:37:09.194 14:50:49 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:09.194 14:50:49 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:37:09.194 14:50:49 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:09.194 14:50:49 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:09.194 rmmod nvme_tcp 00:37:09.194 rmmod nvme_fabrics 00:37:09.194 rmmod nvme_keyring 00:37:09.194 14:50:49 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:37:09.194 14:50:49 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:37:09.194 14:50:49 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:37:09.194 14:50:49 nvmf_abort_qd_sizes -- nvmf/common.sh@515 -- # '[' -n 3709914 ']' 00:37:09.194 14:50:49 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # killprocess 3709914 00:37:09.194 14:50:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 3709914 ']' 00:37:09.194 14:50:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 3709914 00:37:09.194 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3709914) - No such process 00:37:09.194 14:50:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 3709914 is not found' 00:37:09.194 Process with pid 3709914 is not found 00:37:09.194 14:50:49 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # '[' iso == iso ']' 00:37:09.194 14:50:49 nvmf_abort_qd_sizes -- nvmf/common.sh@519 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:12.497 Waiting for block devices as requested 00:37:12.497 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:12.497 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:12.497 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:12.757 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:12.757 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:12.757 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:13.018 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:13.018 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:13.018 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:37:13.278 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:13.278 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:13.278 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:13.539 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:13.539 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:13.539 
0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:13.539 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:13.800 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:14.061 14:50:54 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:37:14.061 14:50:54 nvmf_abort_qd_sizes -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:37:14.061 14:50:54 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:37:14.061 14:50:54 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # iptables-save 00:37:14.061 14:50:54 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:37:14.061 14:50:54 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # iptables-restore 00:37:14.061 14:50:54 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:14.061 14:50:54 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:14.061 14:50:54 nvmf_abort_qd_sizes -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:14.061 14:50:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:14.061 14:50:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:16.606 14:50:56 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:16.606 00:37:16.606 real 0m50.496s 00:37:16.606 user 1m1.794s 00:37:16.606 sys 0m18.342s 00:37:16.606 14:50:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:16.606 14:50:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:16.606 ************************************ 00:37:16.606 END TEST nvmf_abort_qd_sizes 00:37:16.606 ************************************ 00:37:16.606 14:50:56 -- spdk/autotest.sh@288 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:37:16.606 14:50:56 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:16.606 14:50:56 -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:37:16.606 14:50:56 -- common/autotest_common.sh@10 -- # set +x 00:37:16.606 ************************************ 00:37:16.606 START TEST keyring_file 00:37:16.606 ************************************ 00:37:16.606 14:50:56 keyring_file -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:37:16.606 * Looking for test storage... 00:37:16.606 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:37:16.606 14:50:56 keyring_file -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:37:16.606 14:50:56 keyring_file -- common/autotest_common.sh@1691 -- # lcov --version 00:37:16.606 14:50:56 keyring_file -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:37:16.606 14:50:56 keyring_file -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:37:16.606 14:50:56 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:16.606 14:50:56 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:16.606 14:50:56 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:16.606 14:50:56 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:37:16.606 14:50:56 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:37:16.606 14:50:56 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:37:16.606 14:50:56 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:37:16.606 14:50:56 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:37:16.606 14:50:56 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:37:16.606 14:50:56 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:37:16.606 14:50:56 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:16.606 14:50:56 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:37:16.606 14:50:56 keyring_file -- scripts/common.sh@345 -- # : 1 00:37:16.607 14:50:56 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:16.607 14:50:56 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:16.607 14:50:56 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:37:16.607 14:50:56 keyring_file -- scripts/common.sh@353 -- # local d=1 00:37:16.607 14:50:56 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:16.607 14:50:56 keyring_file -- scripts/common.sh@355 -- # echo 1 00:37:16.607 14:50:56 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:37:16.607 14:50:56 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:37:16.607 14:50:56 keyring_file -- scripts/common.sh@353 -- # local d=2 00:37:16.607 14:50:56 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:16.607 14:50:56 keyring_file -- scripts/common.sh@355 -- # echo 2 00:37:16.607 14:50:56 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:37:16.607 14:50:56 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:16.607 14:50:56 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:16.607 14:50:56 keyring_file -- scripts/common.sh@368 -- # return 0 00:37:16.607 14:50:56 keyring_file -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:16.607 14:50:56 keyring_file -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:37:16.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:16.607 --rc genhtml_branch_coverage=1 00:37:16.607 --rc genhtml_function_coverage=1 00:37:16.607 --rc genhtml_legend=1 00:37:16.607 --rc geninfo_all_blocks=1 00:37:16.607 --rc geninfo_unexecuted_blocks=1 00:37:16.607 00:37:16.607 ' 00:37:16.607 14:50:56 keyring_file -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:37:16.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:16.607 --rc genhtml_branch_coverage=1 00:37:16.607 --rc genhtml_function_coverage=1 00:37:16.607 --rc genhtml_legend=1 00:37:16.607 --rc geninfo_all_blocks=1 00:37:16.607 --rc 
geninfo_unexecuted_blocks=1 00:37:16.607 00:37:16.607 ' 00:37:16.607 14:50:56 keyring_file -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:37:16.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:16.607 --rc genhtml_branch_coverage=1 00:37:16.607 --rc genhtml_function_coverage=1 00:37:16.607 --rc genhtml_legend=1 00:37:16.607 --rc geninfo_all_blocks=1 00:37:16.607 --rc geninfo_unexecuted_blocks=1 00:37:16.607 00:37:16.607 ' 00:37:16.607 14:50:56 keyring_file -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:37:16.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:16.607 --rc genhtml_branch_coverage=1 00:37:16.607 --rc genhtml_function_coverage=1 00:37:16.607 --rc genhtml_legend=1 00:37:16.607 --rc geninfo_all_blocks=1 00:37:16.607 --rc geninfo_unexecuted_blocks=1 00:37:16.607 00:37:16.607 ' 00:37:16.607 14:50:56 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:37:16.607 14:50:56 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:16.607 14:50:56 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:37:16.607 14:50:56 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:16.607 14:50:56 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:16.607 14:50:56 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:16.607 14:50:56 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:16.607 14:50:56 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:16.607 14:50:56 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:16.607 14:50:56 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:16.607 14:50:56 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:16.607 14:50:56 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:16.607 14:50:56 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:16.607 14:50:56 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:16.607 14:50:56 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:16.607 14:50:56 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:16.607 14:50:56 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:16.607 14:50:56 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:16.607 14:50:56 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:16.607 14:50:56 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:16.607 14:50:56 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:37:16.607 14:50:56 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:16.607 14:50:56 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:16.607 14:50:56 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:16.607 14:50:56 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:16.607 14:50:56 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:16.607 14:50:56 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:16.607 14:50:56 keyring_file -- paths/export.sh@5 -- # export PATH 00:37:16.607 14:50:56 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:16.607 14:50:56 keyring_file -- nvmf/common.sh@51 -- # : 0 00:37:16.607 14:50:56 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:16.607 14:50:56 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:16.607 14:50:56 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:16.607 14:50:56 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:16.607 14:50:56 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:16.607 14:50:56 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:37:16.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:16.607 14:50:56 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:16.607 14:50:56 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:16.607 14:50:56 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:16.607 14:50:56 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:37:16.607 14:50:56 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:37:16.607 14:50:56 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:37:16.607 14:50:56 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:37:16.607 14:50:56 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:37:16.607 14:50:56 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:37:16.607 14:50:56 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:37:16.607 14:50:56 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:16.607 14:50:56 keyring_file -- keyring/common.sh@17 -- # name=key0 00:37:16.607 14:50:56 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:16.607 14:50:56 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:16.607 14:50:56 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:16.607 14:50:56 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.n95ezVyDba 00:37:16.607 14:50:56 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:16.607 14:50:56 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:16.607 14:50:56 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:37:16.607 14:50:56 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:37:16.607 14:50:56 keyring_file -- nvmf/common.sh@730 
-- # key=00112233445566778899aabbccddeeff 00:37:16.607 14:50:56 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:37:16.607 14:50:56 keyring_file -- nvmf/common.sh@731 -- # python - 00:37:16.607 14:50:57 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.n95ezVyDba 00:37:16.607 14:50:57 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.n95ezVyDba 00:37:16.607 14:50:57 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.n95ezVyDba 00:37:16.607 14:50:57 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:37:16.607 14:50:57 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:16.607 14:50:57 keyring_file -- keyring/common.sh@17 -- # name=key1 00:37:16.607 14:50:57 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:37:16.607 14:50:57 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:16.607 14:50:57 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:16.607 14:50:57 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.qMJIXLb7T1 00:37:16.608 14:50:57 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:37:16.608 14:50:57 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:37:16.608 14:50:57 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:37:16.608 14:50:57 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:37:16.608 14:50:57 keyring_file -- nvmf/common.sh@730 -- # key=112233445566778899aabbccddeeff00 00:37:16.608 14:50:57 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:37:16.608 14:50:57 keyring_file -- nvmf/common.sh@731 -- # python - 00:37:16.608 14:50:57 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.qMJIXLb7T1 00:37:16.608 14:50:57 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.qMJIXLb7T1 00:37:16.608 14:50:57 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.qMJIXLb7T1 
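The trace above prepares key0 and key1 by piping each raw hex key through `format_interchange_psk` (an inline `python -` snippet in nvmf/common.sh) before writing it to a temp file. A minimal self-contained sketch of that conversion, assuming the PSK interchange format from the NVMe/TCP transport specification (key bytes followed by their little-endian CRC32, base64-encoded, framed as `NVMeTLSkey-1:<digest>:<base64>:`); SPDK's exact inline snippet may differ in detail:

```python
import base64
import zlib

def format_interchange_psk(hex_key: str, digest: int = 0) -> str:
    """Sketch of the NVMe TLS PSK interchange framing.

    Assumption: per the NVMe/TCP transport spec, the configured key bytes
    are suffixed with their CRC32 (little-endian), base64-encoded, and
    wrapped as NVMeTLSkey-1:<hh>:<base64>:.
    """
    key = bytes.fromhex(hex_key)
    crc = zlib.crc32(key).to_bytes(4, "little")
    return "NVMeTLSkey-1:%02x:%s:" % (digest, base64.b64encode(key + crc).decode())

# Same key0 and digest (0) as in the trace above
print(format_interchange_psk("00112233445566778899aabbccddeeff", 0))
```

The resulting string is what lands in `/tmp/tmp.XXXXXXXXXX` and is later registered with `keyring_file_add_key`; the CRC suffix lets the consumer detect a corrupted or truncated key file.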
00:37:16.608 14:50:57 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:37:16.608 14:50:57 keyring_file -- keyring/file.sh@30 -- # tgtpid=3720085 00:37:16.608 14:50:57 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3720085 00:37:16.608 14:50:57 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 3720085 ']' 00:37:16.608 14:50:57 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:16.608 14:50:57 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:16.608 14:50:57 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:16.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:16.608 14:50:57 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:16.608 14:50:57 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:16.608 [2024-10-14 14:50:57.144602] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 
00:37:16.608 [2024-10-14 14:50:57.144666] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3720085 ] 00:37:16.608 [2024-10-14 14:50:57.209883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:16.608 [2024-10-14 14:50:57.252821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:16.870 14:50:57 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:16.870 14:50:57 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:37:16.870 14:50:57 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:37:16.870 14:50:57 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:16.870 14:50:57 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:16.870 [2024-10-14 14:50:57.451351] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:16.870 null0 00:37:16.870 [2024-10-14 14:50:57.483402] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:16.870 [2024-10-14 14:50:57.483794] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:16.870 14:50:57 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:16.870 14:50:57 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:16.870 14:50:57 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:37:16.870 14:50:57 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:16.870 14:50:57 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:37:16.870 14:50:57 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
00:37:16.870 14:50:57 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:37:16.870 14:50:57 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:16.870 14:50:57 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:16.870 14:50:57 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:16.870 14:50:57 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:16.870 [2024-10-14 14:50:57.515479] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:37:16.870 request: 00:37:16.870 { 00:37:16.870 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:37:16.870 "secure_channel": false, 00:37:16.870 "listen_address": { 00:37:16.870 "trtype": "tcp", 00:37:16.870 "traddr": "127.0.0.1", 00:37:16.870 "trsvcid": "4420" 00:37:16.870 }, 00:37:16.870 "method": "nvmf_subsystem_add_listener", 00:37:16.870 "req_id": 1 00:37:16.870 } 00:37:16.870 Got JSON-RPC error response 00:37:16.870 response: 00:37:16.870 { 00:37:16.870 "code": -32602, 00:37:16.870 "message": "Invalid parameters" 00:37:16.870 } 00:37:16.870 14:50:57 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:37:16.870 14:50:57 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:37:16.870 14:50:57 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:16.870 14:50:57 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:16.870 14:50:57 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:16.870 14:50:57 keyring_file -- keyring/file.sh@47 -- # bperfpid=3720197 00:37:16.870 14:50:57 keyring_file -- keyring/file.sh@49 -- # waitforlisten 3720197 /var/tmp/bperf.sock 00:37:16.870 14:50:57 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:37:16.870 14:50:57 
keyring_file -- common/autotest_common.sh@831 -- # '[' -z 3720197 ']' 00:37:16.870 14:50:57 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:16.870 14:50:57 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:16.870 14:50:57 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:16.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:16.870 14:50:57 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:16.870 14:50:57 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:16.870 [2024-10-14 14:50:57.572793] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:37:16.870 [2024-10-14 14:50:57.572840] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3720197 ] 00:37:17.131 [2024-10-14 14:50:57.650033] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:17.131 [2024-10-14 14:50:57.686046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:17.701 14:50:58 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:17.701 14:50:58 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:37:17.701 14:50:58 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.n95ezVyDba 00:37:17.701 14:50:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.n95ezVyDba 00:37:17.961 14:50:58 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.qMJIXLb7T1 00:37:17.961 14:50:58 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.qMJIXLb7T1 00:37:17.961 14:50:58 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:37:17.961 14:50:58 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:37:17.961 14:50:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:17.961 14:50:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:17.961 14:50:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:18.223 14:50:58 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.n95ezVyDba == \/\t\m\p\/\t\m\p\.\n\9\5\e\z\V\y\D\b\a ]] 00:37:18.223 14:50:58 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:37:18.223 14:50:58 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:37:18.223 14:50:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:18.223 14:50:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:18.223 14:50:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:18.483 14:50:59 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.qMJIXLb7T1 == \/\t\m\p\/\t\m\p\.\q\M\J\I\X\L\b\7\T\1 ]] 00:37:18.483 14:50:59 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:37:18.483 14:50:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:18.483 14:50:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:18.483 14:50:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:18.483 14:50:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:18.483 14:50:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:37:18.744 14:50:59 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:37:18.744 14:50:59 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:37:18.744 14:50:59 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:18.744 14:50:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:18.744 14:50:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:18.744 14:50:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:18.744 14:50:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:18.744 14:50:59 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:37:18.744 14:50:59 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:18.744 14:50:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:19.005 [2024-10-14 14:50:59.588341] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:19.005 nvme0n1 00:37:19.005 14:50:59 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:37:19.005 14:50:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:19.005 14:50:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:19.005 14:50:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:19.005 14:50:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:19.005 14:50:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == 
"key0")' 00:37:19.266 14:50:59 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:37:19.266 14:50:59 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:37:19.266 14:50:59 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:19.266 14:50:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:19.266 14:50:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:19.266 14:50:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:19.266 14:50:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:19.527 14:51:00 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:37:19.527 14:51:00 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:19.527 Running I/O for 1 seconds... 00:37:20.470 17049.00 IOPS, 66.60 MiB/s 00:37:20.470 Latency(us) 00:37:20.470 [2024-10-14T12:51:01.197Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:20.470 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:37:20.470 nvme0n1 : 1.01 17056.11 66.63 0.00 0.00 7475.87 5679.79 15073.28 00:37:20.470 [2024-10-14T12:51:01.197Z] =================================================================================================================== 00:37:20.470 [2024-10-14T12:51:01.197Z] Total : 17056.11 66.63 0.00 0.00 7475.87 5679.79 15073.28 00:37:20.470 { 00:37:20.470 "results": [ 00:37:20.470 { 00:37:20.470 "job": "nvme0n1", 00:37:20.470 "core_mask": "0x2", 00:37:20.470 "workload": "randrw", 00:37:20.470 "percentage": 50, 00:37:20.470 "status": "finished", 00:37:20.470 "queue_depth": 128, 00:37:20.470 "io_size": 4096, 00:37:20.470 "runtime": 1.007088, 00:37:20.470 "iops": 17056.106318415073, 00:37:20.470 "mibps": 66.62541530630888, 00:37:20.470 
"io_failed": 0, 00:37:20.470 "io_timeout": 0, 00:37:20.470 "avg_latency_us": 7475.870465545011, 00:37:20.470 "min_latency_us": 5679.786666666667, 00:37:20.470 "max_latency_us": 15073.28 00:37:20.470 } 00:37:20.470 ], 00:37:20.470 "core_count": 1 00:37:20.470 } 00:37:20.470 14:51:01 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:20.470 14:51:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:20.731 14:51:01 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:37:20.731 14:51:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:20.731 14:51:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:20.731 14:51:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:20.731 14:51:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:20.731 14:51:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:20.993 14:51:01 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:37:20.993 14:51:01 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:37:20.993 14:51:01 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:20.993 14:51:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:20.993 14:51:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:20.993 14:51:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:20.993 14:51:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:20.993 14:51:01 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:37:20.993 14:51:01 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:20.993 14:51:01 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:37:20.993 14:51:01 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:20.993 14:51:01 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:37:20.993 14:51:01 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:20.993 14:51:01 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:37:20.993 14:51:01 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:20.993 14:51:01 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:20.993 14:51:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:21.254 [2024-10-14 14:51:01.847445] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:21.254 [2024-10-14 14:51:01.848186] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2214a80 (107): Transport endpoint is not connected 00:37:21.254 [2024-10-14 14:51:01.849182] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2214a80 (9): Bad file descriptor 00:37:21.254 [2024-10-14 14:51:01.850183] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: 
[nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:21.254 [2024-10-14 14:51:01.850190] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:37:21.254 [2024-10-14 14:51:01.850196] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:37:21.254 [2024-10-14 14:51:01.850204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:37:21.254 request: 00:37:21.254 { 00:37:21.254 "name": "nvme0", 00:37:21.254 "trtype": "tcp", 00:37:21.254 "traddr": "127.0.0.1", 00:37:21.254 "adrfam": "ipv4", 00:37:21.254 "trsvcid": "4420", 00:37:21.254 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:21.254 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:21.254 "prchk_reftag": false, 00:37:21.254 "prchk_guard": false, 00:37:21.254 "hdgst": false, 00:37:21.254 "ddgst": false, 00:37:21.254 "psk": "key1", 00:37:21.254 "allow_unrecognized_csi": false, 00:37:21.254 "method": "bdev_nvme_attach_controller", 00:37:21.254 "req_id": 1 00:37:21.254 } 00:37:21.254 Got JSON-RPC error response 00:37:21.254 response: 00:37:21.254 { 00:37:21.254 "code": -5, 00:37:21.254 "message": "Input/output error" 00:37:21.254 } 00:37:21.254 14:51:01 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:37:21.254 14:51:01 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:21.254 14:51:01 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:21.254 14:51:01 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:21.254 14:51:01 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:37:21.254 14:51:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:21.254 14:51:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:21.254 14:51:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:21.254 14:51:01 
keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:21.254 14:51:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:21.515 14:51:02 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:37:21.515 14:51:02 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:37:21.515 14:51:02 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:21.515 14:51:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:21.515 14:51:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:21.515 14:51:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:21.515 14:51:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:21.515 14:51:02 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:37:21.515 14:51:02 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:37:21.515 14:51:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:21.776 14:51:02 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:37:21.776 14:51:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:37:22.036 14:51:02 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:37:22.036 14:51:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:22.036 14:51:02 keyring_file -- keyring/file.sh@78 -- # jq length 00:37:22.036 14:51:02 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:37:22.036 14:51:02 keyring_file -- 
keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.n95ezVyDba 00:37:22.036 14:51:02 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.n95ezVyDba 00:37:22.036 14:51:02 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:37:22.036 14:51:02 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.n95ezVyDba 00:37:22.036 14:51:02 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:37:22.036 14:51:02 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:22.036 14:51:02 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:37:22.036 14:51:02 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:22.036 14:51:02 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.n95ezVyDba 00:37:22.036 14:51:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.n95ezVyDba 00:37:22.296 [2024-10-14 14:51:02.882893] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.n95ezVyDba': 0100660 00:37:22.296 [2024-10-14 14:51:02.882911] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:37:22.296 request: 00:37:22.296 { 00:37:22.296 "name": "key0", 00:37:22.296 "path": "/tmp/tmp.n95ezVyDba", 00:37:22.296 "method": "keyring_file_add_key", 00:37:22.296 "req_id": 1 00:37:22.296 } 00:37:22.296 Got JSON-RPC error response 00:37:22.296 response: 00:37:22.296 { 00:37:22.296 "code": -1, 00:37:22.296 "message": "Operation not permitted" 00:37:22.296 } 00:37:22.296 14:51:02 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:37:22.296 14:51:02 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:22.296 14:51:02 keyring_file -- common/autotest_common.sh@672 
-- # [[ -n '' ]] 00:37:22.296 14:51:02 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:22.296 14:51:02 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.n95ezVyDba 00:37:22.296 14:51:02 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.n95ezVyDba 00:37:22.296 14:51:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.n95ezVyDba 00:37:22.557 14:51:03 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.n95ezVyDba 00:37:22.557 14:51:03 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:37:22.557 14:51:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:22.557 14:51:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:22.557 14:51:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:22.557 14:51:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:22.557 14:51:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:22.557 14:51:03 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:37:22.557 14:51:03 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:22.557 14:51:03 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:37:22.557 14:51:03 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:22.557 14:51:03 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:37:22.557 14:51:03 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:37:22.557 14:51:03 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:37:22.557 14:51:03 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:22.557 14:51:03 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:22.557 14:51:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:22.817 [2024-10-14 14:51:03.376146] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.n95ezVyDba': No such file or directory 00:37:22.817 [2024-10-14 14:51:03.376158] nvme_tcp.c:2609:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:37:22.817 [2024-10-14 14:51:03.376171] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:37:22.817 [2024-10-14 14:51:03.376176] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:37:22.817 [2024-10-14 14:51:03.376181] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:37:22.817 [2024-10-14 14:51:03.376186] bdev_nvme.c:6438:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:37:22.817 request: 00:37:22.817 { 00:37:22.817 "name": "nvme0", 00:37:22.817 "trtype": "tcp", 00:37:22.817 "traddr": "127.0.0.1", 00:37:22.817 "adrfam": "ipv4", 00:37:22.817 "trsvcid": "4420", 00:37:22.817 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:22.817 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:22.817 "prchk_reftag": 
false, 00:37:22.817 "prchk_guard": false, 00:37:22.817 "hdgst": false, 00:37:22.817 "ddgst": false, 00:37:22.817 "psk": "key0", 00:37:22.817 "allow_unrecognized_csi": false, 00:37:22.817 "method": "bdev_nvme_attach_controller", 00:37:22.817 "req_id": 1 00:37:22.817 } 00:37:22.817 Got JSON-RPC error response 00:37:22.817 response: 00:37:22.817 { 00:37:22.817 "code": -19, 00:37:22.817 "message": "No such device" 00:37:22.817 } 00:37:22.817 14:51:03 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:37:22.817 14:51:03 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:22.817 14:51:03 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:22.818 14:51:03 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:22.818 14:51:03 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:37:22.818 14:51:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:23.078 14:51:03 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:37:23.078 14:51:03 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:23.078 14:51:03 keyring_file -- keyring/common.sh@17 -- # name=key0 00:37:23.078 14:51:03 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:23.078 14:51:03 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:23.078 14:51:03 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:23.079 14:51:03 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.nZCL5nwWiD 00:37:23.079 14:51:03 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:23.079 14:51:03 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:23.079 14:51:03 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 
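The `prep_key` / `format_interchange_psk` helpers invoked here emit an NVMe/TCP TLS PSK in the interchange layout (`NVMeTLSkey-1:…`). Below is a sketch under the assumption that the payload is the raw key bytes followed by their little-endian CRC32, base64-encoded, with the digest indicator as two hex digits (`0` here, i.e. no retained-PSK hash); SPDK's actual `format_key` helper may differ in detail:

```python
import base64
import zlib

def format_interchange_psk(key_hex: str, digest: int = 0) -> str:
    """Assumed layout: NVMeTLSkey-1:<digest>:<base64(key || crc32(key))>:"""
    key = bytes.fromhex(key_hex)
    crc = zlib.crc32(key).to_bytes(4, "little")  # integrity trailer over the key bytes
    return "NVMeTLSkey-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode())

# Same hex key and digest the test passes to prep_key.
psk = format_interchange_psk("00112233445566778899aabbccddeeff", 0)
```

The resulting string is what gets written to the `mktemp` path, chmod'ed to 0600 (the earlier negative test shows the keyring rejects 0660), and registered with `keyring_file_add_key`.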
00:37:23.079 14:51:03 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:37:23.079 14:51:03 keyring_file -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:37:23.079 14:51:03 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:37:23.079 14:51:03 keyring_file -- nvmf/common.sh@731 -- # python - 00:37:23.079 14:51:03 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.nZCL5nwWiD 00:37:23.079 14:51:03 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.nZCL5nwWiD 00:37:23.079 14:51:03 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.nZCL5nwWiD 00:37:23.079 14:51:03 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.nZCL5nwWiD 00:37:23.079 14:51:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.nZCL5nwWiD 00:37:23.079 14:51:03 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:23.079 14:51:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:23.339 nvme0n1 00:37:23.339 14:51:04 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:37:23.339 14:51:04 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:23.339 14:51:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:23.339 14:51:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:23.339 14:51:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:23.339 14:51:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock keyring_get_keys 00:37:23.600 14:51:04 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:37:23.600 14:51:04 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:37:23.600 14:51:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:23.861 14:51:04 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:37:23.861 14:51:04 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:37:23.861 14:51:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:23.861 14:51:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:23.861 14:51:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:23.861 14:51:04 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:37:23.861 14:51:04 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:37:23.861 14:51:04 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:23.861 14:51:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:23.861 14:51:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:23.861 14:51:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:23.861 14:51:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:24.122 14:51:04 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:37:24.122 14:51:04 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:24.122 14:51:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:24.382 14:51:04 keyring_file -- 
keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:37:24.382 14:51:04 keyring_file -- keyring/file.sh@105 -- # jq length 00:37:24.382 14:51:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:24.382 14:51:05 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:37:24.382 14:51:05 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.nZCL5nwWiD 00:37:24.382 14:51:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.nZCL5nwWiD 00:37:24.643 14:51:05 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.qMJIXLb7T1 00:37:24.643 14:51:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.qMJIXLb7T1 00:37:24.904 14:51:05 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:24.904 14:51:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:25.165 nvme0n1 00:37:25.165 14:51:05 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:37:25.165 14:51:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:37:25.426 14:51:05 keyring_file -- keyring/file.sh@113 -- # config='{ 00:37:25.426 "subsystems": [ 00:37:25.426 { 00:37:25.426 "subsystem": "keyring", 00:37:25.426 "config": [ 00:37:25.426 { 00:37:25.426 "method": 
"keyring_file_add_key", 00:37:25.426 "params": { 00:37:25.426 "name": "key0", 00:37:25.426 "path": "/tmp/tmp.nZCL5nwWiD" 00:37:25.426 } 00:37:25.426 }, 00:37:25.426 { 00:37:25.426 "method": "keyring_file_add_key", 00:37:25.426 "params": { 00:37:25.426 "name": "key1", 00:37:25.426 "path": "/tmp/tmp.qMJIXLb7T1" 00:37:25.426 } 00:37:25.426 } 00:37:25.426 ] 00:37:25.426 }, 00:37:25.426 { 00:37:25.426 "subsystem": "iobuf", 00:37:25.426 "config": [ 00:37:25.426 { 00:37:25.426 "method": "iobuf_set_options", 00:37:25.426 "params": { 00:37:25.426 "small_pool_count": 8192, 00:37:25.426 "large_pool_count": 1024, 00:37:25.426 "small_bufsize": 8192, 00:37:25.426 "large_bufsize": 135168 00:37:25.426 } 00:37:25.426 } 00:37:25.426 ] 00:37:25.426 }, 00:37:25.426 { 00:37:25.426 "subsystem": "sock", 00:37:25.426 "config": [ 00:37:25.426 { 00:37:25.426 "method": "sock_set_default_impl", 00:37:25.426 "params": { 00:37:25.426 "impl_name": "posix" 00:37:25.426 } 00:37:25.426 }, 00:37:25.426 { 00:37:25.426 "method": "sock_impl_set_options", 00:37:25.426 "params": { 00:37:25.426 "impl_name": "ssl", 00:37:25.426 "recv_buf_size": 4096, 00:37:25.426 "send_buf_size": 4096, 00:37:25.426 "enable_recv_pipe": true, 00:37:25.426 "enable_quickack": false, 00:37:25.426 "enable_placement_id": 0, 00:37:25.426 "enable_zerocopy_send_server": true, 00:37:25.426 "enable_zerocopy_send_client": false, 00:37:25.426 "zerocopy_threshold": 0, 00:37:25.426 "tls_version": 0, 00:37:25.426 "enable_ktls": false 00:37:25.426 } 00:37:25.426 }, 00:37:25.426 { 00:37:25.426 "method": "sock_impl_set_options", 00:37:25.426 "params": { 00:37:25.426 "impl_name": "posix", 00:37:25.426 "recv_buf_size": 2097152, 00:37:25.426 "send_buf_size": 2097152, 00:37:25.426 "enable_recv_pipe": true, 00:37:25.426 "enable_quickack": false, 00:37:25.426 "enable_placement_id": 0, 00:37:25.426 "enable_zerocopy_send_server": true, 00:37:25.426 "enable_zerocopy_send_client": false, 00:37:25.426 "zerocopy_threshold": 0, 00:37:25.426 "tls_version": 
0, 00:37:25.426 "enable_ktls": false 00:37:25.426 } 00:37:25.426 } 00:37:25.426 ] 00:37:25.426 }, 00:37:25.426 { 00:37:25.426 "subsystem": "vmd", 00:37:25.426 "config": [] 00:37:25.426 }, 00:37:25.426 { 00:37:25.426 "subsystem": "accel", 00:37:25.426 "config": [ 00:37:25.426 { 00:37:25.426 "method": "accel_set_options", 00:37:25.426 "params": { 00:37:25.426 "small_cache_size": 128, 00:37:25.426 "large_cache_size": 16, 00:37:25.426 "task_count": 2048, 00:37:25.426 "sequence_count": 2048, 00:37:25.426 "buf_count": 2048 00:37:25.426 } 00:37:25.426 } 00:37:25.426 ] 00:37:25.426 }, 00:37:25.426 { 00:37:25.426 "subsystem": "bdev", 00:37:25.426 "config": [ 00:37:25.426 { 00:37:25.426 "method": "bdev_set_options", 00:37:25.426 "params": { 00:37:25.426 "bdev_io_pool_size": 65535, 00:37:25.426 "bdev_io_cache_size": 256, 00:37:25.426 "bdev_auto_examine": true, 00:37:25.426 "iobuf_small_cache_size": 128, 00:37:25.426 "iobuf_large_cache_size": 16 00:37:25.426 } 00:37:25.426 }, 00:37:25.426 { 00:37:25.426 "method": "bdev_raid_set_options", 00:37:25.426 "params": { 00:37:25.426 "process_window_size_kb": 1024, 00:37:25.426 "process_max_bandwidth_mb_sec": 0 00:37:25.426 } 00:37:25.426 }, 00:37:25.426 { 00:37:25.426 "method": "bdev_iscsi_set_options", 00:37:25.426 "params": { 00:37:25.426 "timeout_sec": 30 00:37:25.427 } 00:37:25.427 }, 00:37:25.427 { 00:37:25.427 "method": "bdev_nvme_set_options", 00:37:25.427 "params": { 00:37:25.427 "action_on_timeout": "none", 00:37:25.427 "timeout_us": 0, 00:37:25.427 "timeout_admin_us": 0, 00:37:25.427 "keep_alive_timeout_ms": 10000, 00:37:25.427 "arbitration_burst": 0, 00:37:25.427 "low_priority_weight": 0, 00:37:25.427 "medium_priority_weight": 0, 00:37:25.427 "high_priority_weight": 0, 00:37:25.427 "nvme_adminq_poll_period_us": 10000, 00:37:25.427 "nvme_ioq_poll_period_us": 0, 00:37:25.427 "io_queue_requests": 512, 00:37:25.427 "delay_cmd_submit": true, 00:37:25.427 "transport_retry_count": 4, 00:37:25.427 "bdev_retry_count": 3, 
00:37:25.427 "transport_ack_timeout": 0, 00:37:25.427 "ctrlr_loss_timeout_sec": 0, 00:37:25.427 "reconnect_delay_sec": 0, 00:37:25.427 "fast_io_fail_timeout_sec": 0, 00:37:25.427 "disable_auto_failback": false, 00:37:25.427 "generate_uuids": false, 00:37:25.427 "transport_tos": 0, 00:37:25.427 "nvme_error_stat": false, 00:37:25.427 "rdma_srq_size": 0, 00:37:25.427 "io_path_stat": false, 00:37:25.427 "allow_accel_sequence": false, 00:37:25.427 "rdma_max_cq_size": 0, 00:37:25.427 "rdma_cm_event_timeout_ms": 0, 00:37:25.427 "dhchap_digests": [ 00:37:25.427 "sha256", 00:37:25.427 "sha384", 00:37:25.427 "sha512" 00:37:25.427 ], 00:37:25.427 "dhchap_dhgroups": [ 00:37:25.427 "null", 00:37:25.427 "ffdhe2048", 00:37:25.427 "ffdhe3072", 00:37:25.427 "ffdhe4096", 00:37:25.427 "ffdhe6144", 00:37:25.427 "ffdhe8192" 00:37:25.427 ] 00:37:25.427 } 00:37:25.427 }, 00:37:25.427 { 00:37:25.427 "method": "bdev_nvme_attach_controller", 00:37:25.427 "params": { 00:37:25.427 "name": "nvme0", 00:37:25.427 "trtype": "TCP", 00:37:25.427 "adrfam": "IPv4", 00:37:25.427 "traddr": "127.0.0.1", 00:37:25.427 "trsvcid": "4420", 00:37:25.427 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:25.427 "prchk_reftag": false, 00:37:25.427 "prchk_guard": false, 00:37:25.427 "ctrlr_loss_timeout_sec": 0, 00:37:25.427 "reconnect_delay_sec": 0, 00:37:25.427 "fast_io_fail_timeout_sec": 0, 00:37:25.427 "psk": "key0", 00:37:25.427 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:25.427 "hdgst": false, 00:37:25.427 "ddgst": false, 00:37:25.427 "multipath": "multipath" 00:37:25.427 } 00:37:25.427 }, 00:37:25.427 { 00:37:25.427 "method": "bdev_nvme_set_hotplug", 00:37:25.427 "params": { 00:37:25.427 "period_us": 100000, 00:37:25.427 "enable": false 00:37:25.427 } 00:37:25.427 }, 00:37:25.427 { 00:37:25.427 "method": "bdev_wait_for_examine" 00:37:25.427 } 00:37:25.427 ] 00:37:25.427 }, 00:37:25.427 { 00:37:25.427 "subsystem": "nbd", 00:37:25.427 "config": [] 00:37:25.427 } 00:37:25.427 ] 00:37:25.427 }' 00:37:25.427 
14:51:05 keyring_file -- keyring/file.sh@115 -- # killprocess 3720197 00:37:25.427 14:51:05 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 3720197 ']' 00:37:25.427 14:51:05 keyring_file -- common/autotest_common.sh@954 -- # kill -0 3720197 00:37:25.427 14:51:05 keyring_file -- common/autotest_common.sh@955 -- # uname 00:37:25.427 14:51:05 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:25.427 14:51:05 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3720197 00:37:25.427 14:51:06 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:25.427 14:51:06 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:25.427 14:51:06 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3720197' 00:37:25.427 killing process with pid 3720197 00:37:25.427 14:51:06 keyring_file -- common/autotest_common.sh@969 -- # kill 3720197 00:37:25.427 Received shutdown signal, test time was about 1.000000 seconds 00:37:25.427 00:37:25.427 Latency(us) 00:37:25.427 [2024-10-14T12:51:06.154Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:25.427 [2024-10-14T12:51:06.154Z] =================================================================================================================== 00:37:25.427 [2024-10-14T12:51:06.154Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:25.427 14:51:06 keyring_file -- common/autotest_common.sh@974 -- # wait 3720197 00:37:25.427 14:51:06 keyring_file -- keyring/file.sh@118 -- # bperfpid=3721970 00:37:25.427 14:51:06 keyring_file -- keyring/file.sh@120 -- # waitforlisten 3721970 /var/tmp/bperf.sock 00:37:25.427 14:51:06 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 3721970 ']' 00:37:25.427 14:51:06 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:25.427 14:51:06 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 
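`waitforlisten` blocks until the relaunched bdevperf's JSON-RPC server accepts connections on `/var/tmp/bperf.sock`, giving up after `max_retries` attempts. A minimal sketch of that polling loop; the socket path, retry pacing, and the stand-in server below are illustrative, not SPDK's exact implementation:

```python
import os
import socket
import tempfile
import threading
import time

def wait_for_listen(path: str, max_retries: int = 100, delay: float = 0.1) -> bool:
    """Retry connecting to a UNIX domain socket until the RPC server is up."""
    for _ in range(max_retries):
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(path)
            return True
        except OSError:
            time.sleep(delay)  # not listening yet (or socket file absent); retry
        finally:
            s.close()
    return False

# Illustrative listener standing in for bdevperf's RPC socket, brought up late
# so the loop actually has to retry before it succeeds.
sock_path = os.path.join(tempfile.mkdtemp(), "bperf.sock")
server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
threading.Timer(0.2, lambda: (server.bind(sock_path), server.listen(1))).start()
ready = wait_for_listen(sock_path)
```

Only once this wait returns does the test start issuing `keyring_file_add_key` / `bdev_nvme_attach_controller` calls through `rpc.py -s /var/tmp/bperf.sock`.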
00:37:25.427 14:51:06 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:37:25.427 14:51:06 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:25.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:25.427 14:51:06 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:25.427 14:51:06 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:25.427 14:51:06 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:37:25.427 "subsystems": [ 00:37:25.427 { 00:37:25.427 "subsystem": "keyring", 00:37:25.427 "config": [ 00:37:25.427 { 00:37:25.427 "method": "keyring_file_add_key", 00:37:25.427 "params": { 00:37:25.427 "name": "key0", 00:37:25.427 "path": "/tmp/tmp.nZCL5nwWiD" 00:37:25.427 } 00:37:25.427 }, 00:37:25.427 { 00:37:25.427 "method": "keyring_file_add_key", 00:37:25.427 "params": { 00:37:25.427 "name": "key1", 00:37:25.427 "path": "/tmp/tmp.qMJIXLb7T1" 00:37:25.427 } 00:37:25.427 } 00:37:25.427 ] 00:37:25.428 }, 00:37:25.428 { 00:37:25.428 "subsystem": "iobuf", 00:37:25.428 "config": [ 00:37:25.428 { 00:37:25.428 "method": "iobuf_set_options", 00:37:25.428 "params": { 00:37:25.428 "small_pool_count": 8192, 00:37:25.428 "large_pool_count": 1024, 00:37:25.428 "small_bufsize": 8192, 00:37:25.428 "large_bufsize": 135168 00:37:25.428 } 00:37:25.428 } 00:37:25.428 ] 00:37:25.428 }, 00:37:25.428 { 00:37:25.428 "subsystem": "sock", 00:37:25.428 "config": [ 00:37:25.428 { 00:37:25.428 "method": "sock_set_default_impl", 00:37:25.428 "params": { 00:37:25.428 "impl_name": "posix" 00:37:25.428 } 00:37:25.428 }, 00:37:25.428 { 00:37:25.428 "method": "sock_impl_set_options", 00:37:25.428 "params": { 00:37:25.428 "impl_name": "ssl", 00:37:25.428 "recv_buf_size": 4096, 
00:37:25.428 "send_buf_size": 4096, 00:37:25.428 "enable_recv_pipe": true, 00:37:25.428 "enable_quickack": false, 00:37:25.428 "enable_placement_id": 0, 00:37:25.428 "enable_zerocopy_send_server": true, 00:37:25.428 "enable_zerocopy_send_client": false, 00:37:25.428 "zerocopy_threshold": 0, 00:37:25.428 "tls_version": 0, 00:37:25.428 "enable_ktls": false 00:37:25.428 } 00:37:25.428 }, 00:37:25.428 { 00:37:25.428 "method": "sock_impl_set_options", 00:37:25.428 "params": { 00:37:25.428 "impl_name": "posix", 00:37:25.428 "recv_buf_size": 2097152, 00:37:25.428 "send_buf_size": 2097152, 00:37:25.428 "enable_recv_pipe": true, 00:37:25.428 "enable_quickack": false, 00:37:25.428 "enable_placement_id": 0, 00:37:25.428 "enable_zerocopy_send_server": true, 00:37:25.428 "enable_zerocopy_send_client": false, 00:37:25.428 "zerocopy_threshold": 0, 00:37:25.428 "tls_version": 0, 00:37:25.428 "enable_ktls": false 00:37:25.428 } 00:37:25.428 } 00:37:25.428 ] 00:37:25.428 }, 00:37:25.428 { 00:37:25.428 "subsystem": "vmd", 00:37:25.428 "config": [] 00:37:25.428 }, 00:37:25.428 { 00:37:25.428 "subsystem": "accel", 00:37:25.428 "config": [ 00:37:25.428 { 00:37:25.428 "method": "accel_set_options", 00:37:25.428 "params": { 00:37:25.428 "small_cache_size": 128, 00:37:25.428 "large_cache_size": 16, 00:37:25.428 "task_count": 2048, 00:37:25.428 "sequence_count": 2048, 00:37:25.428 "buf_count": 2048 00:37:25.428 } 00:37:25.428 } 00:37:25.428 ] 00:37:25.428 }, 00:37:25.428 { 00:37:25.428 "subsystem": "bdev", 00:37:25.428 "config": [ 00:37:25.428 { 00:37:25.428 "method": "bdev_set_options", 00:37:25.428 "params": { 00:37:25.428 "bdev_io_pool_size": 65535, 00:37:25.428 "bdev_io_cache_size": 256, 00:37:25.428 "bdev_auto_examine": true, 00:37:25.428 "iobuf_small_cache_size": 128, 00:37:25.428 "iobuf_large_cache_size": 16 00:37:25.428 } 00:37:25.428 }, 00:37:25.428 { 00:37:25.428 "method": "bdev_raid_set_options", 00:37:25.428 "params": { 00:37:25.428 "process_window_size_kb": 1024, 00:37:25.428 
"process_max_bandwidth_mb_sec": 0 00:37:25.428 } 00:37:25.428 }, 00:37:25.428 { 00:37:25.428 "method": "bdev_iscsi_set_options", 00:37:25.428 "params": { 00:37:25.428 "timeout_sec": 30 00:37:25.428 } 00:37:25.428 }, 00:37:25.428 { 00:37:25.428 "method": "bdev_nvme_set_options", 00:37:25.428 "params": { 00:37:25.428 "action_on_timeout": "none", 00:37:25.428 "timeout_us": 0, 00:37:25.428 "timeout_admin_us": 0, 00:37:25.428 "keep_alive_timeout_ms": 10000, 00:37:25.428 "arbitration_burst": 0, 00:37:25.428 "low_priority_weight": 0, 00:37:25.428 "medium_priority_weight": 0, 00:37:25.428 "high_priority_weight": 0, 00:37:25.428 "nvme_adminq_poll_period_us": 10000, 00:37:25.428 "nvme_ioq_poll_period_us": 0, 00:37:25.428 "io_queue_requests": 512, 00:37:25.428 "delay_cmd_submit": true, 00:37:25.428 "transport_retry_count": 4, 00:37:25.428 "bdev_retry_count": 3, 00:37:25.428 "transport_ack_timeout": 0, 00:37:25.428 "ctrlr_loss_timeout_sec": 0, 00:37:25.428 "reconnect_delay_sec": 0, 00:37:25.428 "fast_io_fail_timeout_sec": 0, 00:37:25.428 "disable_auto_failback": false, 00:37:25.428 "generate_uuids": false, 00:37:25.428 "transport_tos": 0, 00:37:25.428 "nvme_error_stat": false, 00:37:25.428 "rdma_srq_size": 0, 00:37:25.428 "io_path_stat": false, 00:37:25.428 "allow_accel_sequence": false, 00:37:25.428 "rdma_max_cq_size": 0, 00:37:25.428 "rdma_cm_event_timeout_ms": 0, 00:37:25.428 "dhchap_digests": [ 00:37:25.428 "sha256", 00:37:25.428 "sha384", 00:37:25.428 "sha512" 00:37:25.428 ], 00:37:25.428 "dhchap_dhgroups": [ 00:37:25.428 "null", 00:37:25.428 "ffdhe2048", 00:37:25.428 "ffdhe3072", 00:37:25.428 "ffdhe4096", 00:37:25.428 "ffdhe6144", 00:37:25.428 "ffdhe8192" 00:37:25.428 ] 00:37:25.428 } 00:37:25.428 }, 00:37:25.428 { 00:37:25.428 "method": "bdev_nvme_attach_controller", 00:37:25.428 "params": { 00:37:25.428 "name": "nvme0", 00:37:25.428 "trtype": "TCP", 00:37:25.428 "adrfam": "IPv4", 00:37:25.428 "traddr": "127.0.0.1", 00:37:25.428 "trsvcid": "4420", 00:37:25.428 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:37:25.428 "prchk_reftag": false, 00:37:25.428 "prchk_guard": false, 00:37:25.428 "ctrlr_loss_timeout_sec": 0, 00:37:25.428 "reconnect_delay_sec": 0, 00:37:25.428 "fast_io_fail_timeout_sec": 0, 00:37:25.428 "psk": "key0", 00:37:25.428 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:25.428 "hdgst": false, 00:37:25.428 "ddgst": false, 00:37:25.428 "multipath": "multipath" 00:37:25.428 } 00:37:25.428 }, 00:37:25.428 { 00:37:25.428 "method": "bdev_nvme_set_hotplug", 00:37:25.428 "params": { 00:37:25.428 "period_us": 100000, 00:37:25.428 "enable": false 00:37:25.428 } 00:37:25.428 }, 00:37:25.428 { 00:37:25.428 "method": "bdev_wait_for_examine" 00:37:25.428 } 00:37:25.428 ] 00:37:25.428 }, 00:37:25.428 { 00:37:25.428 "subsystem": "nbd", 00:37:25.428 "config": [] 00:37:25.428 } 00:37:25.428 ] 00:37:25.428 }' 00:37:25.428 [2024-10-14 14:51:06.148030] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 00:37:25.428 [2024-10-14 14:51:06.148092] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3721970 ] 00:37:25.689 [2024-10-14 14:51:06.224409] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:25.689 [2024-10-14 14:51:06.253514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:25.689 [2024-10-14 14:51:06.396790] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:26.261 14:51:06 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:26.261 14:51:06 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:37:26.261 14:51:06 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:37:26.261 14:51:06 keyring_file -- keyring/file.sh@121 -- # jq length 00:37:26.261 14:51:06 keyring_file -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:26.522 14:51:07 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:37:26.522 14:51:07 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:37:26.522 14:51:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:26.522 14:51:07 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:26.522 14:51:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:26.522 14:51:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:26.522 14:51:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:26.783 14:51:07 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:37:26.783 14:51:07 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:37:26.783 14:51:07 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:26.783 14:51:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:26.783 14:51:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:26.783 14:51:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:26.783 14:51:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:26.783 14:51:07 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:37:26.783 14:51:07 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:37:26.783 14:51:07 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:37:26.783 14:51:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:37:27.045 14:51:07 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:37:27.045 
14:51:07 keyring_file -- keyring/file.sh@1 -- # cleanup 00:37:27.045 14:51:07 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.nZCL5nwWiD /tmp/tmp.qMJIXLb7T1 00:37:27.045 14:51:07 keyring_file -- keyring/file.sh@20 -- # killprocess 3721970 00:37:27.045 14:51:07 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 3721970 ']' 00:37:27.045 14:51:07 keyring_file -- common/autotest_common.sh@954 -- # kill -0 3721970 00:37:27.045 14:51:07 keyring_file -- common/autotest_common.sh@955 -- # uname 00:37:27.045 14:51:07 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:27.045 14:51:07 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3721970 00:37:27.045 14:51:07 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:27.045 14:51:07 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:27.045 14:51:07 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3721970' 00:37:27.045 killing process with pid 3721970 00:37:27.045 14:51:07 keyring_file -- common/autotest_common.sh@969 -- # kill 3721970 00:37:27.045 Received shutdown signal, test time was about 1.000000 seconds 00:37:27.045 00:37:27.045 Latency(us) 00:37:27.045 [2024-10-14T12:51:07.772Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:27.045 [2024-10-14T12:51:07.772Z] =================================================================================================================== 00:37:27.045 [2024-10-14T12:51:07.772Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:37:27.045 14:51:07 keyring_file -- common/autotest_common.sh@974 -- # wait 3721970 00:37:27.306 14:51:07 keyring_file -- keyring/file.sh@21 -- # killprocess 3720085 00:37:27.306 14:51:07 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 3720085 ']' 00:37:27.306 14:51:07 keyring_file -- common/autotest_common.sh@954 -- # kill -0 3720085 00:37:27.306 14:51:07 
keyring_file -- common/autotest_common.sh@955 -- # uname 00:37:27.306 14:51:07 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:27.306 14:51:07 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3720085 00:37:27.306 14:51:07 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:27.306 14:51:07 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:27.306 14:51:07 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3720085' 00:37:27.306 killing process with pid 3720085 00:37:27.306 14:51:07 keyring_file -- common/autotest_common.sh@969 -- # kill 3720085 00:37:27.306 14:51:07 keyring_file -- common/autotest_common.sh@974 -- # wait 3720085 00:37:27.567 00:37:27.567 real 0m11.290s 00:37:27.567 user 0m27.855s 00:37:27.567 sys 0m2.524s 00:37:27.567 14:51:08 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:27.567 14:51:08 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:27.567 ************************************ 00:37:27.567 END TEST keyring_file 00:37:27.567 ************************************ 00:37:27.567 14:51:08 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:37:27.567 14:51:08 -- spdk/autotest.sh@290 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:37:27.567 14:51:08 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:37:27.567 14:51:08 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:27.567 14:51:08 -- common/autotest_common.sh@10 -- # set +x 00:37:27.567 ************************************ 00:37:27.567 START TEST keyring_linux 00:37:27.567 ************************************ 00:37:27.567 14:51:08 keyring_linux -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:37:27.567 Joined session keyring: 56549582 00:37:27.567 * Looking for test storage... 00:37:27.567 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:37:27.567 14:51:08 keyring_linux -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:37:27.567 14:51:08 keyring_linux -- common/autotest_common.sh@1691 -- # lcov --version 00:37:27.567 14:51:08 keyring_linux -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:37:27.830 14:51:08 keyring_linux -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:37:27.830 14:51:08 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:27.830 14:51:08 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:27.830 14:51:08 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:27.830 14:51:08 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:37:27.830 14:51:08 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:37:27.830 14:51:08 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:37:27.830 14:51:08 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:37:27.830 14:51:08 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:37:27.830 14:51:08 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:37:27.830 14:51:08 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:37:27.830 14:51:08 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:27.830 14:51:08 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:37:27.830 14:51:08 keyring_linux -- scripts/common.sh@345 -- # : 1 00:37:27.830 14:51:08 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:27.830 14:51:08 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:27.830 14:51:08 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:37:27.830 14:51:08 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:37:27.830 14:51:08 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:27.830 14:51:08 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:37:27.830 14:51:08 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:37:27.830 14:51:08 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:37:27.830 14:51:08 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:37:27.830 14:51:08 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:27.830 14:51:08 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:37:27.830 14:51:08 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:37:27.830 14:51:08 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:27.830 14:51:08 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:27.830 14:51:08 keyring_linux -- scripts/common.sh@368 -- # return 0 00:37:27.830 14:51:08 keyring_linux -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:27.830 14:51:08 keyring_linux -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:37:27.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:27.830 --rc genhtml_branch_coverage=1 00:37:27.830 --rc genhtml_function_coverage=1 00:37:27.830 --rc genhtml_legend=1 00:37:27.830 --rc geninfo_all_blocks=1 00:37:27.830 --rc geninfo_unexecuted_blocks=1 00:37:27.830 00:37:27.830 ' 00:37:27.830 14:51:08 keyring_linux -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:37:27.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:27.830 --rc genhtml_branch_coverage=1 00:37:27.830 --rc genhtml_function_coverage=1 00:37:27.830 --rc genhtml_legend=1 00:37:27.830 --rc geninfo_all_blocks=1 00:37:27.830 --rc geninfo_unexecuted_blocks=1 00:37:27.830 00:37:27.830 ' 
00:37:27.830 14:51:08 keyring_linux -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:37:27.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:27.830 --rc genhtml_branch_coverage=1 00:37:27.830 --rc genhtml_function_coverage=1 00:37:27.830 --rc genhtml_legend=1 00:37:27.830 --rc geninfo_all_blocks=1 00:37:27.830 --rc geninfo_unexecuted_blocks=1 00:37:27.830 00:37:27.830 ' 00:37:27.830 14:51:08 keyring_linux -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:37:27.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:27.830 --rc genhtml_branch_coverage=1 00:37:27.830 --rc genhtml_function_coverage=1 00:37:27.830 --rc genhtml_legend=1 00:37:27.830 --rc geninfo_all_blocks=1 00:37:27.830 --rc geninfo_unexecuted_blocks=1 00:37:27.830 00:37:27.830 ' 00:37:27.830 14:51:08 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:37:27.830 14:51:08 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:27.830 14:51:08 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:37:27.830 14:51:08 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:27.830 14:51:08 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:27.830 14:51:08 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:27.830 14:51:08 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:27.830 14:51:08 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:27.830 14:51:08 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:27.830 14:51:08 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:27.830 14:51:08 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:27.830 14:51:08 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:27.830 14:51:08 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:37:27.830 14:51:08 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:27.830 14:51:08 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:27.830 14:51:08 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:27.830 14:51:08 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:27.830 14:51:08 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:27.830 14:51:08 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:27.830 14:51:08 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:27.830 14:51:08 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:37:27.830 14:51:08 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:27.830 14:51:08 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:27.831 14:51:08 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:27.831 14:51:08 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:27.831 14:51:08 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:27.831 14:51:08 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:27.831 14:51:08 keyring_linux -- paths/export.sh@5 -- # export PATH 00:37:27.831 14:51:08 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:27.831 14:51:08 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:37:27.831 14:51:08 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:27.831 14:51:08 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:27.831 14:51:08 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:27.831 14:51:08 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:27.831 14:51:08 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:27.831 14:51:08 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:37:27.831 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:27.831 14:51:08 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:27.831 14:51:08 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:27.831 14:51:08 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:27.831 14:51:08 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:37:27.831 14:51:08 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:37:27.831 14:51:08 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:37:27.831 14:51:08 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:37:27.831 14:51:08 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:37:27.831 14:51:08 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:37:27.831 14:51:08 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:37:27.831 14:51:08 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:37:27.831 14:51:08 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:37:27.831 14:51:08 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:27.831 14:51:08 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:37:27.831 14:51:08 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:37:27.831 14:51:08 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:27.831 14:51:08 keyring_linux -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:27.831 14:51:08 keyring_linux -- nvmf/common.sh@728 -- # local prefix key digest 00:37:27.831 14:51:08 keyring_linux -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:37:27.831 14:51:08 keyring_linux -- nvmf/common.sh@730 -- # 
key=00112233445566778899aabbccddeeff 00:37:27.831 14:51:08 keyring_linux -- nvmf/common.sh@730 -- # digest=0 00:37:27.831 14:51:08 keyring_linux -- nvmf/common.sh@731 -- # python - 00:37:27.831 14:51:08 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:37:27.831 14:51:08 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:37:27.831 /tmp/:spdk-test:key0 00:37:27.831 14:51:08 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:37:27.831 14:51:08 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:37:27.831 14:51:08 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:37:27.831 14:51:08 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:37:27.831 14:51:08 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:37:27.831 14:51:08 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:37:27.831 14:51:08 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:37:27.831 14:51:08 keyring_linux -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:37:27.831 14:51:08 keyring_linux -- nvmf/common.sh@728 -- # local prefix key digest 00:37:27.831 14:51:08 keyring_linux -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:37:27.831 14:51:08 keyring_linux -- nvmf/common.sh@730 -- # key=112233445566778899aabbccddeeff00 00:37:27.831 14:51:08 keyring_linux -- nvmf/common.sh@730 -- # digest=0 00:37:27.831 14:51:08 keyring_linux -- nvmf/common.sh@731 -- # python - 00:37:27.831 14:51:08 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:37:27.831 14:51:08 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:37:27.831 /tmp/:spdk-test:key1 00:37:27.831 14:51:08 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:37:27.831 
14:51:08 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3722634 00:37:27.831 14:51:08 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 3722634 00:37:27.831 14:51:08 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 3722634 ']' 00:37:27.831 14:51:08 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:27.831 14:51:08 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:27.831 14:51:08 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:27.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:27.831 14:51:08 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:27.831 14:51:08 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:27.831 [2024-10-14 14:51:08.501630] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 
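An aside on the `NVMeTLSkey-1:00:…:` strings echoed for `/tmp/:spdk-test:key0` and `/tmp/:spdk-test:key1` above: the base64 payload can be unpacked to recover the configured hex-string key. This is a minimal sketch against the exact string from this log; the interpretation of the trailing 4 bytes as an integrity checksum is an assumption inferred from the payload length, not something this log confirms.

```python
import base64

# PSK interchange string exactly as echoed for /tmp/:spdk-test:key0 in this log.
psk = "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:"

# Split "NVMeTLSkey-1:<digest>:<base64 payload>:" into its fields.
prefix, digest, blob, _ = psk.split(":")
payload = base64.b64decode(blob)

# The 36-byte payload starts with the 32-character key configured by
# prep_key; the remaining 4 bytes look like a checksum (an assumption,
# not verified here).
key_bytes, tail = payload[:-4], payload[-4:]
```

Decoding the second string from the log (`…key1`) the same way yields `112233445566778899aabbccddeeff00`, matching the `key1=` assignment earlier in the test.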
00:37:27.831 [2024-10-14 14:51:08.501673] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3722634 ] 00:37:27.831 [2024-10-14 14:51:08.554591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:28.093 [2024-10-14 14:51:08.590823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:28.093 14:51:08 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:28.093 14:51:08 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:37:28.093 14:51:08 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:37:28.093 14:51:08 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:28.093 14:51:08 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:28.093 [2024-10-14 14:51:08.793565] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:28.093 null0 00:37:28.354 [2024-10-14 14:51:08.825619] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:28.354 [2024-10-14 14:51:08.826003] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:28.354 14:51:08 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:28.354 14:51:08 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:37:28.354 310758096 00:37:28.354 14:51:08 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:37:28.354 157330285 00:37:28.354 14:51:08 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:37:28.354 
14:51:08 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3722752 00:37:28.354 14:51:08 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3722752 /var/tmp/bperf.sock 00:37:28.354 14:51:08 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 3722752 ']' 00:37:28.354 14:51:08 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:28.354 14:51:08 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:28.354 14:51:08 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:28.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:28.354 14:51:08 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:28.354 14:51:08 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:28.354 [2024-10-14 14:51:08.903630] Starting SPDK v25.01-pre git sha1 118c273ab / DPDK 24.03.0 initialization... 
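For reading bdevperf summaries like the nvme0n1 randread result later in this log: MiB/s follows directly from IOPS and the configured 4 KiB I/O size, and with a fixed queue depth of 128 Little's law gives a rough average latency. A quick check using the reported figures (the reported 7563 µs average is close to, but not exactly, this estimate):

```python
# Figures reported by bdevperf for the nvme0n1 randread run in this log
# (bdevperf -q 128 -o 4k -w randread -t 1).
iops = 16850.62          # "iops" from the results JSON
io_size = 4096           # -o 4k
queue_depth = 128        # -q 128

# Throughput in MiB/s is IOPS times I/O size.
mibps = iops * io_size / (1024 * 1024)

# Little's law: average latency ~= outstanding I/Os / IOPS.
approx_latency_us = queue_depth / iops * 1_000_000
```

Both values line up with the `mibps` and `avg_latency_us` fields in the results JSON to within a fraction of a percent.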
00:37:28.354 [2024-10-14 14:51:08.903680] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3722752 ] 00:37:28.354 [2024-10-14 14:51:08.977555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:28.354 [2024-10-14 14:51:09.007346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:28.926 14:51:09 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:28.926 14:51:09 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:37:28.926 14:51:09 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:37:28.926 14:51:09 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:37:29.187 14:51:09 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:37:29.187 14:51:09 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:29.449 14:51:10 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:37:29.449 14:51:10 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:37:29.449 [2024-10-14 14:51:10.172193] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:29.710 nvme0n1 00:37:29.710 14:51:10 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:37:29.710 14:51:10 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:37:29.710 14:51:10 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:37:29.710 14:51:10 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:37:29.710 14:51:10 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:37:29.710 14:51:10 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:29.972 14:51:10 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:37:29.972 14:51:10 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:37:29.972 14:51:10 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:37:29.972 14:51:10 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:37:29.972 14:51:10 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:29.972 14:51:10 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:37:29.972 14:51:10 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:29.972 14:51:10 keyring_linux -- keyring/linux.sh@25 -- # sn=310758096 00:37:29.972 14:51:10 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:37:29.972 14:51:10 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:37:29.972 14:51:10 keyring_linux -- keyring/linux.sh@26 -- # [[ 310758096 == \3\1\0\7\5\8\0\9\6 ]] 00:37:29.972 14:51:10 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 310758096 00:37:29.972 14:51:10 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:37:29.972 14:51:10 keyring_linux 
-- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:30.233 Running I/O for 1 seconds... 00:37:31.177 16850.00 IOPS, 65.82 MiB/s 00:37:31.177 Latency(us) 00:37:31.177 [2024-10-14T12:51:11.904Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:31.177 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:37:31.177 nvme0n1 : 1.01 16850.62 65.82 0.00 0.00 7563.17 1870.51 8792.75 00:37:31.177 [2024-10-14T12:51:11.904Z] =================================================================================================================== 00:37:31.177 [2024-10-14T12:51:11.904Z] Total : 16850.62 65.82 0.00 0.00 7563.17 1870.51 8792.75 00:37:31.177 { 00:37:31.177 "results": [ 00:37:31.177 { 00:37:31.177 "job": "nvme0n1", 00:37:31.177 "core_mask": "0x2", 00:37:31.177 "workload": "randread", 00:37:31.177 "status": "finished", 00:37:31.177 "queue_depth": 128, 00:37:31.177 "io_size": 4096, 00:37:31.177 "runtime": 1.007619, 00:37:31.177 "iops": 16850.61516307255, 00:37:31.177 "mibps": 65.82271548075215, 00:37:31.177 "io_failed": 0, 00:37:31.177 "io_timeout": 0, 00:37:31.177 "avg_latency_us": 7563.166659991754, 00:37:31.177 "min_latency_us": 1870.5066666666667, 00:37:31.177 "max_latency_us": 8792.746666666666 00:37:31.177 } 00:37:31.177 ], 00:37:31.177 "core_count": 1 00:37:31.177 } 00:37:31.177 14:51:11 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:31.177 14:51:11 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:31.177 14:51:11 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:37:31.177 14:51:11 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:37:31.177 14:51:11 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:37:31.177 14:51:11 
keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:37:31.177 14:51:11 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:37:31.177 14:51:11 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:31.438 14:51:12 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:37:31.438 14:51:12 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:37:31.438 14:51:12 keyring_linux -- keyring/linux.sh@23 -- # return 00:37:31.438 14:51:12 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:31.438 14:51:12 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:37:31.438 14:51:12 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:31.438 14:51:12 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:37:31.438 14:51:12 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:31.438 14:51:12 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:37:31.438 14:51:12 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:31.438 14:51:12 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:31.438 14:51:12 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 
-q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:31.700 [2024-10-14 14:51:12.225891] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:31.700 [2024-10-14 14:51:12.226061] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb03830 (107): Transport endpoint is not connected 00:37:31.700 [2024-10-14 14:51:12.227056] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb03830 (9): Bad file descriptor 00:37:31.700 [2024-10-14 14:51:12.228058] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:31.700 [2024-10-14 14:51:12.228068] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:37:31.700 [2024-10-14 14:51:12.228073] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:37:31.700 [2024-10-14 14:51:12.228081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:37:31.700 request: 00:37:31.700 { 00:37:31.700 "name": "nvme0", 00:37:31.700 "trtype": "tcp", 00:37:31.700 "traddr": "127.0.0.1", 00:37:31.700 "adrfam": "ipv4", 00:37:31.700 "trsvcid": "4420", 00:37:31.700 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:31.700 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:31.700 "prchk_reftag": false, 00:37:31.700 "prchk_guard": false, 00:37:31.700 "hdgst": false, 00:37:31.700 "ddgst": false, 00:37:31.700 "psk": ":spdk-test:key1", 00:37:31.700 "allow_unrecognized_csi": false, 00:37:31.700 "method": "bdev_nvme_attach_controller", 00:37:31.700 "req_id": 1 00:37:31.700 } 00:37:31.700 Got JSON-RPC error response 00:37:31.700 response: 00:37:31.700 { 00:37:31.700 "code": -5, 00:37:31.700 "message": "Input/output error" 00:37:31.700 } 00:37:31.700 14:51:12 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:37:31.700 14:51:12 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:31.700 14:51:12 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:31.700 14:51:12 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:31.700 14:51:12 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:37:31.700 14:51:12 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:37:31.700 14:51:12 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:37:31.700 14:51:12 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:37:31.700 14:51:12 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:37:31.700 14:51:12 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:37:31.700 14:51:12 keyring_linux -- keyring/linux.sh@33 -- # sn=310758096 00:37:31.700 14:51:12 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 310758096 00:37:31.700 1 links removed 00:37:31.700 14:51:12 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:37:31.700 14:51:12 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:37:31.700 
14:51:12 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:37:31.700 14:51:12 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:37:31.700 14:51:12 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:37:31.700 14:51:12 keyring_linux -- keyring/linux.sh@33 -- # sn=157330285 00:37:31.700 14:51:12 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 157330285 00:37:31.700 1 links removed 00:37:31.700 14:51:12 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3722752 00:37:31.700 14:51:12 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 3722752 ']' 00:37:31.700 14:51:12 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 3722752 00:37:31.700 14:51:12 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:37:31.700 14:51:12 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:31.700 14:51:12 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3722752 00:37:31.700 14:51:12 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:31.700 14:51:12 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:31.700 14:51:12 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3722752' 00:37:31.700 killing process with pid 3722752 00:37:31.700 14:51:12 keyring_linux -- common/autotest_common.sh@969 -- # kill 3722752 00:37:31.700 Received shutdown signal, test time was about 1.000000 seconds 00:37:31.700 00:37:31.700 Latency(us) 00:37:31.700 [2024-10-14T12:51:12.427Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:31.700 [2024-10-14T12:51:12.427Z] =================================================================================================================== 00:37:31.700 [2024-10-14T12:51:12.427Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:31.700 14:51:12 keyring_linux -- common/autotest_common.sh@974 -- # wait 3722752 
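The trace above runs `check_keys` from `keyring/linux.sh`: it counts the keys reported by the bperf RPC listing and, when a key name is given, verifies that its serial matches what `keyctl search @s user <name>` finds in the kernel session keyring. A minimal self-contained sketch of that check, run against a mocked key table instead of the live bperf socket and jq pipeline (the key name and serial below are taken from the trace; the bash-array stand-in for the JSON listing is an illustrative assumption, not the test's actual plumbing):

```shell
# Hedged sketch of check_keys (keyring/linux.sh) against a mocked key list;
# the real helper queries bperf over JSON-RPC and parses the result with jq.
declare -A keyring=([":spdk-test:key0"]=310758096)

check_keys() {
  local count=$1 name=$2 kernel_sn=$3
  # The listing must hold exactly `count` keys.
  (( ${#keyring[@]} == count )) || return 1
  # With zero expected keys there is nothing further to verify.
  (( count == 0 )) && return 0
  # The named key's serial must match the kernel keyring's serial.
  [[ ${keyring[$name]:-} == "$kernel_sn" ]]
}

check_keys 1 :spdk-test:key0 310758096 && echo "key0 present"
```

The live test performs the same two-step agreement check: the SPDK-side listing and the kernel keyring must report the same serial before the key is trusted for the TLS PSK attach.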
00:37:31.700 14:51:12 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3722634 00:37:31.700 14:51:12 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 3722634 ']' 00:37:31.700 14:51:12 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 3722634 00:37:31.700 14:51:12 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:37:31.700 14:51:12 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:31.700 14:51:12 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3722634 00:37:31.961 14:51:12 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:31.961 14:51:12 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:31.961 14:51:12 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3722634' 00:37:31.961 killing process with pid 3722634 00:37:31.961 14:51:12 keyring_linux -- common/autotest_common.sh@969 -- # kill 3722634 00:37:31.961 14:51:12 keyring_linux -- common/autotest_common.sh@974 -- # wait 3722634 00:37:32.223 00:37:32.223 real 0m4.533s 00:37:32.223 user 0m8.723s 00:37:32.223 sys 0m1.291s 00:37:32.223 14:51:12 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:32.223 14:51:12 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:32.223 ************************************ 00:37:32.223 END TEST keyring_linux 00:37:32.223 ************************************ 00:37:32.223 14:51:12 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:37:32.223 14:51:12 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:37:32.223 14:51:12 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:37:32.223 14:51:12 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:37:32.223 14:51:12 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:37:32.223 14:51:12 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:37:32.223 14:51:12 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:37:32.223 14:51:12 -- spdk/autotest.sh@342 -- # 
'[' 0 -eq 1 ']' 00:37:32.223 14:51:12 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:37:32.223 14:51:12 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:37:32.223 14:51:12 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:37:32.223 14:51:12 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:37:32.223 14:51:12 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:37:32.223 14:51:12 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:37:32.223 14:51:12 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:37:32.223 14:51:12 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:37:32.223 14:51:12 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:37:32.223 14:51:12 -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:32.223 14:51:12 -- common/autotest_common.sh@10 -- # set +x 00:37:32.223 14:51:12 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:37:32.223 14:51:12 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:37:32.223 14:51:12 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:37:32.223 14:51:12 -- common/autotest_common.sh@10 -- # set +x 00:37:40.367 INFO: APP EXITING 00:37:40.367 INFO: killing all VMs 00:37:40.367 INFO: killing vhost app 00:37:40.367 INFO: EXIT DONE 00:37:42.971 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:37:42.971 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:37:42.971 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:37:42.971 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:37:43.305 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:37:43.305 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:37:43.305 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:37:43.305 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:37:43.305 0000:65:00.0 (144d a80a): Already using the nvme driver 00:37:43.305 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:37:43.305 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:37:43.305 
0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:37:43.305 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:37:43.305 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:37:43.305 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:37:43.305 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:37:43.305 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:37:47.613 Cleaning 00:37:47.613 Removing: /var/run/dpdk/spdk0/config 00:37:47.613 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:37:47.613 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:37:47.613 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:37:47.613 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:37:47.613 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:37:47.613 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:37:47.613 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:37:47.613 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:37:47.613 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:37:47.613 Removing: /var/run/dpdk/spdk0/hugepage_info 00:37:47.613 Removing: /var/run/dpdk/spdk1/config 00:37:47.613 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:37:47.613 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:37:47.613 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:37:47.613 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:37:47.613 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:37:47.613 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:37:47.613 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:37:47.613 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:37:47.613 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:37:47.613 Removing: /var/run/dpdk/spdk1/hugepage_info 00:37:47.613 Removing: /var/run/dpdk/spdk2/config 00:37:47.613 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:37:47.613 
Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:37:47.613 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:37:47.613 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:37:47.613 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:37:47.613 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:37:47.613 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:37:47.613 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:37:47.613 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:37:47.613 Removing: /var/run/dpdk/spdk2/hugepage_info 00:37:47.613 Removing: /var/run/dpdk/spdk3/config 00:37:47.613 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:37:47.613 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:37:47.613 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:37:47.613 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:37:47.613 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:37:47.613 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:37:47.613 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:37:47.613 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:37:47.613 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:37:47.613 Removing: /var/run/dpdk/spdk3/hugepage_info 00:37:47.613 Removing: /var/run/dpdk/spdk4/config 00:37:47.613 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:37:47.613 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:37:47.613 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:37:47.613 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:37:47.613 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:37:47.613 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:37:47.613 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:37:47.613 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:37:47.613 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:37:47.613 Removing: /var/run/dpdk/spdk4/hugepage_info 
00:37:47.613 Removing: /dev/shm/bdev_svc_trace.1 00:37:47.613 Removing: /dev/shm/nvmf_trace.0 00:37:47.613 Removing: /dev/shm/spdk_tgt_trace.pid3146706 00:37:47.613 Removing: /var/run/dpdk/spdk0 00:37:47.613 Removing: /var/run/dpdk/spdk1 00:37:47.613 Removing: /var/run/dpdk/spdk2 00:37:47.613 Removing: /var/run/dpdk/spdk3 00:37:47.613 Removing: /var/run/dpdk/spdk4 00:37:47.613 Removing: /var/run/dpdk/spdk_pid3145207 00:37:47.613 Removing: /var/run/dpdk/spdk_pid3146706 00:37:47.613 Removing: /var/run/dpdk/spdk_pid3147355 00:37:47.613 Removing: /var/run/dpdk/spdk_pid3148583 00:37:47.613 Removing: /var/run/dpdk/spdk_pid3148735 00:37:47.613 Removing: /var/run/dpdk/spdk_pid3149997 00:37:47.613 Removing: /var/run/dpdk/spdk_pid3150012 00:37:47.613 Removing: /var/run/dpdk/spdk_pid3150464 00:37:47.613 Removing: /var/run/dpdk/spdk_pid3151602 00:37:47.613 Removing: /var/run/dpdk/spdk_pid3152151 00:37:47.613 Removing: /var/run/dpdk/spdk_pid3152521 00:37:47.613 Removing: /var/run/dpdk/spdk_pid3152884 00:37:47.613 Removing: /var/run/dpdk/spdk_pid3153286 00:37:47.613 Removing: /var/run/dpdk/spdk_pid3153687 00:37:47.613 Removing: /var/run/dpdk/spdk_pid3154040 00:37:47.613 Removing: /var/run/dpdk/spdk_pid3154353 00:37:47.613 Removing: /var/run/dpdk/spdk_pid3154615 00:37:47.613 Removing: /var/run/dpdk/spdk_pid3155789 00:37:47.613 Removing: /var/run/dpdk/spdk_pid3159659 00:37:47.613 Removing: /var/run/dpdk/spdk_pid3160031 00:37:47.613 Removing: /var/run/dpdk/spdk_pid3160389 00:37:47.613 Removing: /var/run/dpdk/spdk_pid3160469 00:37:47.613 Removing: /var/run/dpdk/spdk_pid3161102 00:37:47.613 Removing: /var/run/dpdk/spdk_pid3161116 00:37:47.613 Removing: /var/run/dpdk/spdk_pid3161684 00:37:47.613 Removing: /var/run/dpdk/spdk_pid3161826 00:37:47.613 Removing: /var/run/dpdk/spdk_pid3162185 00:37:47.613 Removing: /var/run/dpdk/spdk_pid3162267 00:37:47.613 Removing: /var/run/dpdk/spdk_pid3162563 00:37:47.613 Removing: /var/run/dpdk/spdk_pid3162577 00:37:47.613 Removing: 
/var/run/dpdk/spdk_pid3163157 00:37:47.613 Removing: /var/run/dpdk/spdk_pid3163370 00:37:47.613 Removing: /var/run/dpdk/spdk_pid3163771 00:37:47.613 Removing: /var/run/dpdk/spdk_pid3168590 00:37:47.613 Removing: /var/run/dpdk/spdk_pid3173816 00:37:47.613 Removing: /var/run/dpdk/spdk_pid3186006 00:37:47.613 Removing: /var/run/dpdk/spdk_pid3186879 00:37:47.613 Removing: /var/run/dpdk/spdk_pid3192060 00:37:47.613 Removing: /var/run/dpdk/spdk_pid3192541 00:37:47.613 Removing: /var/run/dpdk/spdk_pid3197852 00:37:47.613 Removing: /var/run/dpdk/spdk_pid3204998 00:37:47.613 Removing: /var/run/dpdk/spdk_pid3208198 00:37:47.613 Removing: /var/run/dpdk/spdk_pid3221083 00:37:47.613 Removing: /var/run/dpdk/spdk_pid3232137 00:37:47.613 Removing: /var/run/dpdk/spdk_pid3234165 00:37:47.613 Removing: /var/run/dpdk/spdk_pid3235192 00:37:47.613 Removing: /var/run/dpdk/spdk_pid3256376 00:37:47.613 Removing: /var/run/dpdk/spdk_pid3261374 00:37:47.613 Removing: /var/run/dpdk/spdk_pid3318835 00:37:47.613 Removing: /var/run/dpdk/spdk_pid3325862 00:37:47.613 Removing: /var/run/dpdk/spdk_pid3333170 00:37:47.613 Removing: /var/run/dpdk/spdk_pid3340704 00:37:47.613 Removing: /var/run/dpdk/spdk_pid3340706 00:37:47.613 Removing: /var/run/dpdk/spdk_pid3341711 00:37:47.613 Removing: /var/run/dpdk/spdk_pid3342717 00:37:47.613 Removing: /var/run/dpdk/spdk_pid3343721 00:37:47.613 Removing: /var/run/dpdk/spdk_pid3344391 00:37:47.613 Removing: /var/run/dpdk/spdk_pid3344414 00:37:47.613 Removing: /var/run/dpdk/spdk_pid3344729 00:37:47.613 Removing: /var/run/dpdk/spdk_pid3344897 00:37:47.613 Removing: /var/run/dpdk/spdk_pid3345036 00:37:47.613 Removing: /var/run/dpdk/spdk_pid3346065 00:37:47.613 Removing: /var/run/dpdk/spdk_pid3347072 00:37:47.613 Removing: /var/run/dpdk/spdk_pid3348079 00:37:47.613 Removing: /var/run/dpdk/spdk_pid3348749 00:37:47.613 Removing: /var/run/dpdk/spdk_pid3348757 00:37:47.613 Removing: /var/run/dpdk/spdk_pid3349088 00:37:47.613 Removing: /var/run/dpdk/spdk_pid3350531 
00:37:47.613 Removing: /var/run/dpdk/spdk_pid3351756 00:37:47.613 Removing: /var/run/dpdk/spdk_pid3361661 00:37:47.613 Removing: /var/run/dpdk/spdk_pid3397877 00:37:47.613 Removing: /var/run/dpdk/spdk_pid3403354 00:37:47.613 Removing: /var/run/dpdk/spdk_pid3405351 00:37:47.613 Removing: /var/run/dpdk/spdk_pid3407568 00:37:47.613 Removing: /var/run/dpdk/spdk_pid3407809 00:37:47.613 Removing: /var/run/dpdk/spdk_pid3408025 00:37:47.613 Removing: /var/run/dpdk/spdk_pid3408457 00:37:47.613 Removing: /var/run/dpdk/spdk_pid3409141 00:37:47.873 Removing: /var/run/dpdk/spdk_pid3411330 00:37:47.873 Removing: /var/run/dpdk/spdk_pid3412411 00:37:47.873 Removing: /var/run/dpdk/spdk_pid3412789 00:37:47.873 Removing: /var/run/dpdk/spdk_pid3415448 00:37:47.873 Removing: /var/run/dpdk/spdk_pid3415961 00:37:47.873 Removing: /var/run/dpdk/spdk_pid3416911 00:37:47.873 Removing: /var/run/dpdk/spdk_pid3421893 00:37:47.873 Removing: /var/run/dpdk/spdk_pid3428634 00:37:47.873 Removing: /var/run/dpdk/spdk_pid3428636 00:37:47.873 Removing: /var/run/dpdk/spdk_pid3428637 00:37:47.873 Removing: /var/run/dpdk/spdk_pid3433400 00:37:47.873 Removing: /var/run/dpdk/spdk_pid3443922 00:37:47.873 Removing: /var/run/dpdk/spdk_pid3448744 00:37:47.874 Removing: /var/run/dpdk/spdk_pid3456026 00:37:47.874 Removing: /var/run/dpdk/spdk_pid3457632 00:37:47.874 Removing: /var/run/dpdk/spdk_pid3459917 00:37:47.874 Removing: /var/run/dpdk/spdk_pid3461445 00:37:47.874 Removing: /var/run/dpdk/spdk_pid3467288 00:37:47.874 Removing: /var/run/dpdk/spdk_pid3472359 00:37:47.874 Removing: /var/run/dpdk/spdk_pid3481574 00:37:47.874 Removing: /var/run/dpdk/spdk_pid3481584 00:37:47.874 Removing: /var/run/dpdk/spdk_pid3486727 00:37:47.874 Removing: /var/run/dpdk/spdk_pid3487060 00:37:47.874 Removing: /var/run/dpdk/spdk_pid3487361 00:37:47.874 Removing: /var/run/dpdk/spdk_pid3487733 00:37:47.874 Removing: /var/run/dpdk/spdk_pid3487753 00:37:47.874 Removing: /var/run/dpdk/spdk_pid3493501 00:37:47.874 Removing: 
/var/run/dpdk/spdk_pid3494161 00:37:47.874 Removing: /var/run/dpdk/spdk_pid3499570 00:37:47.874 Removing: /var/run/dpdk/spdk_pid3502841 00:37:47.874 Removing: /var/run/dpdk/spdk_pid3509353 00:37:47.874 Removing: /var/run/dpdk/spdk_pid3516062 00:37:47.874 Removing: /var/run/dpdk/spdk_pid3526557 00:37:47.874 Removing: /var/run/dpdk/spdk_pid3535235 00:37:47.874 Removing: /var/run/dpdk/spdk_pid3535237 00:37:47.874 Removing: /var/run/dpdk/spdk_pid3558904 00:37:47.874 Removing: /var/run/dpdk/spdk_pid3559658 00:37:47.874 Removing: /var/run/dpdk/spdk_pid3560338 00:37:47.874 Removing: /var/run/dpdk/spdk_pid3561024 00:37:47.874 Removing: /var/run/dpdk/spdk_pid3562026 00:37:47.874 Removing: /var/run/dpdk/spdk_pid3562588 00:37:47.874 Removing: /var/run/dpdk/spdk_pid3563439 00:37:47.874 Removing: /var/run/dpdk/spdk_pid3564129 00:37:47.874 Removing: /var/run/dpdk/spdk_pid3569792 00:37:47.874 Removing: /var/run/dpdk/spdk_pid3570147 00:37:47.874 Removing: /var/run/dpdk/spdk_pid3577458 00:37:47.874 Removing: /var/run/dpdk/spdk_pid3577633 00:37:47.874 Removing: /var/run/dpdk/spdk_pid3584265 00:37:47.874 Removing: /var/run/dpdk/spdk_pid3589565 00:37:47.874 Removing: /var/run/dpdk/spdk_pid3601056 00:37:47.874 Removing: /var/run/dpdk/spdk_pid3601848 00:37:47.874 Removing: /var/run/dpdk/spdk_pid3607113 00:37:47.874 Removing: /var/run/dpdk/spdk_pid3607493 00:37:47.874 Removing: /var/run/dpdk/spdk_pid3612491 00:37:47.874 Removing: /var/run/dpdk/spdk_pid3619501 00:37:47.874 Removing: /var/run/dpdk/spdk_pid3622894 00:37:47.874 Removing: /var/run/dpdk/spdk_pid3634998 00:37:47.874 Removing: /var/run/dpdk/spdk_pid3645802 00:37:47.874 Removing: /var/run/dpdk/spdk_pid3647810 00:37:47.874 Removing: /var/run/dpdk/spdk_pid3648817 00:37:47.874 Removing: /var/run/dpdk/spdk_pid3668453 00:37:47.874 Removing: /var/run/dpdk/spdk_pid3673316 00:37:48.134 Removing: /var/run/dpdk/spdk_pid3676978 00:37:48.134 Removing: /var/run/dpdk/spdk_pid3684304 00:37:48.134 Removing: /var/run/dpdk/spdk_pid3684310 
00:37:48.134 Removing: /var/run/dpdk/spdk_pid3690314 00:37:48.134 Removing: /var/run/dpdk/spdk_pid3692702 00:37:48.134 Removing: /var/run/dpdk/spdk_pid3694983 00:37:48.134 Removing: /var/run/dpdk/spdk_pid3696339 00:37:48.134 Removing: /var/run/dpdk/spdk_pid3698691 00:37:48.134 Removing: /var/run/dpdk/spdk_pid3700210 00:37:48.134 Removing: /var/run/dpdk/spdk_pid3709980 00:37:48.134 Removing: /var/run/dpdk/spdk_pid3710615 00:37:48.134 Removing: /var/run/dpdk/spdk_pid3711281 00:37:48.134 Removing: /var/run/dpdk/spdk_pid3714257 00:37:48.134 Removing: /var/run/dpdk/spdk_pid3714768 00:37:48.134 Removing: /var/run/dpdk/spdk_pid3715274 00:37:48.134 Removing: /var/run/dpdk/spdk_pid3720085 00:37:48.134 Removing: /var/run/dpdk/spdk_pid3720197 00:37:48.134 Removing: /var/run/dpdk/spdk_pid3721970 00:37:48.134 Removing: /var/run/dpdk/spdk_pid3722634 00:37:48.134 Removing: /var/run/dpdk/spdk_pid3722752 00:37:48.134 Clean 00:37:48.134 14:51:28 -- common/autotest_common.sh@1451 -- # return 0 00:37:48.134 14:51:28 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:37:48.134 14:51:28 -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:48.134 14:51:28 -- common/autotest_common.sh@10 -- # set +x 00:37:48.134 14:51:28 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:37:48.134 14:51:28 -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:48.134 14:51:28 -- common/autotest_common.sh@10 -- # set +x 00:37:48.134 14:51:28 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:37:48.134 14:51:28 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:37:48.134 14:51:28 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:37:48.134 14:51:28 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:37:48.134 14:51:28 -- spdk/autotest.sh@394 -- # hostname 00:37:48.395 14:51:28 -- spdk/autotest.sh@394 -- # lcov --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-12 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:37:48.395 geninfo: WARNING: invalid characters removed from testname! 00:38:14.973 14:51:53 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:16.355 14:51:56 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:18.265 14:51:58 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:20.175 14:52:00 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc 
genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:38:21.555 14:52:02 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:38:23.463 14:52:03 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:38:24.844 14:52:05 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:38:24.844 14:52:05 -- common/autotest_common.sh@1690 -- $ [[ y == y ]]
00:38:24.844 14:52:05 -- common/autotest_common.sh@1691 -- $ lcov --version
00:38:24.844 14:52:05 -- common/autotest_common.sh@1691 -- $ awk '{print $NF}'
00:38:24.844 14:52:05 -- common/autotest_common.sh@1691 -- $ lt 1.15 2
00:38:24.844 14:52:05 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2
00:38:24.844 14:52:05 -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:38:24.844 14:52:05 -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:38:24.844 14:52:05 -- scripts/common.sh@336 -- $ IFS=.-:
00:38:24.844 14:52:05 -- scripts/common.sh@336 -- $ read -ra ver1
00:38:24.844 14:52:05 -- scripts/common.sh@337 -- $ IFS=.-:
00:38:24.844 14:52:05 -- scripts/common.sh@337 -- $ read -ra ver2
00:38:24.844 14:52:05 -- scripts/common.sh@338 -- $ local 'op=<'
00:38:24.844 14:52:05 -- scripts/common.sh@340 -- $ ver1_l=2
00:38:24.844 14:52:05 -- scripts/common.sh@341 -- $ ver2_l=1
00:38:24.844 14:52:05 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:38:24.844 14:52:05 -- scripts/common.sh@344 -- $ case "$op" in
00:38:24.844 14:52:05 -- scripts/common.sh@345 -- $ : 1
00:38:24.844 14:52:05 -- scripts/common.sh@364 -- $ (( v = 0 ))
00:38:24.844 14:52:05 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:38:24.844 14:52:05 -- scripts/common.sh@365 -- $ decimal 1
00:38:24.844 14:52:05 -- scripts/common.sh@353 -- $ local d=1
00:38:24.844 14:52:05 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]]
00:38:24.844 14:52:05 -- scripts/common.sh@355 -- $ echo 1
00:38:24.844 14:52:05 -- scripts/common.sh@365 -- $ ver1[v]=1
00:38:24.844 14:52:05 -- scripts/common.sh@366 -- $ decimal 2
00:38:24.844 14:52:05 -- scripts/common.sh@353 -- $ local d=2
00:38:24.844 14:52:05 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]]
00:38:24.844 14:52:05 -- scripts/common.sh@355 -- $ echo 2
00:38:24.844 14:52:05 -- scripts/common.sh@366 -- $ ver2[v]=2
00:38:24.844 14:52:05 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:38:24.844 14:52:05 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:38:24.844 14:52:05 -- scripts/common.sh@368 -- $ return 0
00:38:24.844 14:52:05 -- common/autotest_common.sh@1692 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:38:24.844 14:52:05 -- common/autotest_common.sh@1704 -- $ export 'LCOV_OPTS=
00:38:24.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:38:24.844 --rc genhtml_branch_coverage=1
00:38:24.844 --rc genhtml_function_coverage=1
00:38:24.844 --rc genhtml_legend=1
00:38:24.844 --rc geninfo_all_blocks=1
00:38:24.844 --rc geninfo_unexecuted_blocks=1
00:38:24.844 
00:38:24.844 '
00:38:24.844 14:52:05 -- common/autotest_common.sh@1704 -- $ LCOV_OPTS='
00:38:24.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:38:24.844 --rc genhtml_branch_coverage=1
00:38:24.844 --rc genhtml_function_coverage=1
00:38:24.844 --rc genhtml_legend=1
00:38:24.844 --rc geninfo_all_blocks=1
00:38:24.844 --rc geninfo_unexecuted_blocks=1
00:38:24.844 
00:38:24.844 '
00:38:24.844 14:52:05 -- common/autotest_common.sh@1705 -- $ export 'LCOV=lcov
00:38:24.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:38:24.844 --rc genhtml_branch_coverage=1
00:38:24.844 --rc genhtml_function_coverage=1
00:38:24.844 --rc genhtml_legend=1
00:38:24.844 --rc geninfo_all_blocks=1
00:38:24.844 --rc geninfo_unexecuted_blocks=1
00:38:24.844 
00:38:24.844 '
00:38:24.844 14:52:05 -- common/autotest_common.sh@1705 -- $ LCOV='lcov
00:38:24.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:38:24.844 --rc genhtml_branch_coverage=1
00:38:24.844 --rc genhtml_function_coverage=1
00:38:24.844 --rc genhtml_legend=1
00:38:24.844 --rc geninfo_all_blocks=1
00:38:24.844 --rc geninfo_unexecuted_blocks=1
00:38:24.844 
00:38:24.844 '
00:38:24.844 14:52:05 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:38:24.844 14:52:05 -- scripts/common.sh@15 -- $ shopt -s extglob
00:38:24.844 14:52:05 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:38:24.844 14:52:05 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:38:24.844 14:52:05 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:38:24.844 14:52:05 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:38:24.844 14:52:05 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:38:24.844 14:52:05 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:38:24.844 14:52:05 -- paths/export.sh@5 -- $ export PATH
00:38:24.844 14:52:05 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:38:24.844 14:52:05 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:38:24.844 14:52:05 -- common/autobuild_common.sh@486 -- $ date +%s
00:38:24.844 14:52:05 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728910325.XXXXXX
00:38:25.106 14:52:05 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728910325.41w62X
00:38:25.106 14:52:05 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:38:25.106 14:52:05 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
00:38:25.106 14:52:05 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:38:25.106 14:52:05 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:38:25.106 14:52:05 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:38:25.106 14:52:05 -- common/autobuild_common.sh@502 -- $ get_config_params
00:38:25.106 14:52:05 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:38:25.106 14:52:05 -- common/autotest_common.sh@10 -- $ set +x
00:38:25.106 14:52:05 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:38:25.106 14:52:05 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:38:25.106 14:52:05 -- pm/common@17 -- $ local monitor
00:38:25.106 14:52:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:38:25.106 14:52:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:38:25.106 14:52:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:38:25.106 14:52:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:38:25.106 14:52:05 -- pm/common@21 -- $ date +%s
00:38:25.106 14:52:05 -- pm/common@25 -- $ sleep 1
00:38:25.106 14:52:05 -- pm/common@21 -- $ date +%s
00:38:25.106 14:52:05 -- pm/common@21 -- $ date +%s
00:38:25.106 14:52:05 -- pm/common@21 -- $ date +%s
00:38:25.106 14:52:05 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728910325
00:38:25.106 14:52:05 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728910325
00:38:25.106 14:52:05 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728910325
00:38:25.106 14:52:05 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728910325
00:38:25.106 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728910325_collect-cpu-load.pm.log
00:38:25.106 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728910325_collect-vmstat.pm.log
00:38:25.106 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728910325_collect-cpu-temp.pm.log
00:38:25.106 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728910325_collect-bmc-pm.bmc.pm.log
00:38:26.048 14:52:06 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:38:26.048 14:52:06 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]]
00:38:26.048 14:52:06 -- spdk/autopackage.sh@14 -- $ timing_finish
00:38:26.048 14:52:06 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:38:26.048 14:52:06 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:38:26.048 14:52:06 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:38:26.048 14:52:06 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:38:26.048 14:52:06 -- pm/common@29 -- $ signal_monitor_resources TERM
00:38:26.048 14:52:06 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:38:26.048 14:52:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:38:26.048 14:52:06 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:38:26.048 14:52:06 -- pm/common@44 -- $ pid=3736044
00:38:26.048 14:52:06 -- pm/common@50 -- $ kill -TERM 3736044
00:38:26.048 14:52:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:38:26.048 14:52:06 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:38:26.048 14:52:06 -- pm/common@44 -- $ pid=3736045
00:38:26.048 14:52:06 -- pm/common@50 -- $ kill -TERM 3736045
00:38:26.048 14:52:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:38:26.048 14:52:06 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:38:26.048 14:52:06 -- pm/common@44 -- $ pid=3736047
00:38:26.048 14:52:06 -- pm/common@50 -- $ kill -TERM 3736047
00:38:26.048 14:52:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:38:26.048 14:52:06 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:38:26.048 14:52:06 -- pm/common@44 -- $ pid=3736070
00:38:26.048 14:52:06 -- pm/common@50 -- $ sudo -E kill -TERM 3736070
00:38:26.048 + [[ -n 3060419 ]]
00:38:26.048 + sudo kill 3060419
00:38:26.059 [Pipeline] }
00:38:26.075 [Pipeline] // stage
00:38:26.081 [Pipeline] }
00:38:26.095 [Pipeline] // timeout
00:38:26.101 [Pipeline] }
00:38:26.116 [Pipeline] // catchError
00:38:26.121 [Pipeline] }
00:38:26.136 [Pipeline] // wrap
00:38:26.142 [Pipeline] }
00:38:26.155 [Pipeline] // catchError
00:38:26.164 [Pipeline] stage
00:38:26.166 [Pipeline] { (Epilogue)
00:38:26.179 [Pipeline] catchError
00:38:26.181 [Pipeline] {
00:38:26.193 [Pipeline] echo
00:38:26.195 Cleanup processes
00:38:26.200 [Pipeline] sh
00:38:26.490 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:38:26.490 3736197 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:38:26.490 3736742 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:38:26.505 [Pipeline] sh
00:38:26.793 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:38:26.793 ++ grep -v 'sudo pgrep'
00:38:26.793 ++ awk '{print $1}'
00:38:26.793 + sudo kill -9 3736197
00:38:26.805 [Pipeline] sh
00:38:27.093 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:38:39.331 [Pipeline] sh
00:38:39.620 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:38:39.620 Artifacts sizes are good
00:38:39.635 [Pipeline] archiveArtifacts
00:38:39.642 Archiving artifacts
00:38:39.782 [Pipeline] sh
00:38:40.068 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:38:40.083 [Pipeline] cleanWs
00:38:40.094 [WS-CLEANUP] Deleting project workspace...
00:38:40.095 [WS-CLEANUP] Deferred wipeout is used...
00:38:40.102 [WS-CLEANUP] done
00:38:40.103 [Pipeline] }
00:38:40.121 [Pipeline] // catchError
00:38:40.131 [Pipeline] sh
00:38:40.418 + logger -p user.info -t JENKINS-CI
00:38:40.427 [Pipeline] }
00:38:40.441 [Pipeline] // stage
00:38:40.446 [Pipeline] }
00:38:40.459 [Pipeline] // node
00:38:40.465 [Pipeline] End of Pipeline
00:38:40.507 Finished: SUCCESS
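The `cmp_versions 1.15 '<' 2` xtrace near the top of this excerpt steps through a component-wise dotted-version comparison: each version string is split on the characters `.-:`, and the components are compared numerically from left to right. The following is a minimal standalone sketch of that logic; it is a hypothetical reimplementation for illustration, not the actual `scripts/common.sh` helper, and it simplifies by treating every component as numeric (the real script validates components with a `decimal` helper, as the trace shows):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of a dotted-version comparison like the one the
# xtrace walks through. Assumes purely numeric components.
cmp_versions() {
    local -a ver1 ver2
    local op=$2 v lt=0 gt=0
    # Split both versions on '.', '-', or ':' into arrays.
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    # Walk up to the longer of the two component lists.
    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing components count as 0
        if (( a > b )); then gt=1; break; fi
        if (( a < b )); then lt=1; break; fi
    done
    # Return success iff the requested relation holds.
    case "$op" in
        '<')  (( lt == 1 )) ;;
        '>')  (( gt == 1 )) ;;
        '==') (( lt == 0 && gt == 0 )) ;;
    esac
}

cmp_versions 1.15 '<' 2 && echo "1.15 < 2"
```

In the trace above, `ver1_l=2` and `ver2_l=1` correspond to the component counts of `1.15` and `2`, and the first component comparison (`1 < 2`) already decides the result, which is why `lt 1.15 2` succeeds and the `--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1` options are added to `LCOV_OPTS`.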